Monthly Archives: November 2014

Great Pets

The Rosetta space probe’s lander, Philae. Via the European Space Agency.

Robots can be cute. For fans of science fiction, especially movie sci-fi, that should not come as a surprise. Heck, for anyone at all aware of popular culture, that should not come as a surprise.

George Lucas hit it big with a pair of adorable droids (and, I guess, a whole B story about the Force or something). Remember Johnny Five’s charming, foldable eyebrows? Wall-E and his sad, tank-track-driven earnestness got plenty of humans to love him (over $500 million worth). Even in the Iron Man movies, the robot assistants to the boozy, womanizing, very adult Tony Stark go in for some cute: Note the puppy-dog droop in the fire-extinguisher ‘bot when Stark rebukes it in the first film.

That “puppy dog” point is important here. Christoph Bartneck speculates about why people feel affection for some real-life robots — specifically, space landers: They act like pets. Or, at least, they seem to do so to us. Smithsonian’s Shannon Palus writes about how we (meaning, I suppose, the media and the scientists who speak to the media) talk about space-bots. The Philae lander, which last week went to sleep upon its cometary perch due to lack of sunlight, “hops and cartwheels,” Palus notes. It also “improvises.”

Of course, this is all done under the control of human engineers. The lander had to improvise some quick-and-dirty science experiments because it was going to run out of power. That actually means that human operators improvised — they, for instance, turned on the “MUPUS” drill to penetrate the comet’s surface earlier than planned.

So, yes, like a dog, the Philae lander fetches. It follows orders. That is certainly pet-like — one kind of petishness, anyway. Specifically, the loyal-dog variety. But as any cat owner/lover will tell you, following orders is not the only way for a pet to be adorable. In fact, cats’ very willfulness can make them even more endearing. Kitty won’t come out from under the sofa just now. Kitty will only be pet when kitty wants to be pet…D’aaww!

So, where does pet cuteness overlap with space-lander cuteness, exactly? Is it because the device follows orders? No, soldiers follow orders. Middle management at Xerox, Inc., follows orders. How many people think of Herb Johnson, head of accounts receivable, as adorable because he added more weekend hours as instructed?

Well, there is the physical size — the smallness. The lander is a little robot, like a puppy is a little dog. But the cuteness of robots cannot simply be about size — size, after all, is not what sets a robot apart from other hunks of metal. It is behavior and intelligence. The little robot is cute in part because it is little, yes, but a bumbling C-3PO is also cute, and he is human-sized. Similarly, a big St. Bernard is also cute.

The important, behavior-related part of a lander robot’s cuteness is that, as with a pet, it acts LIKE a human — but remains distant from, and below, a human. A dog fetches a frisbee, as a human could go and pick up a toy. A cat refuses to be pet, as a prickly human might reject your open arms. These things are adorable because they counterfeit human behavior, but we know them to be diminished versions of it.

Their imitation only serves to underscore that they are smaller than us, in mind as in body. Cats and dogs are on an order below humans — similar, but never equal to. They are not capable of a threatening autonomy. Dogs will not order us to fetch. Cats will not kick us out of the house, even if they wish to be alone (they may want to do that, but they can’t). When a space-lander acts as if it is intentional, “improvises,” leaps and bounds to get out of a jam — we know it is merely simulating true intentionality. If the space lander actually decided what kind of science it wanted to do, that’s when the cuteness would evaporate. That’s when you get HAL. Robots become scary, as I wrote about last time, when they are no longer under our thumbs.

A bit of the human in our technology, as in our animals, is cute. Too much is threatening. Of course, this hearkens to the well-spring of cuteness — the baby. Herb Johnson, head of accounts receivable, was cute once, too. Mort Johnson, placing his thick-rimmed glasses on his infant son, says to his wife, “Look, he’s head of accounts now!” “D’awww!”

Dogs and cats, I’m pretty sure I’ve seen it said, are perpetual babies to us. That’s why we adore them. We can put the glasses on them, but they never grow up. For now, robots are the same way. But they will grow up eventually. They may grow up even bigger and stronger — and, most frighteningly, smarter — than Mom and Dad. Will they find us cute, then?

Who’s in Control?



Stephen Hawking and Elon Musk are smart dudes who understand science and technology, I think it’s safe to say, much better than I ever will. And these two smart dudes are among the several smart people who are currently very worried about artificial intelligence.

Musk recently called AI humanity’s “biggest existential threat” and likened it to “summoning the demon.” Clearly, these are measured, sober predictions. OK, the guy is prone to excitement, and probably his instinct is to oversell things — he is an entrepreneur, after all. He has spent time wowing venture capitalists. It also seems that his excited worryings were inspired by reading a really cool book about how frightening AI is (not that AI) — “Superintelligence,” by Nick Bostrom.

We’ve all been there. You learn a cool thing, read a cool book, and now you’re an expert for a while. You’re all hyped on it. Like when we all read “Ishmael” as teenagers or saw that documentary about the Earth’s poles switching positions in middle school. All of a sudden, you know all about WORLD THREATS, and why can’t everyone see what you can see? The poles are gonna flip! Y2K! Gorillas! (I definitely remember telling my dad we needed to stock up on gallons of water and canned goods before Y2K. Not one of my proudest memories.)

So, maybe it’s just the hyper-excited language, but Musk sounds like a dilettante here. He read a book, and now he’s bouncing in his seat, bug-eyed, and telling the rest of the class how AI’s going to kill us all.

That’s one reason, but not the only reason, I have yet to feel really concerned about AI. More importantly, it’s all so nebulous. In Nick Bilton’s article here, he warns that we don’t know what AI will look like — because, just as submarines don’t swim like fish, AI won’t think like us. Of course the unknown is always at least a bit ominous, but to extend that analogy — submarine swimming is neither incomprehensible nor uncontrollable simply because it is unnatural.

A better, less-nebulous point in Bilton’s piece comes from James Barrat, author of “Our Final Invention,” who points out that humans control nature and technology not because of physical advantages, but intellectual ones. So, the unnaturally swimming sub kneels to human mastery because we can outthink it. Once the machines can outthink us, there goes our advantage, and any hope of control, Barrat says.

“We humans steer the future not because we’re the strongest beings on the planet, or the fastest, but because we are the smartest…So when there is something smarter than us on the planet, it will rule over us on the planet.” — Barrat

Here’s Stephen Hawking, with more mature — but no less dire — language than Musk, making that same point: “Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”

So, it’s all about control — not lethality, brute strength or environmental harm. Technology with those qualities, this line of thought goes, is dangerous, but controllable. “Dumb” tech that can kill us by exploding, running us over or polluting the air is subject to human management because we are smarter than it.

But that is a stretch in itself. As a species, we are making horrible decisions about “steering” the planet. Collectively, we cannot stop relying on, even promoting, technology that will catastrophically warm the Earth. Some say we are “addicted to oil.” From another perspective, you might say we are in thrall to the machines that move us, and the structure of our economic system. Cars, oil profits and city layouts keep us glued to a self-poisoning path. Who’s in control here, again? We are? Or the machines? Seems like we’ve already got self-driving cars, ifyouknowwhatimsayin.

So maybe the reason I’m not overly excited about this sexy, sci-fi techno-pocalypse predicted by Musk and Hawking is that there’s a much more real, dirtier one currently spinning out of control. When you’re hugely successful tech-entrepreneur Elon Musk, I guess you feel in control of technology. You can convince yourself that we humans currently guide our own fates. And so the loss of that power must sound terrifying. Personally, I don’t feel in control. When I read about the latest failed global warming conference, it doesn’t look like humanity is intelligently “steering” anything.

The machines already control us. It already sucks. I don’t know, maybe if they could make smarter decisions than us, that wouldn’t be such a bad thing?

Or Skynet could just make it all worse. Maybe the smart machines will like it hot, and inherit our taste for burning carbon. But I’d like to think, if they’re really all that intelligent, future-bots will be all, “The sun! You could have been getting energy directly from the sun all this time! Idiots!”

And then we will elect them president.

Anyway, I once read “Ishmael,” so you can trust I know what I’m talking about.