Stephen Hawking and Elon Musk are smart dudes who understand science and technology, I think it’s safe to say, much better than I ever will. And these two smart dudes are some of the several smart people who are currently very worried about artificial intelligence.
Musk recently called AI humanity’s “biggest existential threat” and likened it to “summoning the demon.” Clearly, these are measured, sober predictions. OK, the guy is prone to excitement, and probably his instinct is to oversell things — he is an entrepreneur, after all. He has spent time wowing venture capitalists. It also seems that his excited worryings were inspired by reading a really cool book about how frightening AI is (not that AI) — “Superintelligence,” by Nick Bostrom.
We’ve all been there. You learn a cool thing, read a cool book, and now you’re an expert for a while. You’re all hyped on it. Like when we all read “Ishmael” as teenagers or saw that documentary about the Earth’s poles switching positions in middle school. All of a sudden, you know all about WORLD THREATS, and why can’t everyone see what you can see? The poles are gonna flip! Y2K! Gorillas! (I definitely remember telling my dad we needed to stock up on gallons of water and canned goods before Y2K. Not one of my proudest memories.)
So, maybe it’s just the hyper-excited language, but Musk sounds like a dilettante here. He read a book, and now he’s bouncing in his seat, bug-eyed, and telling the rest of the class how AI’s going to kill us all.
That’s one reason, but not the only one, that I have yet to feel really concerned about AI. More importantly, it’s all so nebulous. In Nick Bilton’s article here, he warns that we don’t know what AI will look like — because, just as submarines don’t swim like fish, AI won’t think like us. Of course the unknown is always at least a bit ominous, but to extend that analogy: submarine swimming is neither incomprehensible nor uncontrollable simply because it is unnatural.
A better, less-nebulous point in Bilton’s piece comes from James Barrat, author of “Our Final Invention,” who points out that humans control nature and technology not because of physical advantages, but intellectual ones. So, the unnaturally swimming sub kneels to human mastery because we can outthink it. Once the machines can outthink us, there goes our advantage, and any hope of control, Barrat says.
“We humans steer the future not because we’re the strongest beings on the planet, or the fastest, but because we are the smartest…So when there is something smarter than us on the planet, it will rule over us on the planet.” — Barrat
Here’s Stephen Hawking, with more mature — but no less dire — language than Musk, making that same point: “Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”
So, it’s all about control — not lethality, brute strength or environmental harm. Technology with those qualities, this line of thought goes, is dangerous, but controllable. “Dumb” tech that can kill us by exploding, running us over or polluting the air is subject to human management because we are smarter than it.
But that is a stretch in itself. As a species, we are making horrible decisions about “steering” the planet. Collectively, we cannot stop relying on, even promoting, technology that will catastrophically warm the Earth. Some say we are “addicted to oil.” From another perspective, you might say we are in thrall to the machines that move us, and the structure of our economic system. Cars, oil profits and city layouts keep us glued to a self-poisoning path. Who’s in control here, again? We are? Or the machines? Seems like we’ve already got self-driving cars, ifyouknowwhatimsayin.
So maybe the reason I’m not overly excited about this sexy, sci-fi techno-pocalypse predicted by Musk and Hawking is that there’s a much more real, dirtier one currently spinning out of control. When you’re hugely successful tech entrepreneur Elon Musk, I guess you feel in control of technology. You can convince yourself that we humans currently guide our own fates. And so the loss of that power must sound terrifying. Personally, I don’t feel in control. When I read about the latest failed global warming conference, it doesn’t look like humanity is intelligently “steering” anything.
The machines already control us. It already sucks. I don’t know, maybe if they could make smarter decisions than us, that wouldn’t be such a bad thing?
Or Skynet could just make it all worse. Maybe the smart machines will like it hot, and inherit our taste for burning carbon. But I’d like to think, if they’re really all that intelligent, future-bots will be all, “The sun! You could have been getting energy directly from the sun all this time! Idiots!”
And then we will elect them president.
Anyway, I once read “Ishmael,” so you can trust I know what I’m talking about.