Like any sensible person, I worry about the imminent arrival of Skynet and the Cylons, in the form of artificial intelligences that could be physically, intellectually and perhaps, as J. Storrs Hall argues, even morally superior to us. I just wasn’t ready for robot snakes.
It’s easier, somehow, to worry about robots that look and act like cuddly animals, or like people, as Amy Harmon reported a few weeks ago. That sort of anxiety flatters my moral view of myself: I imagine that I treat others well, so I fret that robots, as they become more like me, will worm their way into my heart.
But the real reason it’s easy to treat robots as people is that we’re used to treating people as robots. The cop who tickets me for an expired parking meter is just as real as I am, but the interaction we have is as mechanical as a light switch: I am the unit who meets the criteria for a summons; he is the unit who dispenses the summons. I mean, unless you know me personally, do you really care whether I live or die? You’re reading this blog to find out something you didn’t know. I’m the device that provides it. And to me you’re a blip on the hits counter. There is a set of characteristics that defines us as living, breathing creatures, but those characteristics don’t automatically transfer to our relationships.
Still, people prefer to believe that we could humanize our robo-relationships if we felt like it. So the tech-support guy gives me his (maybe fake) name and asks mine; the underpaid drone with the clipboard on the street makes eye contact and asks how I am before nudging me to donate. These gestures are window dressing on mechanical transactions, reassuring all parties that they’re still human. I have 300 of these support calls to answer today, but if I wanted to I could be your friend.
Why do we need this? I wonder whether the human mind has a problem comprehending indifference. Perhaps we’re innately prone to think, whenever we encounter signs of intelligence, that it’s about us. If a robot or piece of software or civilized slime mold on Betelgeuse or a deity is smart, we assume it will want to trade phone numbers, talk shop and otherwise demonstrate its interest in a relationship with us.
The creep factor in robot-snake videos (including these from Carnegie Mellon’s Biorobotics Lab) could be the absence of any reassurance that intelligence must be interested in us. A robot snake doesn’t look like anything a person would want to be, or hug, or understand. It’s the image of an artificial intelligence that’s neither friendly nor unfriendly but—worst of all, and hardest to comprehend—doesn’t care frak-all about us.