Incognito Supercomputers and the Singularity
Thanks to rapid advancements in artificial intelligence and computer processing power, machines are now evolving faster than humans. At some point within the next decade, according to proponents of the Singularity, machines will become so intelligent that they will start making decisions for us in ways that we could never imagine or understand. The conventional wisdom, of course, is that humans will be able to recognize that day and adjust things accordingly so that we’re still in charge here on Earth. (Isn’t that what the Turing Test was for?) As the line between human intelligence and machine intelligence continues to blur, though, it is no longer so obvious that we will be able to recognize the day when our machines become smarter than we are. The supercomputers will be in control and incognito.
Being relegated to the #2 intelligent species on the planet by unrecognizable supercomputers could be a downright scary proposition. At least that’s the provocative thesis that some – like Jaan Tallinn (a founding engineer of Skype and Kazaa) – have put forth. As Tallinn explains it, all of these rapid gains in computer intelligence have a drawback: computers may decide that they are smarter than humans and start to make decisions for us in ways that threaten our future existence – in much the same way that we humans have been making decisions for other species on Earth with all of our relentless industrialization. For example, computers could decide to start terraforming the Earth in order to protect future generations of humans – or they could decide that what’s good for our brains (eternal smiles and everlasting happiness) is just not possible with our bodies as currently configured.
The Singularity is not something to be feared – just something to be respected. Just like Prometheus and the gift of fire – you have to respect the gift and understand what it means for the future evolution of mankind. Computing super-intelligence is like the fire from the gods, and when you play with fire… well, you get the idea.
We can see signs of computing super-intelligence all around us, every day – but only in situations that humans have artificially created. For example, take chess. Chess is a human-created game with human-created rules – so we can easily judge machine intelligence by the degree to which computers are able to defeat humans at it. The same is true of a game like Jeopardy! – we have a clear and easy way to judge whether computers are superior to humans because there is a single outcome and an easily determined winner. Even in simple things – like asking a mobile phone app to pick the fastest route to work – we may admit that computers are superior to us, but we don’t feel particularly threatened by it, because there’s not much at stake and we can recognize when something doesn’t make sense.
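To make the route example concrete, here is a minimal sketch of the kind of shortest-path search a navigation app might run under the hood. This is an illustration, not any particular app’s algorithm: the road graph, place names, and travel times are invented, and the search shown is the classic Dijkstra approach.

```python
# A minimal sketch of how a navigation app might pick the "best" route:
# Dijkstra's shortest-path search over a tiny, made-up road graph.
# The place names and travel times below are hypothetical.
import heapq

# Each edge is (neighbor, travel_time_in_minutes).
ROADS = {
    "home":           [("main_st", 5), ("highway_onramp", 2)],
    "main_st":        [("downtown", 12)],
    "highway_onramp": [("downtown", 8), ("bypass", 6)],
    "bypass":         [("downtown", 7)],
    "downtown":       [("office", 3)],
    "office":         [],
}

def fastest_route(graph, start, goal):
    """Return (total_minutes, route) for the quickest path, or (inf, []) if unreachable."""
    queue = [(0, start, [start])]          # (elapsed minutes, node, path so far)
    best = {start: 0}                      # cheapest known time to each node
    while queue:
        elapsed, node, path = heapq.heappop(queue)
        if node == goal:
            return elapsed, path
        for neighbor, minutes in graph[node]:
            candidate = elapsed + minutes
            if candidate < best.get(neighbor, float("inf")):
                best[neighbor] = candidate
                heapq.heappush(queue, (candidate, neighbor, path + [neighbor]))
    return float("inf"), []

if __name__ == "__main__":
    minutes, route = fastest_route(ROADS, "home", "office")
    print(f"{minutes} minutes via {' -> '.join(route)}")
```

Running it prints the quickest path from “home” to “office” – a decision small enough that we can still check it by eye, which is exactly why it doesn’t feel threatening.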
But we are starting to use computers in situations other than chess or driving directions, where the outcomes are not so clear-cut, and where humans have something very real to lose or gain from a computer’s decision. When it comes to medicine, for example, the same artificial intelligence engine that won at Jeopardy! is now being used to consult on diseases. Your future health could be intertwined with a future decision made by a bunch of silicon. Or take the financial markets. Computerized algorithmic trading means that computers can trade faster and more effectively than humans, putting the fate of billion-dollar corporations literally in the (silicon) hands of computers. The idea of computers rationally and effectively figuring out the proper valuation of every business in the world may sound exhilarating – until you run into a “flash crash” every now and then.
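For a sense of how trading decisions get delegated to code, here is an equally hedged sketch: a toy moving-average crossover rule, the textbook example used to demonstrate an algorithmic trading signal. It is not how real high-frequency systems work – those operate in microseconds with far richer models – and the prices below are invented for illustration.

```python
# An illustrative sketch (not a real trading system): a simple moving-average
# crossover rule, often used to demonstrate rule-based algorithmic trading.
# The price series is invented for illustration.
from statistics import mean

def crossover_signals(prices, short_window=3, long_window=5):
    """Emit 'BUY' when the short-term average crosses above the long-term
    average, 'SELL' when it crosses below, and 'HOLD' otherwise."""
    signals = []
    prev_diff = None
    for i in range(long_window, len(prices) + 1):
        short_avg = mean(prices[i - short_window:i])   # recent trend
        long_avg = mean(prices[i - long_window:i])     # longer trend
        diff = short_avg - long_avg
        if prev_diff is not None and prev_diff <= 0 < diff:
            signals.append((i - 1, "BUY"))             # short avg crossed above
        elif prev_diff is not None and prev_diff >= 0 > diff:
            signals.append((i - 1, "SELL"))            # short avg crossed below
        else:
            signals.append((i - 1, "HOLD"))
        prev_diff = diff
    return signals

if __name__ == "__main__":
    hypothetical_prices = [100, 101, 99, 98, 97, 99, 102, 105, 104, 101, 97, 95]
    for day, action in crossover_signals(hypothetical_prices):
        print(f"day {day}: {action}")
```

Even in this toy version, the buy and sell decisions fall out of the arithmetic with no human in the loop – now imagine millions of such decisions per second.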
But maybe all those “flash crashes” and other hiccups are actually good things – subtle clues that humans can still recognize the machines around them. One day, however, humans may not be able to recognize when computers are making the decisions for us. As Tallinn puts it: “If you build machines that understand what humans are and they really have some distorted view of what we want, then we might end up being alive but not controlling the future.” Or, even scarier than the prospect of incognito supercomputers, we may not be able to understand why these computers are making certain decisions, because they are just so much smarter than we are. When that day comes, most of humanity may already be living inside the Matrix – with the rest of us headed out to explore deep space with David and the next generation of Weyland robots on the Prometheus.
image: Cyberman at Computer / Shutterstock