Last Sunday, a particularly unusual DotA 2 tournament took place. DotA 2, a complicated real-time strategy game, is among the most popular e-sports in the world. The five players of one team—Blitz, Cap, Fogged, Merlini, and MoonMeander—were ranked in the 99.95th percentile, inarguably among the best DotA 2 players in the world. Nevertheless, their opponent defeated them in two out of three games, winning the tournament. An evenly matched game is supposed to take about 45 minutes, but these two were over in 14 and 21 minutes, respectively.
Their opponent was a team of five neural networks developed by Elon Musk's OpenAI, collectively referred to as OpenAI Five. In the run-up to Sunday's tournament, the networks played 180 years' worth of DotA matches against themselves every day, edging incrementally closer to mastery of the game. Its creators chose DotA as OpenAI Five's focus to mimic the incredibly variable and complex nature of the real world: if an A.I. is going to process and interact with the world rather than, say, plot a GPS course or play chess, open-ended video games are a good place to start.
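The core idea behind that training regimen—an agent improving by playing against copies of itself, with no human examples—can be illustrated with a toy far simpler than DotA. The sketch below runs "fictitious play" on rock-paper-scissors: each side best-responds to the other's history of moves, and the move frequencies drift toward the balanced 1/3-1/3-1/3 strategy. This is an illustrative stand-in, not OpenAI Five's actual method (which uses large-scale deep reinforcement learning); all names here are made up for the example.

```python
import random
from collections import Counter

# Move X beats BEATS[X]; COUNTER[X] is the move that beats X.
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}
COUNTER = {loser: winner for winner, loser in BEATS.items()}

def best_response(opponent_counts):
    """Play the move that beats the opponent's most frequent move so far."""
    predicted = max(opponent_counts, key=opponent_counts.get)
    return COUNTER[predicted]

def self_play(rounds=30000, seed=0):
    rng = random.Random(seed)
    moves = list(BEATS)
    # Seed each side's history with one random move to avoid empty histories.
    history_a = Counter({rng.choice(moves): 1})
    history_b = Counter({rng.choice(moves): 1})
    for _ in range(rounds):
        a = best_response(history_b)  # A adapts to B's past play
        b = best_response(history_a)  # B adapts to A's past play
        history_a[a] += 1
        history_b[b] += 1
    total = sum(history_a.values())
    return {m: history_a[m] / total for m in moves}

freqs = self_play()
```

After enough rounds, no single move dominates: purely by playing itself, the agent is pushed toward the equilibrium strategy. Scaled up by many orders of magnitude, the same self-improvement loop is what "180 years of matches per day" buys.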
While this is an impressive technical achievement on its own, Musk’s victory tweet highlighted that this was just a stepping stone toward the future of A.I.
The neural interface Musk was referring to is being developed by Neuralink, another one of his startups. Neuralink's purpose is to develop a brain-machine interface (BMI) between the human mind and a generalized A.I.—essentially, an A.I. capable of thinking in general rather than thinking about how best to win a DotA match. By developing this interface between the brain and an A.I., Neuralink hopes both to augment human capabilities and to prevent one of Musk's often-invoked fears about the future: an immortal, evil, dictatorial A.I.
In a Y Combinator interview, Musk explained that his fear is that only one company or small set of individuals will have control over future A.I. technologies.
"I think that's very dangerous. It could also get stolen by somebody bad, like some evil dictator or country could send their intelligence agency to go steal it and gain control. It just becomes a very unstable situation, I think, if you've got any incredibly powerful A.I. … The best of the available alternatives that I can come up with—and maybe someone else can come up with a better approach or better outcome—is that we achieve democratization of A.I. technology."
In order to accomplish this democratization, Musk founded OpenAI, whose technology is completely available to the public and was used to build the DotA-playing OpenAI Five.
His second method of democratizing A.I. is through Neuralink. By connecting the human mind with cloud-based A.I., the theory is that we'll have the capabilities necessary to contend with any centralized, malevolent A.I.s out there.
69% of Americans are more worried than excited about brain-chip implants for increased cognitive abilities. (Source: Pew Research Center survey of U.S. adults conducted March 2-28, 2016.)
"[Elon Musk] believes that the solution to reduce existential risk is to be able to high bandwidth interface with AI. He thinks that if we can think with AI, it allows AI to function as a third layer in our brain, where we could have AI that's built for us ...
"That sounds kind of creepy but it makes sense if all of us are AI, there's not really anyone that can get control over all the AI in the world, monopolize it, and maybe do bad things with it because they are contending with millions and billions of people who have access to AI. It's much safer in a weird way, even though it gives us all a lot more power. It's like you don't want one Superman on earth, but if you have a billion Supermen then everything is okay because they check and balance each other."
This man/machine symbiosis might not be as crazy as it sounds. BMIs already exist as computer chips embedded in the brain that deliver targeted electrical impulses to help treat neurological disorders, like Parkinson's or epilepsy. In a recent study, three monkeys were implanted with BMIs that shared their individual sensory and motor information and sent signals to a mechanical arm. The monkeys were able to create a kind of shared neural net and move the mechanical arm in 3D space.
A brain-machine interface senses the electrical current and blood flow of this man's brain so he can move Honda's humanoid robot, Asimo. (Photo by YOSHIKAZU TSUNO/AFP/Getty Images).
Furthermore, the kind of interface that Neuralink will develop already exists in a primitive form. “We have a digital tertiary self in the form of our email capabilities, our computers, phones, applications," Musk said. “We're practically superhuman, [but] we're extremely bandwidth-constrained in that interface between the cortex and that tertiary digital form of [ourselves]." While our devices might give us access to much more information and much greater capabilities, we still have to use our fingertips to type, our eyes to take in alphabetic symbols, extra computing power in our brains to decode those symbols, and so on, which slows us down significantly.
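A back-of-envelope calculation makes this bandwidth gap concrete. The figures below—40 words per minute of typing, five characters per word, eight bits per character, a 100 Mbit/s home connection—are rough illustrative assumptions, not numbers from Musk or Neuralink, but they show the scale of the mismatch between how fast we think with machines and how fast we can talk to them.

```python
# Rough human "output bandwidth" via typing versus an ordinary network link.
# All constants are illustrative assumptions for the comparison.
WORDS_PER_MINUTE = 40
CHARS_PER_WORD = 5           # common average, including the trailing space
BITS_PER_CHAR = 8            # raw ASCII, ignoring compressibility

typing_bits_per_second = WORDS_PER_MINUTE * CHARS_PER_WORD * BITS_PER_CHAR / 60

broadband_bits_per_second = 100e6  # a typical 100 Mbit/s connection

# How many times wider the machine-to-machine channel is than the
# fingertip channel between cortex and computer.
ratio = broadband_bits_per_second / typing_bits_per_second
```

Under these assumptions a typist emits on the order of tens of bits per second, while the machines on either side of the keyboard exchange data millions of times faster—exactly the interface bottleneck Neuralink proposes to widen.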
As with everything Musk does, this is bold, forward-looking work, but it may not take into account the full ramifications of a BMI that links the human mind to a cloud-based A.I. Some researchers argue that an A.I./human symbiote would help forestall the singularity—the moment when an A.I. surpasses human intelligence—by essentially moving the goalposts: if humans can augment themselves by connecting to an A.I., the moment when an A.I. is more intelligent than an augmented human keeps receding.
However, there are other concerns directly tied to the particular approach Musk is taking here. The Morningside Group, an organization composed of neuroscientists, neurotechnologists, ethicists, and machine-intelligence engineers, describes several ethical concerns regarding the intimate connection of the human mind with a cloud-based A.I. First is privacy and consent. Consider all of the allegations surrounding Facebook's collection of data. Even if a future human/A.I. symbiote is open source and decentralized, whose personal data will be subsumed into the cloud? How will and how can one keep control over personal data in this scenario?
Agency and identity are another problematic issue. The worry isn't that others hooked up to the cloud might learn your identity; it's that you might lose your sense of self entirely. If everyone can interface with a cloud-based intelligence, an individual's intelligence might cease to mean anything.
There is also the question of how this augmentation will be used in society. Of particular concern is the idea that A.I.-enhanced human beings could be used in warfare, and a new augmentation arms race could begin.
What's more, the biases inherent in our society tend to be baked into the technologies we create. Google has shown lower-paying job ads to women, and algorithms used by U.S. law enforcement disproportionately predict that black offenders will re-offend compared with white offenders accused of the same crime. An A.I. might be able to sidestep these biases objectively, but that shouldn't be assumed.
The trouble with all of this is that there is simply no objective way to know in advance how A.I. will impact our society; the shift may be too radical for us to comprehend. At the same time, these changes will happen, and we must prepare for them. Through OpenAI and Neuralink, Musk appears to be doing just that—trying to shape the course of A.I. development from its inception.