
Hawking, Musk Draft a Letter Warning of the Coming AI Arms Race

The threat is real, and many scientists and engineers stand behind them.

Stephen Hawking and Elon Musk have made their mistrust of artificial intelligence well-known. The two, along with a group of equally concerned scientists and engineers, have released an open letter calling for a ban on offensive autonomous weapons, warning that an AI arms race would have disastrous consequences for humanity and for future advancements in the field.


Their fears are well-founded, as they write:

“If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: Autonomous weapons will become the Kalashnikovs of tomorrow.”

The letter likens the dangers of AI weaponry to those of chemical and biological warfare, but with even graver ethical concerns and consequences. Once nations start building AI weapons, an arms race will follow, they write, and it will only be a matter of time until the technology ends up in the “hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc.” Robots could be programmed to target select groups of people without mercy or conscience. It is for these reasons that the authors “believe that a military AI arms race would not be beneficial for humanity.”

Building such weapons would undercut the huge benefits AI could otherwise bring humanity in the years to come, and would only deepen public mistrust of the technology, the authors write.

“In summary, we believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so. Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.”

Theoretical physicist Lawrence Krauss does not share their concerns, though he understands them. He believes AI as complex as what’s described in the letter is still a long way off. “I guess I find the opportunities to be far more exciting than the dangers,” he said. “The unknown is always dangerous, but ultimately machines and computational machines are improving our lives in many ways.”

Four years ago, Peter Warren Singer, senior fellow and director of the 21st Century Defense Initiative at the Brookings Institution, spoke to Big Think about the potential for a robot apocalypse. Singer had interviewed a number of scientists on the subject and found that many dismissed the idea as impossible or silly. But he recalls what one Pentagon scientist told him: “You know, I’m probably working on something that’s either going to kill or enslave my grandkids, but, you know, it’s really cool stuff, so why stop.”

In the pursuit of progress, the architects of these systems need to stop and think about the repercussions of their work. I believe the exchange between Dr. Ian Malcolm and John Hammond in the movie Jurassic Park says it best:

John Hammond: “I don’t think you’re giving us our due credit. Our scientists have done things which nobody’s ever done before…”

Dr. Ian Malcolm: “Yeah, yeah, but your scientists were so preoccupied with whether or not they could that they didn’t stop to think if they should.”

Read the full letter at the Future of Life Institute.

Photo Credit: JACK GUEZ / Getty Staff

