Making Sure Some Science Fiction Plots Don’t Become Reality
What’s the Latest Development?
University of Cambridge professors Huw Price and Martin Rees, along with Skype co-founder Jaan Tallinn, have proposed the creation of a Centre for the Study of Existential Risk (CSER), which, according to its website, will be “a multidisciplinary research centre dedicated to the study and mitigation of risks” posed by technological developments that, left unchecked, could threaten the extinction of the human race. Among these risks are artificial intelligence, nanotechnology, and the extreme effects of climate change. The founders said in a statement Sunday that the centre plans to launch in 2013.
What’s the Big Idea?
Humans destroying themselves through their own technological hubris has been a staple of science fiction for generations, yet too many people dismiss the real possibility of it happening, says Price: “Given that we don’t know how serious the risks are, that we don’t know the time scale, dismissing the concerns is dangerous. What we’re trying to do is to push it forward in the respectable scientific community.” He compares the potential risk technology poses to humans with the risk humans pose to animals: What could happen when something eventually comes along that’s smarter than us?
Photo Credit: Shutterstock.com