
Mind of its own: Will “general AI” be like an alien invasion?

According to surveys, approximately half of artificial intelligence experts believe that general AI will emerge by 2060.
Key Takeaways
  • According to surveys, approximately half of artificial intelligence experts believe that general artificial intelligence will emerge by 2060.
  • General artificial intelligence (also called AGI) describes an artificial intelligence that's able to understand or learn any intellectual task that a human being can perform.
  • Such an intelligence would be unlike anything humans have ever encountered, and it may pose significant dangers.

An alien species is headed toward Earth. Many experts predict it will get here within 20 years, while others suggest it may take a little longer. Either way, there is little doubt it will arrive before this century is out, and we humans have no reason to believe it will be friendly.

While I can’t say exactly what it will look like, I am confident it will be unlike us in almost every way, from its physiology and morphology to its psychology and sociology. Still, we will quickly determine it shares two key traits with us humans: consciousness and self-awareness. And while we may resist admitting this, we will eventually conclude that it is far more intelligent than even the smartest among us. 

No, this alien will not come from a distant planet in a fanciful ship. Instead, it will be born right here on Earth, hatched in a well-funded research lab at a prestigious university or multinational corporation. I am referring to the first general artificial intelligence (AGI) to demonstrate thinking capabilities that exceed our own.

I know that some scientists believe AGI will not happen for generations, while others suggest it may never be attainable. That said, researchers have surveyed large numbers of AI experts many times over the past decade, and nearly half consistently predict AGI will arrive before 2060. And with each passing year, advances in the field of AI outpace industry expectations.

Just this month, DeepMind revealed an AI engine called AlphaCode that can write original software at a skill level exceeding that of 54% of human programmers. This is not AGI, and yet it took the industry by surprise, as few expected such a milestone to be reached this quickly.

So here we are, at a time when AI technology is advancing faster than expected and billions are being invested directly into AGI research. In that context, it seems reasonable to assume that humanity will create an alien intelligence here on Earth in the not-so-distant future.

General AI: minds of their own

That first AGI will be hailed as a remarkable creation, but it will also be a dangerous new lifeform: a thoughtful and willful intelligence that is not the slightest bit human. And like every intelligent creature we have ever encountered, from the simplest of insects to the mightiest of whales, it will make decisions and take actions that put its own self-interests first. But unlike insects and whales, this new arrival will compete to fill the same niche we humans occupy at the top of the intellectual food chain.  

Yes, we will have created a rival, and yet we may not recognize the dangers right away. In fact, we humans will most likely look upon our super-intelligent creation with overwhelming pride, hailing it as one of the greatest milestones in recorded history. Some will compare it to attaining godlike powers: the ability to create thinking and feeling creatures from scratch.

But soon it will dawn on us that these new arrivals have minds of their own. They will surely use their superior intelligence to pursue their own goals and aspirations, driven by their own needs and wants. It is unlikely they will be evil or sadistic, but their actions will certainly be guided by their own values, morals, and sensibilities, which will be nothing like ours.

Many people falsely assume we will solve this problem by building AI systems in our own image, designing technologies that think and feel and behave just like we do. This is unlikely to be the case. 

Artificial minds will not be created by writing software with carefully crafted rules that make them behave like us. Instead, engineers will feed massive datasets into simple algorithms that automatically adjust their own parameters, making millions upon millions of tiny changes to their structure until an intelligence emerges, one with inner workings that are far too complex for us to comprehend.
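To make that idea concrete, here is a minimal sketch in Python (a toy, hypothetical example with made-up data, nothing like a real AGI effort): no rules of behavior are written anywhere in the code; the program simply nudges a couple of numbers, example after example, until its predictions fit the data. Scale the same process up to billions of parameters, and the resulting structure is no longer something its builders can read or explain.

```python
# Toy, hypothetical illustration (not any real lab's code): "training" here means
# nudging the model's parameters in tiny steps until its predictions fit the data.
import random

# Synthetic dataset: inputs x with targets following y = 3x + 2, plus a little noise
data = [(i / 50, 3 * (i / 50) + 2 + random.gauss(0, 0.1)) for i in range(100)]

w, b = 0.0, 0.0        # the model's two parameters, starting from nothing
learning_rate = 0.01   # how large each tiny adjustment is

for epoch in range(1000):      # many passes over the data...
    for x, y in data:          # ...and a tiny change for every example seen
        error = (w * x + b) - y
        # Nudge each parameter slightly in the direction that reduces the error
        w -= learning_rate * error * x
        b -= learning_rate * error

print(f"learned w = {w:.2f}, b = {b:.2f} (the data was generated with 3 and 2)")
```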

And no: Feeding it data about humans will not make it think and feel like us. This is a common misconception — the false belief that, by training an AI on data that describes human behaviors, we will ensure it ends up thinking and feeling very much like we do. It will not.

Instead, we will build these AI creatures to know humans, not to be human. They will know us inside and out, able to speak our languages and interpret our gestures, read our facial expressions and predict our actions. They will know what makes us angry, happy, frustrated and curious. They will understand how we humans make decisions, for good and for bad, logical and illogical.  After all, we will have spent decades teaching them how we act and react.

But still, their minds will be nothing like ours. And while we have two eyes and two ears, they will have godlike perceptual capabilities, connecting remotely to sensors of all kinds, in all places, until they seem nearly omniscient to us. In my 2020 picture book on this topic, Arrival Mind, I portray the first AGI that we create as “having a billion eyes and ears,” for it will have instant access to data from all over the world. What I didn’t point out is that we will still interact with this alien through a body that looks very human, with two eyes and two ears and a face that smiles. We will give it this appearance to make ourselves more comfortable. 

Think about that — when this alien finally invades, humans will work to hide its true nature in a friendly-looking shell. We will even teach it to mimic our feelings, expressing sentiments like “puppies are cute,” and “life is precious,” not because it necessarily shares these human-like feelings, but because it will be skilled at making itself seem human to us.  

As a result, we won’t fear these aliens, at least not the way we would fear a mysterious starship speeding toward us. We may even feel a sense of kinship, viewing these aliens as an offshoot of our own ingenuity. But if we push those feelings aside, we will start to realize that an alien intelligence born here is likely far more dangerous than one arriving from afar.

After all, the aliens we build here will know everything about us from the moment they arrive, having been trained on our wants, needs, and motivations, and able to sense our emotions, predict our reactions, and influence our opinions. If a species heading toward us in flying saucers had such abilities, we’d be terrified. 


AI can already defeat our best players at the hardest games on Earth. But really, these systems don’t just master the games of chess, poker, and Go. They also master the game of humans, learning to anticipate our actions and exploit our weaknesses. Researchers around the world are training AI systems to out-plan us, out-negotiate us, and out-maneuver us.

But at least we won’t have to worry about a physical battle between us and them. That’s because we will have surrendered control of our world before they even show up. We’re already starting to hand over critical infrastructure to AI systems, from communication networks and power grids to water and food supplies. And as humanity transitions to spend more and more time in the simulated “metaverse,” we will become even more susceptible to manipulation by AI technologies.

Unfortunately, we can’t prevent AI from getting more powerful, as no innovation has ever been contained. And while many researchers are working on safeguards, we can’t assume this will eliminate the threat. In fact, a recent poll by Pew Research indicates that few professionals believe the industry will implement “ethical AI” practices by 2030. 

How should we prepare? 

I believe the best first step is for the public to accept that AGI will likely happen in the not-so-distant future, and that it will not be a digital version of the human mind but something far more alien. If we think of the threat this way, picturing it as a fleet of ships that will reach Earth in 20 or 30 years, we might prepare with more urgency.

To me, that urgency means pushing for regulation of AI systems that are designed to monitor and manipulate the public. Such technologies may not seem like an existential threat today, as they’re currently being deployed for AI advertising rather than world domination. But still, AI technologies that track our sentiments, behaviors, and emotions with the intention of swaying our beliefs are very dangerous.

The other area of concern is the aggressive drive to automate human decisions with AI. While it’s undeniable that AI can assist greatly in effective decision-making, we should always keep humans in the loop. As I described in a TEDx talk on this topic a few years back, I firmly believe that researchers should focus more on using AI to assist and enhance human intelligence rather than working to replace it. 

This has been my focus over the last eight years, and research suggests it’s a fruitful direction. For example, a study published in collaboration with Stanford Medical School showed we can use AI to connect small groups of doctors into “super-experts” that make diagnoses with significantly fewer errors. We’ve seen similar benefits across many applications, from the United Nations using the technology to forecast famines to business teams making smarter predictions and estimates.

Whether we prepare or not, the aliens are coming. And while there is an earnest effort in the AI community to push for safe technologies, there is also a lack of urgency. That’s because too many of us wrongly believe that a sentient AI created by humans will somehow be a branch of the human tree, like a digital descendant that shares a very human core. Unfortunately, that is wishful thinking. It is far more likely that an AGI will be profoundly different from us in just about every way. Yes, it will be skilled at pretending to be human, but beneath that façade it will think and feel and act like no creature we have ever encountered on Earth.

