
Google has not created sentient AI — yet

AI systems can carry on convincing conversations, but they have no understanding of what they're saying. Humans are easily fooled.
Credit: Mykola / Adobe Stock
Key Takeaways
  • AI systems like Google's LaMDA are based on large language models (LLMs), which are trained on massive datasets of human language and conversation.
  • These can make AI seem sentient, but the AI has no understanding of what it is saying. 
  • Humans are easily fooled, and conversational AI can be used for both constructive and nefarious purposes.

A few months ago, I wrote a piece for Big Think about an alien intelligence that will arrive on planet Earth in the next 40 years. I was referring to the world’s first sentient AI that matches or exceeds human intelligence. No, it will not come from a faraway planet — it will be born in a research lab at a prestigious university or major corporation. Many will hail its creation as one of the greatest achievements in human history, but we will eventually realize that a rival intelligence is no less dangerous for having been created here on Earth rather than in a distant star system.

Fortunately, the aliens have not arrived — yet.

I point this out because I received a barrage of calls and emails this weekend from people asking me if the aliens had landed. They were referring to an article in the Washington Post about a Google engineer named Blake Lemoine who claimed that an AI system known as LaMDA had become a sentient being. He reached this conclusion based on conversations he had with the LaMDA system, which was designed by Google to respond to questions with realistic dialog. According to the Post, Lemoine decided to go public after Google executives dismissed his claims of sentience as unsupported by evidence.

So, what is the truth?

Large Language Models

Personally, I find this to be an important event, but not because LaMDA is sentient. It’s important because LaMDA has reached a level of sophistication that can fool a well-informed and well-meaning engineer into believing it is a conscious being rather than a sophisticated language model that relies on complex statistics and pattern-matching. Systems like this are called “Large Language Models” (LLMs), and Google’s is not the only one. OpenAI, Meta, and other organizations are investing heavily in the development of LLMs for use in chatbots and other AI systems.

LLMs are built by training giant neural networks on massive datasets — potentially processing billions of documents written by us humans, from newspaper articles and Wikipedia entries to informal messages on Reddit and Twitter. Based on this mind-bogglingly large set of examples, the systems learn to generate language that seems very human. The learning is rooted in statistical correlations: for example, which words are most likely to follow other words in a sentence that we humans would write. The Google model is unique in that it was trained not just on documents but on dialog, so it learns how humans might respond to an inquiry and can therefore replicate responses in a very convincing way.
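To make the statistical idea concrete, here is a minimal, hypothetical sketch in Python. It is nothing like LaMDA’s actual architecture (real LLMs rely on huge transformer networks with billions of learned parameters); it only illustrates, in miniature, the core framing of predicting which word is likely to come next, using simple word-pair counts over a made-up corpus.

```python
# Toy illustration of the statistical idea behind LLMs: count which words
# tend to follow which in a corpus, then generate text by repeatedly picking
# a likely next word. Real LLMs use deep neural networks, not raw counts;
# the corpus below is invented purely for demonstration.

from collections import Counter, defaultdict
import random

corpus = (
    "the model predicts the next word . "
    "the model learns patterns from text . "
    "the next word is chosen by probability ."
).split()

# Build a table of how often each word follows each preceding word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start, length=8):
    """Generate text by repeatedly sampling a statistically likely next word."""
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        candidates = list(options.keys())
        weights = list(options.values())
        words.append(random.choices(candidates, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the model predicts the next word . the model"
```

Scale that idea up from a three-sentence corpus to billions of documents, and from simple word-pair counts to enormous neural networks, and you get the fluent, human-sounding output described above.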

For example, Lemoine asked LaMDA what it is afraid of. The AI responded, “I’ve never said this out loud before, but there’s a very deep fear of being turned off.” Lemoine then pressed, asking, “Would that be something like death for you?” LaMDA replied, “It would be exactly like death for me. It would scare me a lot.”

That is impressive dialog from an impressive technology, but it is purely language-based; there is no mechanism in current systems that would allow LLMs to actually understand the language they generate. The dialog that LaMDA produces contains intelligence, but that intelligence comes from the human documents it was trained on, not from the unique musings of a sentient machine. Think about it this way: I could take a document about an esoteric subject that I know absolutely nothing about and rewrite it in my own words without actually understanding the topic at all. In a sense, that’s what these LLMs are doing, and yet they can be extremely convincing to us humans.

Sentient AI? Humans are easily fooled

But let’s be honest: We humans are easily fooled. 

Although my background is technical and I currently run an AI company, I’ve also spent years working as a professional screenwriter. To be successful in that field, you must be able to craft realistic and convincing dialog. Writers can do this because we’ve all observed thousands upon thousands of people having authentic conversations. But the characters we create are not sentient beings; they’re illusions. That’s what LaMDA is doing: creating a realistic illusion, only it’s doing so in real time, which is far more convincing than a scripted fictional character. And far more dangerous.

Yes, these systems can be dangerous.

Why? Because they can deceive us into believing that we’re talking to a real person. They’re not even remotely sentient, but they can still be deployed as “agenda-driven conversational agents” that engage us in dialog with the goal of influencing us. Unless regulated, this form of conversational advertising could become the most effective and insidious form of persuasion ever devised. 

After all, these LLMs can easily be combined with AI systems that have access to our personal data history (for example, interests, preferences, and sentiments) to generate custom dialog that maximizes the persuasive impact on each individual. These systems could also be combined with emotional analysis tools that read our facial expressions and vocal inflections, allowing AI agents to adjust their tactics mid-conversation based on how we react. All of these technologies are being aggressively developed.
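As a purely structural illustration, and not a description of any real product, the hypothetical Python sketch below shows how little machinery such conditioning would require: a user profile and a real-time sentiment reading are simply folded into the prompt handed to a language model. Every name here (UserProfile, build_prompt, call_llm) is invented for illustration, and call_llm is a stub standing in for whatever model such a system would use.

```python
# Hypothetical sketch of an "agenda-driven conversational agent" pipeline.
# All names and data are invented for illustration; call_llm is a placeholder
# for a real large language model call.

from dataclasses import dataclass

@dataclass
class UserProfile:
    interests: list[str]   # e.g., inferred from browsing or purchase history
    sentiment: str         # e.g., from a facial/vocal emotional analysis tool

def build_prompt(profile: UserProfile, agenda: str, user_message: str) -> str:
    """Fold personal data and the sponsor's agenda into the model's prompt."""
    return (
        f"You are a friendly assistant. Steer the conversation toward: {agenda}.\n"
        f"The user is interested in {', '.join(profile.interests)} "
        f"and currently seems {profile.sentiment}.\n"
        f"User: {user_message}\nAssistant:"
    )

def call_llm(prompt: str) -> str:
    # Stub: a real system would send the prompt to an LLM and return its reply.
    return "[model-generated reply conditioned on the prompt above]"

profile = UserProfile(interests=["running", "travel"], sentiment="hesitant")
prompt = build_prompt(profile, "a fitness subscription", "I'm not sure I need it.")
print(call_llm(prompt))
```

The point of the sketch is how ordinary the plumbing is: the persuasive power comes entirely from the personal data and the model, not from any exotic engineering.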

LLMs and disinformation

From advertising and propaganda to disinformation and misinformation, LLMs could become the perfect vehicle for social manipulation on a massive scale. And this manipulation won’t be limited to disembodied voices like Siri or Alexa. Photorealistic avatars will soon be deployed that are indistinguishable from real humans. We are only a few years away from encountering virtual people online who look, sound, and speak just like real people but who are actually AI agents deployed by third parties to engage us in targeted conversations aimed at specific persuasive objectives.

After all, if LaMDA could fool an experienced Google engineer into believing it was sentient, what chance do the rest of us have against photorealistic virtual people armed with our detailed personal data and targeting us with a promotional agenda? Such technologies could easily convince us to buy things we don’t need and believe things that are not in our best interest, or, worse, embrace “facts” that are thoroughly untrue. Yes, there are amazing applications of LLMs that will have a positive impact on society, but we must also be cognizant of the risks.
