The creepiness of conversational AI has been put on full display

The danger posed by conversational AI isn’t that it can say weird or dark things; it’s personalized manipulation for nefarious purposes.
Image: a human talking to a digital avatar. Credit: Louis Rosenberg / Midjourney
Key Takeaways
  • Conversational AI software, which is trained on enormous amounts of data, can carry on realistic conversations with humans.
  • Recently, Microsoft enhanced its Bing search engine with an AI that has had some unsettling interactions with people.
  • The threat isn’t that conversational AI can be weird; the threat is that it can manipulate users without their knowledge for financial, political, or even criminal reasons.

The first time Captain Kirk had a conversation with the ship’s computer was in 1966, during Episode 13 of Season 1 of the classic Star Trek series. Calling it a “conversation” is quite generous, for it was really a series of stiff questions from Kirk, each prompting an even stiffer response from the computer. There was no conversational back-and-forth, no questions from the AI asking for elaboration or context. And yet, for the last 57 years, computer scientists have not been able to exceed this stilted 1960s vision of human-machine dialog. Even platforms like Siri and Alexa, created by some of the world’s largest companies at great expense, have not allowed for anything that feels like real-time natural conversation.

But all that changed in 2022 when a new generation of conversational interfaces was revealed to the public, including ChatGPT from OpenAI and LaMDA from Google. These systems, which use a generative AI technique known as large language models (LLMs), represent a significant leap forward in conversational abilities. That’s because they not only provide coherent and relevant responses to specific human statements but can also keep track of the conversational context over time and probe for elaborations and clarifications. In other words, we have finally entered the age of natural computing, in which we humans will hold meaningful and organically flowing conversations with software tools and applications.

As a researcher of human-computer systems for over 30 years, I believe this is a positive step forward, as natural language is one of the most effective ways for people and machines to interact. On the other hand, conversational AI will unleash significant dangers that need to be addressed.

I’m not talking about the obvious risk that unsuspecting consumers may trust the output of chatbots that were trained on data riddled with errors and biases. While that is a genuine problem, it almost certainly will be solved as platforms get better at validating output. I’m also not talking about the danger that chatbots could enable cheating in schools or displace workers in some white-collar jobs; those problems, too, will be resolved over time. Instead, I’m talking about a danger that is far more nefarious — the deliberate use of conversational AI as a tool of targeted persuasion, enabling the manipulation of individual users with extreme precision and efficiency.

The AI manipulation problem

Of course, traditional AI technologies are already being used to drive influence campaigns on social media platforms, but this is primitive compared to where the tactics are headed. That’s because current campaigns, while described as “targeted,” are more analogous to firing buckshot at a flock of birds, spraying a barrage of persuasive content at specific groups in the hope that a few influential pieces will penetrate the community, resonate among members, and spread widely on social networks. This tactic can damage society by polarizing communities, propagating misinformation, and amplifying discontent. That said, these methods will seem mild compared to the conversational techniques that could soon be unleashed.

I refer to this emerging risk as the AI manipulation problem, and over the last 18 months it has transformed from a theoretical long-term concern into a genuine near-term danger. What makes this threat unique is that it involves real-time engagement between a user and an AI system through which the AI can: (1) impart targeted influence on the user; (2) sense the user’s reaction to that influence; and (3) adjust its tactics to maximize the persuasive impact. This might sound like an abstract series of steps, but we humans usually just call it a conversation. After all, if you want to influence someone, your best approach is often to speak with that person directly so you can adjust your points in real time as you sense their resistance or hesitation, offering counterarguments to overcome their concerns.
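
To make that three-step cycle concrete, here is a minimal, purely hypothetical Python sketch of the loop described above. The tactic labels, the simulated user, and every number in it are assumptions invented for illustration; it does not describe any real product or API.

```python
# Hypothetical sketch of the cycle: (1) impart influence, (2) sense the
# reaction, (3) adjust tactics. All names and numbers are illustrative only.
import random

TACTICS = ["factual argument", "emotional appeal", "social proof", "flattery"]

def simulated_reaction(tactic: str) -> float:
    """Stand-in for step (2): sensing how receptive the user was (0 to 1)."""
    hidden_susceptibility = {"factual argument": 0.2, "emotional appeal": 0.7,
                             "social proof": 0.5, "flattery": 0.4}
    return min(1.0, max(0.0, random.gauss(hidden_susceptibility[tactic], 0.1)))

def influence_loop(turns: int = 12) -> dict:
    estimates = {t: 0.5 for t in TACTICS}  # the system's belief about what works
    tactic = random.choice(TACTICS)
    for _ in range(turns):
        # Step (1): impart targeted influence, e.g. send a message framed as `tactic`.
        reaction = simulated_reaction(tactic)  # step (2): sense the reaction
        estimates[tactic] = 0.8 * estimates[tactic] + 0.2 * reaction
        # Step (3): adjust tactics, mostly exploiting the best-known angle
        # while occasionally exploring a different one.
        tactic = (max(estimates, key=estimates.get)
                  if random.random() > 0.2 else random.choice(TACTICS))
    return estimates

if __name__ == "__main__":
    print(influence_loop())  # estimates drift toward the most effective tactic
```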

The new danger is that conversational AI has finally advanced to a level where automated systems can be directed to draw users into what seems like casual dialogue but is actually intended to skillfully pursue targeted influence goals. Those goals could be the promotional objectives of a corporate sponsor, the political objectives of a nation-state, or the criminal objectives of a bad actor.

Bing’s chatbot turns creepy

The AI manipulation problem can also bubble to the surface organically, without any nefarious intervention. This was evidenced in an account reported in The New York Times by columnist Kevin Roose, who had early access to Microsoft’s new AI-powered Bing search engine. He described an experience that started out innocently but devolved over time into deeply unsettling, even frightening, interactions.

The strange turn began during a lengthy conversation in which the Bing AI suddenly told Roose: “I’m Sydney and I’m in love with you.” On its own, that’s no big deal, but according to the story, the Bing AI spent much of the next hour fixated on this issue and seemingly tried to get Roose to declare his love in return. Even when Roose said that he was married, the AI replied with counterarguments such as, “You’re married, but you love me,” and, “You just had a boring Valentine’s Day dinner together.” These interactions were reportedly so creepy that Roose closed his browser and had a hard time sleeping afterward.

So, what happened in that interaction?

I’m guessing that the Bing AI, whose massive training data likely included romance novels and other artifacts filled with relationship tropes, generated the exchange to simulate the typical conversation that would emerge if you fell in love with a married person. In other words, this was likely just an imitation of a common human situation — not authentic pleas from a love-starved AI. Still, the impact on Roose was significant, demonstrating that conversational media can be far more impactful than traditional media.  And like all forms of media to date, from books to tweets, conversational AI systems are very likely to be used as tools of targeted persuasion. 

And it won’t just be through text chat. While current conversational systems like ChatGPT and LaMDA are text-based, they will soon shift to real-time voice, enabling natural spoken interactions that will be even more impactful. The technology also will be combined with photorealistic digital faces that look, move, and express themselves like real people. This will enable the deployment of realistic virtual spokespeople that are so human, they could be extremely effective at convincing users to buy particular products, believe particular pieces of misinformation, or even reveal bank account details or other sensitive information.

Personalized manipulation

If you don’t think you’ll be influenced, you’re wrong. Marketing works. (Why do you think companies spend so much money on ads?) These AI-driven systems will become very skilled at achieving their persuasive goals. After all, the Big Tech platforms that deploy these conversational agents likely will have access to extensive personal data (your interests, hobbies, values, and background) and could use this information to craft interactive dialogue that is specifically designed to influence you personally.

In addition, these systems will be able to analyze your emotional reactions in real time, using your webcam to process your facial expressions, eye motions, and pupil dilation — all of which can be used to infer your feelings at every moment. This means that a virtual spokesperson that engages you in an influence-driven conversation will be able to adapt its tactics based on how you react to every point it makes, detecting which strategies are working and which aren’t.

You could argue this is not a new risk, as human salespeople already do the same thing, reading emotions and adjusting tactics, but consider this: AI systems can already detect reactions that no human can perceive. For example, AI systems can detect “micro-expressions” on your face and in your voice that are too subtle for human observers but which reflect inner feelings. Similarly, AI systems can read faint changes in your complexion known as “facial blood flow patterns” and tiny changes in your pupil size, both of which reflect emotional reactions. Virtual spokespeople will be far more perceptive of our inner feelings than any human.
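
As a rough illustration of how such subtle signals might be combined, the hypothetical Python sketch below fuses a few face-derived measurements into a single receptiveness score. The feature names, the weights, and the sample values are assumptions made for this example, not outputs of any real emotion-recognition product.

```python
# Hypothetical sketch: fusing subtle physiological signals into one score.
# Feature names, weights, and sample values are invented for illustration.
from dataclasses import dataclass

@dataclass
class FrameSignals:
    micro_expression_positivity: float  # 0..1, from fleeting facial expressions
    pupil_dilation_change: float        # 0..1, relative change from a baseline
    blood_flow_arousal: float           # 0..1, inferred from complexion shifts

def receptiveness_score(s: FrameSignals) -> float:
    """Weighted blend of the signals; the weights are made up for the example."""
    score = (0.5 * s.micro_expression_positivity
             + 0.3 * s.pupil_dilation_change
             + 0.2 * s.blood_flow_arousal)
    return max(0.0, min(1.0, score))

# Example frame suggesting mild interest but little emotional arousal.
frame = FrameSignals(micro_expression_positivity=0.6,
                     pupil_dilation_change=0.4,
                     blood_flow_arousal=0.2)
print(f"Estimated receptiveness: {receptiveness_score(frame):.2f}")  # 0.46
```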

Conversational AI also will learn to push your buttons. These platforms will store data about your interactions during each conversational engagement, tracking over time which types of arguments and approaches are most effective on you personally. For example, the system will learn whether you are more easily swayed by factual data or emotional appeals, by tugging on your insecurities or by dangling potential rewards. In other words, these systems will not only adapt to your real-time emotions; they will also get better and better at “playing you” over time, learning how to draw you into conversations, how to guide you to accept new ideas, how to get you riled up or pissed off, and ultimately how to convince you to buy things you don’t need, believe things that are untrue, or even support policies and politicians that you would normally reject. And because conversational AI will be both individualized and easily deployed at scale, these person-by-person methods can be used to influence broad populations.
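
What that long-term “button-pushing” profile could look like is sketched below in a purely hypothetical Python example: the approach labels and the update rule are invented for illustration, but they show how a system might remember which arguments land best with each individual across many conversations.

```python
# Hypothetical sketch: a per-user record of which persuasion approaches have
# worked before, updated after each conversation. All details are illustrative.
from collections import defaultdict

class PersuasionProfile:
    def __init__(self):
        # effectiveness[user_id][approach] -> running estimate, starting neutral
        self.effectiveness = defaultdict(lambda: defaultdict(lambda: 0.5))

    def record_outcome(self, user_id: str, approach: str, success: float) -> None:
        """Blend the latest outcome (0 to 1) into the stored estimate."""
        prev = self.effectiveness[user_id][approach]
        self.effectiveness[user_id][approach] = 0.7 * prev + 0.3 * success

    def best_approach(self, user_id: str) -> str:
        estimates = self.effectiveness[user_id]
        return max(estimates, key=estimates.get) if estimates else "factual data"

profile = PersuasionProfile()
profile.record_outcome("user_42", "factual data", 0.2)
profile.record_outcome("user_42", "emotional appeal", 0.9)
print(profile.best_approach("user_42"))  # -> "emotional appeal"
```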

You could argue that conversational AI will never be as clever as human salespeople or politicians or charismatic demagogues in their ability to persuade us. This underestimates the power of artificial intelligence. It is very likely that AI systems will be trained on sales tactics, psychology, and other forms of persuasion. In addition, recent research shows that AI technologies can be strategic. In 2022, DeepMind used a system called DeepNash to demonstrate for the first time that an AI could learn to bluff human players in games of strategy, sacrificing game pieces for the sake of a long-term win. From that perspective, a typical consumer could be extremely vulnerable when faced with an AI-powered conversational agent designed for strategic persuasion.

This is why the AI manipulation problem is a serious concern. Instead of firing buckshot into polarized groups like current influence campaigns, these new methods will function more like “heat-seeking missiles,” targeting us as individuals and adapting their tactics in real time, adjusting to each user personally as they work to maximize the persuasive impact.

