
Evil twins and digital elves: How the metaverse will create new forms of fraud and deception

The metaverse may leave us perpetually unsure whether the people we encounter are authentic or high-quality fakes.
Key Takeaways
  • The metaverse is likely to usher in a new era of fraud and deception, fueled by increasingly sophisticated immersive technologies and AI.
  • One form of future fraud may be “evil” digital twins — accurate virtual replicas that look, sound, and act like you (or people you know and trust), but are actually controlled by fraudsters.
  • Another form could be digital elves — virtual assistants that subtly nudge us toward sponsored content or propaganda. 

We humans are obsessed with technologies that blur the boundaries between what is real and what is fabricated. In fact, two of the hottest fields right now are defined by how effectively they can deceive us: the metaverse and artificial intelligence.

When it comes to the metaverse, the goal of VR and AR technology is to fool the senses, making computer-generated content seem like real-world experiences. On the AI front, Alan Turing famously threw down the gauntlet, stating that the ultimate test of a human-level AI would be to successfully fool us into believing that it was human. 

Whether you’re looking forward to these technologies or not, their power of deception will soon transform society. As I write this, tens of billions are being invested to develop virtual worlds that achieve “suspension of disbelief,” while additional investments are working to populate those virtual worlds with AI-driven avatars that look, sound, and act so real that we won’t be able to tell the difference between actual people and virtual people (or “veeple,” as I call them).

I know what you’re thinking — you will be able to tell the difference. 

Well, not according to a startling study published recently by researchers at Lancaster University and UC Berkeley. Using a sophisticated form of AI known as a GAN (generative adversarial network), they created artificial human faces (i.e., photorealistic fakes) and presented those fakes to hundreds of human subjects, along with a mix of real faces. They found that AI has become so effective that we humans can no longer tell the difference between real and virtual faces. And that wasn't their most frightening finding.

The researchers also asked their test subjects to rate the "trustworthiness" of each person and discovered that we humans find AI-generated faces significantly more trustworthy. As I described in a recent academic paper, this result makes it extremely likely that advertisers will extensively use AI-generated people in place of human actors and models, especially within the metaverse. That's because working with virtual people will be cheaper and faster, and, if they're perceived as more trustworthy, they'll be more persuasive too.

The risks of ceding power to corporations

It’s not the technology of the metaverse I fear — it’s the extreme power these technologies will give large corporations. The companies that control metaverse platforms will be able to monitor users at levels we have never seen before, tracking every aspect of your virtual life: where you go, what you do, who you’re with, what you say, and what you look at. They will even monitor your facial expressions and vocal inflections to assess your emotional state as you react to the world around you. 

And don’t be fooled into thinking you won’t spend much time in a virtual world. You will. That’s because augmented reality will splash virtual content all around us. As soon as Apple, Meta, Microsoft, Google and others launch their stylish consumer-focused AR glasses, the eyewear will become required equipment for our digital lives. This transition will happen as quickly as the shift from flip phones to smartphones. After all, without AR glasses you won’t be able to access the wealth of magical content that will soon fill our world.  

And once big tech has us inside their metaverse platforms, they will use all the tools at their disposal to drive profits. This means targeting consumers with AI-driven virtual people that engage us in promotional conversation. I know this sounds creepy, but these conversational agents will target us with extreme precision, monitoring our emotions in real time so they can adapt their promotional strategy (i.e., sales pitch) to maximize persuasion. Yes, this will be a gold mine for predatory advertising, and that is merely the legitimate use of this technology.

What about the fraudulent uses of virtual people?  

This brings me to identity theft in the metaverse. In a recent Microsoft blog post, Executive Vice President Charlie Bell states that fraud and phishing attacks in the metaverse could "come from a familiar face – literally – like an avatar that impersonates your coworker." I strongly agree. In fact, I worry that the ability to hijack or emulate avatars could destabilize our sense of identity, leaving us perpetually unsure whether the people we encounter are authentic or high-quality fakes.

And this type of digital impersonation does not require advances in AI technology. I say that because an imposter avatar could be controlled by a human fraudster hidden behind a virtual façade. This eliminates the need for AI automation. Instead, AI need only be used for real-time voice changing, allowing the criminal to mimic the sound of a friend, coworker, or other trusted figure. These technologies already exist and are rapidly improving.

Evil twins and digital elves

Accurately replicating the look and sound of a person in the metaverse is often referred to as creating a "digital twin." Just last month, Jensen Huang, the CEO of NVIDIA, gave a keynote address using a cartoonish digital twin. He stated that the fidelity will rapidly advance in the coming years, as will the ability of AI engines to autonomously control your twin so that you can be in multiple places at once. Yes, digital twins are coming.

This is why we need to prepare for what I call "evil twins": accurate virtual replicas that look, sound, and act like you (or people you know and trust) and which are used for fraudulent purposes. This form of identity theft will happen in the metaverse, as it is a straightforward amalgamation of current technologies developed for deepfakes, voice emulation, digital twinning, and AI-driven avatars.

This means platform providers need to develop equally powerful authentication technologies that will allow us to instantly determine whether we’re interacting with the person we expect (or their authorized digital twin) and NOT an evil twin that was fraudulently deployed to deceive us. If platforms do not address this issue early on, the metaverse could collapse under an avalanche of deception. 
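The authentication idea above can be made concrete. A minimal sketch, assuming a challenge-response handshake in which the platform holds a secret registered by the avatar's real owner (all function names and the HMAC-based scheme are hypothetical illustrations, not any platform's actual protocol; a real deployment would more likely use public-key signatures):

```python
import hashlib
import hmac
import secrets


def issue_challenge() -> bytes:
    """Platform generates a fresh random nonce for each encounter."""
    return secrets.token_bytes(32)


def respond(challenge: bytes, owner_secret: bytes) -> bytes:
    """The avatar's client answers by keying an HMAC with its secret."""
    return hmac.new(owner_secret, challenge, hashlib.sha256).digest()


def verify(challenge: bytes, response: bytes, owner_secret: bytes) -> bool:
    """Platform checks the answer; an evil twin without the secret fails."""
    expected = hmac.new(owner_secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)


# Usage: the genuine owner passes, an impostor with the wrong secret fails.
secret = secrets.token_bytes(32)  # registered when the avatar is created
challenge = issue_challenge()
print(verify(challenge, respond(challenge, secret), secret))
print(verify(challenge, respond(challenge, b"stolen-guess"), secret))
```

The point of the sketch is that authenticity can be anchored to something the impersonator cannot copy from the avatar's appearance or voice: a secret (or private key) held only by the legitimate owner.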

But of all the technologies headed our way, it’s the ones we eagerly adopt that have the greatest impact, both positive and negative. This brings me to what I fear will be the most powerful form of coercion in the metaverse: the electronic life facilitator, or ELF. These AI-driven avatars will be the natural evolution of digital assistants like Siri and Alexa as they evolve from today’s disembodied voices to tomorrow’s personified digital beings. 

Big tech will market these AI agents as virtual life coaches that are persistent throughout your day as you navigate the metaverse. And because augmented reality will be our primary gateway to virtual content, these digital elves will travel with you everywhere, whether you are shopping, working, walking down the street, or just hanging out. And if the platform providers achieve their goal, you will come to think of these virtual beings as trusted figures in your life — a mix between a familiar friend, a helpful advisor, and a caring therapist. 

Yes, this sounds creepy, which is why big tech will likely make them cute and nonthreatening, with innocent features and mannerisms that seem more like a magical character than a human-sized assistant following you around. This is why I find the word “elf” so fitting, as they may appear to you as an adorable fairy, sprite, or gremlin hovering over your shoulder — a small anthropomorphic character that can whisper in your ear or fly out in front of you to draw your attention to items in your virtual or augmented world. 


If we don’t push for metaverse regulation, these life facilitators could become the most effective tools of persuasion ever developed, subtly guiding us toward sponsored content, from products and services to political messaging and propaganda, and doing it all with cute smiles and giggly laughs. I know this sounds futuristic, but it’s not far off. Based on the current state of the technology, digital elves and evil twins could be commonplace in our virtual lives by 2030.

By the early 2030s, the digital elf could be a common feature of our augmented lives.

Ultimately, the technologies of VR, AR, and AI have the potential to enrich society. But when used in combination, these innovations become especially dangerous, as they all have one powerful trait in common: They blur the boundaries between the factual and the fraudulent. It’s this ability for digital deception that requires a real focus on security and regulation. Without such safeguards, consumers in the metaverse will be extremely vulnerable to fraudsters, identity thieves, and predatory advertising tactics. The time to prepare is now.
