
Technology expert tells us why the AI “doomer” narrative is all wrong

When ancient humans stared into the darkness, they imagined monsters. Today, when we stare into the future, AI is the monster we imagine.
Pictured: Alex Kantrowitz, the founder of Big Technology newsletter and podcast.
Craig Barritt / Getty Images for Unfinished Live
Key Takeaways
  • Much of the narrative around AI is divided into two camps: “doomers” and “utopians.” To clear things up, Big Think reached out to Alex Kantrowitz, a tech expert who has been watching AI for years.
  • Kantrowitz believes “doomerism” is born of our misplaced and exaggerated human propensity for fear. Fear sells, and doomer headlines are popular.
  • Kantrowitz asks us not to get carried away with the hype. AI is useful and transformative, but it is nowhere close to “general intelligence.”

People are scared of AI. At the most immediate and plausible level, people are scared AI will either take their job or make their work far less valuable. We know, for example, that freelance copywriters and graphic designers are being offered fewer jobs and are paid significantly less when they get one. At the other, more speculative end, people are losing sleep over the imminent destruction of mankind. It’s not job security they fear, but Terminator robots.

When these fears are amplified and bounced around inside echo chambers, they morph into something known as “doomerism.” Doomerism is the pessimistic, apocalyptic-leaning evil twin of techno-optimism. The two schools agree that AI will accelerate exponentially, probably beyond our ability to predict or control. They disagree about whether that is a good thing.

To make sense of this, Big Think sat down with Alex Kantrowitz, a technology expert with a keen eye on Silicon Valley. He has interviewed the likes of Mark Zuckerberg and Larry Ellison and is the founder of the Big Technology newsletter and podcast. Kantrowitz spends every day studying AI, and he has little patience for AI doomerism.

Fear sells and moves

The 17th-century philosopher Thomas Hobbes believed that fear is the primary motivator of human behavior. We can be greedy, lustful, power-hungry, and loving, but all these play second fiddle to fear. We are biologically primed to respond to fear more than any other passion. Journalists and politicians have known this for a long time. Aristotle knew that if you wanted to whip up a crowd, you should appeal to their sense of fear. Wartime propaganda almost always exaggerates the threat of the enemy.

It’s this propensity to obsess over fear that is, according to Kantrowitz, motivating the doomer narrative. As he told Big Think, “I think it’s very simple. Fear sells, and we are afraid of the unknown. The message of fear will spread much further when applied to an unknown technology.”

This unknown aspect is important. Kantrowitz points out that when we deal with the unknown — when we have no real answers — the human mind inclines toward Hobbesian fear. When ancient humans stared into an impenetrably dark wood, they imagined monsters. When modern humans stare into an equally dark future, we still imagine monsters.

If you know that fear motivates people to action, you can weaponize and manipulate it to achieve whatever you want. Who, then, is weaponizing the doomer narrative around AI? For Kantrowitz, it’s the big tech companies — the ones with the most to lose in an AI world.

“It is not a conspiracy theory to ask, ‘Who benefits from broader fears about AI destroying the world?’” Bigger companies will endorse legislation and restrictions that disproportionately hamstring smaller ones. It is giants like Facebook and Google “who have the compliance departments to ensure that they can follow those new rules [they helped draft] and continue to develop as smaller companies struggle to meet the requirements.”

Reasonable fear

Not all fear is irrational or misplaced. We might be biologically programmed to fear snake-like creatures, but that fear is often warranted. So, is fear justified in this case? Again, Kantrowitz thinks not. As he tells us, “The more I start speaking with the people who are actually putting this stuff into practice, the clearer it becomes to me that (a) we have little to fear about our jobs being taken by AI, and (b) there’s a very low chance that AI will destroy the world anytime soon.”

We are so caught up in the AI hype, whether it’s doomerism or utopianism, that we might be missing the overall picture. As Kantrowitz puts it, human nature (and media companies) “tend to reward the extreme and forget the nuance.”

How so? First, a lot of the AI that people talk about in pearl-clutching terror has existed — unseen and unacknowledged — for decades. AI was used in social media algorithms, military systems, spam filters, and GPS long before companies such as OpenAI existed. Doomerism only went mainstream with large language models like ChatGPT.

Second, we should take a closer look at what LLMs can actually do — not what we imagine they do or what they might do one day. As Kantrowitz puts it:

“AI cannot really generalize beyond its training data. It won’t be able to come up with new schools of thought. It can surprise and delight, and it definitely looks at the world in fresh ways. [There needs to be] some pushback to some of the more unhinged rumor mongering. They might be able to read your PDF and tell you what’s interesting in it, but we’re not giving these things access to the nuclear codes. The most sophisticated human hackers can’t get anywhere close to [weapons of mass destruction]. I don’t think that military systems are vulnerable to the point where an AI or even a hacker plus AI could end up causing damage to them today. You know, it’s just not going to happen.”


For Kantrowitz, AI will not usher in some science-fiction misery of Skynets and Agent Smiths. “The probability of us being incinerated by AI robots anytime soon is very, very low.”

The revolution will not be televised

But as our conversation went on, we turned to the more plausible, and perhaps more insidious, examples of AI harm. It’s more of a creeping harm than an apocalyptic one.

First, there’s an interesting phenomenon emerging around our emotional engagement with chatbots. Kantrowitz highlights one problem: “Think about how many people have fallen in love with AI chatbots already. It will scar people if these chatbots are online one day and offline the next.” For a masterful representation of what this kind of world might look like, watch the movie Her.

Second, we don’t know how AI will affect our relationships with each other. Technology has already reshaped the ways we interact and socialize: when the internet first came about, very few would have predicted TikTok and Snapchat. AI is likely to have the same effect. So, if the future is dystopian, it’s more Black Mirror than Terminator.

