
The Singularity: When will we all become super-humans?

Are we really only a moment away from “The Singularity,” a technological epoch that will usher in a new era in human evolution?
Credit: Ruslan Solntsev / Adobe Stock
Key Takeaways
  • Futurologists point to the exponential rate of technological progress and conclude that we are rapidly approaching a revolutionary turning point.
  • Known as “The Singularity,” they predict that we will be able to enhance human intelligence and magnify creativity, entering a new evolutionary stage for mankind and a new epoch for the cosmos.
  • There are at least three objections to this view, demonstrating that the Singularity is hardly a foregone conclusion.

In 1903, the Wright brothers showed the world the first sustained, powered flight. Less than 60 years later, Yuri Gagarin became the first person in space and orbited the Earth.

In 1993, Tim Berners-Lee made public the source code for the “World Wide Web.” Thirty years later, everything from our fridges to our watches is plugged in.

In 1953, Rosalind Franklin, James Watson, and Francis Crick revealed the double-helix structure of DNA. Within 50 years, we had mapped the human genome. Twenty years after that, we are using CRISPR to edit DNA.

In 1992, Garry Kasparov laughed at how embarrassing his computer chess opponent was. Within five years, he was beaten by one.

Technology has a habit of running away from us. When a breakthrough occurs or a floodgate opens, explosive, exponential growth often follows. And, according to futurologist Ray Kurzweil, we are only a historical moment away from “The Singularity.”

This weak and mortal body

The Singularity, for Kurzweil, is defined as “a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed.” The idea is that discovery and progress will “explode with unexpected fury.” We often fail to appreciate what “exponential growth” actually means and how rapidly it brings about change. For instance, if we were to double the processing power of a computer every year, within seven of these “doublings,” our computers’ power would have increased 128-fold.
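To see how quickly repeated doubling compounds, here is a minimal Python sketch of the arithmetic (the function name is illustrative, not something Kurzweil provides):

```python
# Toy illustration of compounding: double a machine's processing power
# once a year and track the multiple of the original power.

def power_after(doublings: int) -> int:
    """Relative processing power after a given number of annual doublings."""
    return 2 ** doublings

for year in range(1, 8):
    print(f"Year {year}: {power_after(year)}x the original power")

# Year 7 prints 128x, because 2**7 = 128.
```

Seven doublings take you from 1x to 128x; run the same loop for 20 years and the increase tops a million-fold.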

There are more innovators and scientists today than ever before, and they have more efficient tools and methods. The conclusion Kurzweil draws is that technological advancement is “now doubling every decade” (though he does not cite a source for the claim). By his reckoning, we are only a few decades from the point when things really take off — when we enter a breathtakingly abrupt, and completely transformed, new world.

For some, this Singularity will be a utopia. For others, it will be a Terminator-style nightmare. Kurzweil is firmly in the former camp. He sees weakness in our human frailty, in what he calls our “1.0 biological bodies.” Yes, we have Rembrandt, Newton, and Saint-Saëns, but it is also true that “much human thought is derivative, petty, and circumscribed.” That is why, for Kurzweil, the Singularity cannot come fast enough: it is time to ditch these lumbering flesh-sacs of violent barbarity.

The next epoch

Kurzweil sees the universe in terms of six great “epochs.” They begin with physics and chemistry, which shaped the early universe. Then, carbon-based compounds became more and more intricate, until life emerged. Eventually, intelligence evolved, as did the human brain, which in turn allowed us to create greater and greater technology.

And so, we arrive at “our” epochal moment. The next great leap for the universe will be when humans and technology merge. This does not mean using Google Maps to find your way home; it means that our very biology will become enmeshed with the technology we create. It is the age of bionics. As such, the machines we make will allow us to “transcend the human brain’s limitations of a mere hundred trillion extremely slow connections” and overcome “age-old human problems and vastly amplify creativity.” It will be a transcendent, next-stage humanity with silicon in our brains and titanium in our bodies.

Whether this means an evil, god-like elite enslaving us all or some omni-pleasant idyll, Kurzweil is (uncharacteristically) unsure.

Cold water on a circuit board

How likely is all this? What cold water might there be to throw on it?

The first idea to challenge is how likely it is that technology will progress in a way that leads either to general artificial intelligence or to sophisticated bionic enhancements of our own minds. Most of Kurzweil’s estimates (like those of other futurologists, such as Eliezer Yudkowsky) are extrapolated from past and present hardware developments. But, as philosopher David Chalmers argues, “The biggest bottleneck on the path to AI is software, not hardware.” Having a mind, or general human intelligence, involves all manner of complicated (and unknown) neuroscientific and philosophical questions, so “hardware extrapolation is not a good guide here.” Having a mind is a different kind of step altogether; it is not like doubling the capacity of a flash drive.

Second, there is no necessary reason to expect exponential growth of the kind futurologists depend on. Past technological advances do not guarantee similar future advances. There is also the law of “diminishing returns.” It could be that even though we have more collective intelligence working more efficiently, we still get less out of it. Apple, today, is one of the most valuable companies in the world, with some of the finest minds in computer science working for it. Yet its most recent iDevices seem less exciting and innovative than earlier generations.

Kurzweil and his supporters may well reply that a world of “enhanced intelligence” (one in which we might see, say, a 20 percent increase in intelligence) surely lies beyond the reach of “diminishing returns.” As Chalmers points out, “Even among humans, relatively small differences in design capacities (say, the difference between Turing and an average human) seem to lead to large differences in the systems that are designed.” There might be a cap, or a diminishing return, on what existing human intelligence can achieve, but what happens once we can enhance that intelligence?

A third objection is that plenty of contingent, real-world obstacles could conceivably get in the way of the Singularity. There might be a terrible, slate-wiping global war. Another pandemic might wipe most of us out. Maybe nanotechnology turns our brains to mush. Perhaps AI wreaks terrible disasters on the world. Or maybe we simply run out of the resources required to build and develop technology. Taken alone, each of these might be only a small risk, but stack up all the possible dead ends and setbacks and it becomes fair to ask how foregone a conclusion the Singularity really is.

A sci-fi lover’s dream

How you view Kurzweil will depend largely on your existing biases — and perhaps how much science fiction you have read. It is certainly true to say that technology in the last century has increased at a rate far beyond that of past centuries and millennia. The world of the 2020s is unrecognizable compared to that of the 1920s. Our great-great-grandfathers would look at the world today as they would an H.G. Wells novel.

But, it is equally true that there are many obstacles in the way of unlimited technological progress. We ultimately do not know if this rocket is going to take off — or if it does, whether it will hit a very hard glass ceiling.

Jonny Thomson teaches philosophy in Oxford. He runs a popular Instagram account called Mini Philosophy (@philosophyminis). His first book is Mini Philosophy: A Small Book of Big Ideas.

