
5 Scientific Experts Who Think A.I. Will Destroy the Human Species

Of the potential doomsday scenarios that could wipe out the human race, super-smart artificial intelligence has ranked highly for decades.

In scientific circles, a growing number of artificial intelligence experts worry that humans will eventually create machines able to think beyond human capacities.

Sometimes referred to as “the singularity,” this moment may create a utopia in which robots take over common forms of labor and humans relax in a world of bountiful resources.

Or, this moment may result in artificial intelligence exterminating humans in a competition for resources and ultimate control of the Earth.

Here are five experts who believe the latter scenario is more likely, as originally published in Time.

Stephen Hawking

“The development of full artificial intelligence could spell the end of the human race,” the world-renowned physicist told the BBC. “It would take off on its own and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.” Hawking has been voicing this apocalyptic vision for a while. In a May column in response to Transcendence, the sci-fi movie about the singularity starring Johnny Depp, Hawking criticized researchers for not doing more to protect humans from the risks of AI. “If a superior alien civilisation sent us a message saying, ‘We’ll arrive in a few decades,’ would we just reply, ‘OK, call us when you get here—we’ll leave the lights on’? Probably not—but this is more or less what is happening with AI,” he wrote.

Elon Musk

Known for his businesses on the cutting edge of tech, such as Tesla and SpaceX, Musk is no fan of AI. At a conference at MIT in October, Musk likened improving artificial intelligence to “summoning the demon” and called it the human race’s biggest existential threat. He’s also tweeted that AI could be more dangerous than nuclear weapons. Musk called for the establishment of national or international regulations on the development of AI.

Nick Bostrom

The Swedish philosopher is the director of the Future of Humanity Institute at the University of Oxford, where he’s spent a lot of time thinking about the potential outcomes of the singularity. In his new book Superintelligence, Bostrom argues that once machines surpass human intellect, they could mobilize and decide to eradicate humans extremely quickly using any number of strategies (deploying unseen pathogens, recruiting humans to their side or simple brute force). The world of the future would become ever more technologically advanced and complex, but we wouldn’t be around to see it. “A society of economic miracles and technological awesomeness, with nobody there to benefit,” he writes. “A Disneyland without children.”

James Barrat

Barrat is a writer and documentarian who interviewed many AI researchers and philosophers for his new book, “Our Final Invention: Artificial Intelligence and the End of the Human Era.” He argues that intelligent beings are innately driven toward gathering resources and achieving goals, which would inevitably put a super-smart AI in competition with humans, the greatest resource hogs Earth has ever known. That means even a machine that was just supposed to play chess or fulfill other simple functions might get other ideas if it was smart enough. “Without meticulous, countervailing instructions, a self-aware, self-improving, goal-seeking system will go to lengths we’d deem ridiculous to fulfill its goals,” he writes in the book.

Vernor Vinge

A mathematician and fiction writer, Vinge is thought to have coined the term “the singularity” to describe the inflection point when machines outsmart humans. He views the singularity as an inevitability, even if international rules emerge controlling the development of AI. “The competitive advantage—economic, military, even artistic—of every advance in automation is so compelling that passing laws, or having customs, that forbid such things merely assures that someone else will get them first,” he wrote in a 1993 essay. As for what happens when we hit the singularity? “The physical extinction of the human race is one possibility,” he writes.

