Philosopher Nick Bostrom has written about an "intelligence explosion" he says will happen when superintelligent machines begin designing machines of their own. (Pexels)

How Could Artificial Intelligence Threaten Human Existence?

Warnings are coming from all quarters: artificial intelligence poses an existential threat to humanity and must be reined in before it is too late.

But what are these disaster scenarios and how are the machines meant to wipe out humanity?

– Paper clips of doom –

Most disaster scenarios start from the same place: machines outstrip human capabilities, escape human control, and refuse to be switched off.

“When we have machines with a self-preservation goal, we’re in trouble,” AI academic Yoshua Bengio told an event this month.

But since these machines don’t exist yet, imagining how they might destroy humanity is often left to philosophy and science fiction.

Philosopher Nick Bostrom has written about the “intelligence explosion,” which he says will occur when superintelligent machines start designing their own machines.

He illustrates the idea with a story about a superintelligent AI in a paper clip factory.

The AI is given the ultimate goal of maximizing the production of paper clips, and so it “proceeds by converting first Earth and then ever-larger chunks of the observable universe into paper clips.”

Many have dismissed Bostrom’s ideas as science fiction, not least because he has separately argued that humanity could be living in a computer simulation, and has supported theories close to eugenics.

He also recently apologized after a racist message he sent in the 1990s came to light.

Still, his ideas about artificial intelligence have been very influential and have inspired both Elon Musk and Professor Stephen Hawking.

– The Terminator –

If superintelligent machines are going to destroy humanity, they will surely need a physical form.

Arnold Schwarzenegger’s red-eyed cyborg sent from the future by AI to end human resistance in “The Terminator” has proven to be a seductive image, especially for the media.

But experts have rubbished the idea.

“This science fiction is unlikely to become reality for decades to come, if ever,” campaign group Stop Killer Robots wrote in its 2021 report.

However, the group has warned that giving machines the power to make life-and-death decisions is an existential risk.

Robotics expert Kerstin Dautenhahn of the University of Waterloo in Canada played down these fears.

She told AFP that AI is unlikely to give machines higher reasoning capabilities or imbue them with a desire to kill all humans.

“Robots are not evil,” she said, although she acknowledged that programmers could make them do bad things.

– Deadlier chemicals –

In a less overtly sci-fi scenario, “bad actors” use AI to create toxins or new viruses and release them into the world.

Large language models like GPT-3, which was used to create ChatGPT, turn out to be very good at inventing horrific new chemical agents.

A group of scientists who were using AI to help discover new drugs ran an experiment in which they tweaked their model to search for harmful molecules instead.

They managed to produce 40,000 potentially toxic substances in less than six hours, as reported in the journal Nature Machine Intelligence.

Artificial intelligence expert Joanna Bryson of Berlin’s Hertie School said she could imagine someone finding a way to spread a toxin like anthrax faster.

“But it’s not an existential threat,” she told AFP. “It’s just a horrible, horrible weapon.”

– Species overtaken –

By Hollywood convention, a civilization-ending disaster must be sudden, immense, and dramatic. But what if the end of humanity were slow, quiet, and not definitive?

“At the bleakest end, our species could come to an end with no successor,” philosopher Huw Price says in a promotional video for Cambridge University’s Centre for the Study of Existential Risk.

But he said there are “less bleak possibilities” where humans, supported by advanced technology, could survive.

“The purely biological species eventually dies out, in that there are no humans around who don’t have access to this enabling technology,” he said.

The imagined apocalypse is often framed in evolutionary terms.

In 2014, Stephen Hawking told the BBC that our species would ultimately be unable to compete with AI machines, and that this could spell the end of humanity.

Geoffrey Hinton, who has spent his career building machines that resemble the human brain, most recently for Google, talks in similar terms about “superintelligence” simply surpassing humans.

He recently told US broadcaster PBS that it was possible that “humanity is just a passing phase in the evolution of intelligence”.
