Elon Musk and OpenAI's Sam Altman have signed open letters warning that AI could make humanity extinct. (Pexels)

Critics say a "dangerous" ideology is shaping the debate over artificial intelligence

Longtermism, the favored philosophy of Silicon Valley, has played a significant role in shaping discussions about artificial intelligence by focusing on the potential threat of human extinction.

But increasingly vocal critics warn that the philosophy is dangerous, and the obsession with extinction distracts from real problems with AI, such as data theft and biased algorithms.

Author Emile Torres, a former longtermist turned critic of the movement, told AFP the philosophy was based on principles used in the past to justify mass killings and genocide.

Still, the movement and related ideologies, such as transhumanism and effective altruism, wield huge influence at universities from Oxford to Stanford and across the tech sector.

Venture capitalists like Peter Thiel and Marc Andreessen have invested in life extension companies and other pet projects related to the movement.

Elon Musk and OpenAI’s Sam Altman have signed open letters warning that artificial intelligence could make humanity extinct — even as they profit by claiming that only their products can save us.

Ultimately, critics say, this fringe movement has far too much influence on public debates about the future of humanity.

– “Very dangerous” –

Longtermists believe that we have an obligation to try to produce the best results for the greatest number of people.

This is not so different from 19th-century utilitarians, but longtermists have a much longer timeframe in mind.

They look far into the future and see trillions and trillions of people floating through space and setting up new worlds.

They argue that we owe the same duty to each of these future people as we do to anyone alive today.

And because there are so many of them, they carry far more moral weight than the people alive today.

That kind of thinking makes the ideology “really dangerous,” said Torres, author of “Human Extinction: A History of the Science and Ethics of Annihilation.”

“Whenever you have a utopian vision of the future, characterized by almost infinite values, and you combine that with a kind of utilitarian moral thinking where the end can justify the means, that’s going to be dangerous,” Torres said.

If a super-intelligent machine with the potential to destroy humanity could come to life, longtermists would be bound to oppose it regardless of the consequences.

When a user of Twitter, the platform now known as X, asked in March how many people would have to die to prevent this from happening, longtermist ideologue Eliezer Yudkowsky replied that only enough people were needed to "create a viable reproductive population."

"As long as that's true, there's still a chance to make it to the stars one day," he wrote, though he later deleted the post.

– Eugenics claims –

Longtermism grew out of Swedish philosopher Nick Bostrom's work in the 1990s and 2000s on existential risk and transhumanism, the idea that humans can be augmented by technology.

AI researcher Timnit Gebru has pointed out that transhumanism was tied to eugenics from the beginning.

British biologist Julian Huxley, who coined the term transhumanism, was also president of the British Eugenics Society in the 1950s and 1960s.

"Longtermism is eugenics by a different name," Gebru wrote on X last year.

Bostrom has long been accused of supporting eugenics after listing as an existential risk "dysgenic pressures," essentially less intelligent people reproducing faster than their more intelligent peers.

The philosopher, who heads the Future of Humanity Institute at Oxford University, apologized in January after admitting to having written racist messages on an internet forum in the 1990s.

“Do I support eugenics? No, not as the term is commonly understood,” he wrote in his apology, noting that it has been used to justify “some of the most horrific atrocities of the last century.”

– "More sensational" –

Despite these problems, longtermist thinkers like Yudkowsky, a high school dropout known for writing Harry Potter fan fiction and promoting polyamory, continue to be feted.

Altman has credited him with getting funding for OpenAI and suggested in February that he deserves a Nobel Peace Prize.

But Gebru, Torres and many others are trying to refocus attention on harms such as the theft of artists' work, biased algorithms and the concentration of wealth in the hands of a few corporations.

Torres, who uses the pronoun they, said that while there were true believers like Yudkowsky, much of the talk about extinction was driven by the prospect of profit.

“Talking about human extinction, a real apocalypse where everyone dies, is far more sensational and fascinating than paying Kenyan workers $1.32 an hour or exploiting artists and writers,” they said.
