Mitigating it should be a 'global priority.'

Industry leaders say artificial intelligence poses an “extinction risk” on par with nuclear war

With ChatGPT, Bard, and other large language models (LLMs), we’ve heard warnings from concerned figures like Elon Musk about the dangers of artificial intelligence (AI). Now, a group of prominent industry leaders has issued a one-sentence statement that effectively reinforces those concerns:

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

It was published on the website of the Center for AI Safety, an organization whose mission is to “reduce societal-scale risks from artificial intelligence.” The signatories are prominent figures in the field of artificial intelligence, including OpenAI CEO Sam Altman and Google DeepMind CEO Demis Hassabis. Turing Award-winning researchers Geoffrey Hinton and Yoshua Bengio, considered by many to be the godfathers of modern artificial intelligence, also added their names.

This is the second such statement in the past few months. In March, Musk, Steve Wozniak and more than 1,000 others called for a six-month pause on the development of advanced AI systems so that industry and the public could catch up with the technology. “Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control,” the letter says.

While AI is (probably) not as self-aware as some have feared, it carries the risk of abuse and harm through deepfakes, automated disinformation, and more. LLMs can also change the way content, art, and literature are produced, which could affect many jobs.

US President Joe Biden recently stated that it “remains to be seen” whether artificial intelligence is dangerous, adding, “I think tech companies have an obligation to make sure their products are safe before making them public… AI can help tackle some very difficult challenges like disease and climate change, but we must also address potential risks to our society, economy and national security.” At a recent White House meeting, Altman called for regulation of artificial intelligence because of its potential risks.

With so many opinions on the table, the new, abbreviated statement is intended to show a shared concern about the risks of AI, even if those who signed it disagree on exactly what those risks are.

“Many important and urgent risks from AI are increasingly being discussed by AI experts, journalists, policymakers and the public,” the statement’s introduction reads. “Even so, it can be difficult to voice concerns about some of advanced AI’s most severe risks. The succinct statement below aims to overcome this obstacle and open up discussion. It is also meant to create common knowledge of the growing number of experts and public figures who take some of advanced AI’s most severe risks seriously.”
