
Are Tech and Political Leaders Taking Adequate Measures to Address the Risks to Humanity Posed by Advanced Artificial Intelligence?

The impressive capabilities of chatbots such as ChatGPT, which can write speeches, organize vacations, and engage in conversations as well as, if not better than, humans, have astounded the world. However, the latest trend in artificial intelligence, known as frontier AI, is causing concerns about its potential to pose risks to humanity.

Everyone from the UK government to top researchers and even the major AI companies themselves is sounding the alarm about the as-yet-unknown dangers of frontier AI and calling for safeguards to protect humans from its existential threats.

The debate will come to a head on Wednesday when British Prime Minister Rishi Sunak hosts a two-day summit focused on frontier AI. About 100 officials from 28 countries are reportedly expected, including US Vice President Kamala Harris, European Commission President Ursula von der Leyen and executives from major US AI companies, including OpenAI, Google’s DeepMind and Anthropic.

The setting is Bletchley Park, a former top-secret base for World War II codebreakers that included Alan Turing. The historic mansion is considered a birthplace of modern computing: it was where codebreakers famously cracked Nazi Germany’s ciphers, and where Colossus, the world’s first programmable digital computer, was put to work.

In a speech last week, Sunak said that only governments – not AI companies – can keep people safe from the technology’s risks. However, he also noted that the UK’s approach was “not to rush to regulate”, even as he outlined a number of alarming threats, such as the use of artificial intelligence to make it easier to build chemical or biological weapons.

“We need to take this seriously and start focusing on trying to get in front of the problem,” said Jeff Clune, an assistant professor of computer science at the University of British Columbia who focuses on artificial intelligence and machine learning.

Clune was one of a group of influential researchers who published a paper last week calling on governments to do more to manage the risks posed by artificial intelligence. It is the latest in a series of dire warnings about the rapidly evolving technology from tech figures such as Elon Musk and OpenAI CEO Sam Altman, and it highlights how differently industry, political leaders and researchers see the way forward on risk management and regulation.

It’s far from certain that AI will destroy humanity, Clune said, “but there’s enough risk and possibility that it will happen. And we need to mobilize society’s attention to try to solve it now, rather than waiting for the worst-case scenario to happen.”

One of Sunak’s big goals is to reach a consensus on the nature of AI risks. He has also revealed plans to create an AI Safety Institute to evaluate and test new types of the technology, and proposed a global panel of experts, inspired by the UN’s climate change panel, to understand AI and produce a “State of AI Science” report.

The summit reflects the British government’s eagerness to host international gatherings to show that it is not isolated and can still lead on the world stage after leaving the European Union three years ago.

The UK also wants to make its case on a hot-button political issue where both the US and the 27-nation EU are taking action.

Brussels is currently finalizing the world’s first comprehensive AI regulations, while US President Joe Biden on Monday signed a sweeping executive order to guide AI development that builds on voluntary commitments made by tech companies.

China, which along with the United States is one of the world’s two AI powers, has been invited to the summit, although Sunak could not say with “100% certainty” that representatives from Beijing would attend.

The paper, signed by Clune and more than 20 other experts – including Geoffrey Hinton and Yoshua Bengio, two of the so-called “godfathers” of AI – called on governments and AI companies to take concrete steps, such as devoting a third of their research and development resources to ensuring the safe and ethical use of advanced autonomous artificial intelligence.

Frontier AI refers to the latest and most powerful systems, those at the cutting edge of AI capabilities. They are based on foundation models – algorithms trained on a broad range of data scraped from the internet to provide a general, but not infallible, base of knowledge.

That makes frontier AI systems “dangerous because they’re not fully understood,” Clune said. “People assume and think they’re very knowledgeable, and that can get you in trouble.”

However, the meeting has faced criticism that it is too preoccupied with distant dangers.

“The focus of the summit is actually a bit too narrow,” said Francine Bennett, interim director of the Ada Lovelace Institute, a policy research group focused on artificial intelligence in London.

“We’re just in danger of forgetting about the broader set of risks and security” posed by algorithms that are already part of everyday life, she said at a Chatham House panel last week.

Deb Raji, a researcher at the University of California, Berkeley, who has studied algorithmic bias, pointed to problems with systems already in use in the UK, such as police facial recognition systems with a much higher false-match rate for Black people, and a grading algorithm that botched high school exam results.

The summit is a “missed opportunity” and marginalizes the communities and workers most affected by AI, more than 100 civil society groups and experts said in an open letter to Sunak.

Skeptics say the UK government has set its summit targets too low because AI regulation is not on the agenda, focusing instead on creating “safeguards”.

Sunak’s call not to rush into regulation echoes “messages we’re hearing from many business representatives in the United States,” Raji said. “And so I’m not surprised that it’s also making its way into what they might say to UK officials.”

Tech companies should not be involved in drafting regulations because they tend to “underestimate or downplay” the urgency and any harms, Raji said. They’re also less open to supporting proposed laws “that may be necessary but could effectively jeopardize their bottom line,” she said.

DeepMind and OpenAI did not respond to requests for comment. Anthropic said founders Dario Amodei and Jack Clark would participate.

In a blog post, Microsoft said it looked forward to “the UK’s next steps in convening the summit, advancing its efforts in AI safety testing and supporting wider international cooperation on AI governance”.

The government insists that it has the right mix of participants from government, academia, civil society and business.

The Institute for Public Policy Research, a centre-left think tank in Britain, said it would be a “historic mistake” to leave the tech industry to regulate itself without government oversight.

“Regulators and the public are largely in the dark about how AI is being deployed across the economy,” said Carsten Jung, senior economist at the group. “But self-regulation didn’t work for social media companies, it didn’t work for finance, and it won’t work for AI.”
