No Immediate Prospect of Global Agreement at AI Summit
Despite hosting the inaugural AI safety summit, British Prime Minister Rishi Sunak’s efforts to secure significant agreements have not yet resulted in a comprehensive global plan for regulating artificial intelligence technology.
Over the course of two days, technology leaders such as Elon Musk and OpenAI’s Sam Altman rubbed shoulders with the likes of U.S. Vice President Kamala Harris and European Commission President Ursula von der Leyen to discuss the future of AI regulation.
Leaders from 28 countries – including China – signed the Bletchley Declaration, a joint statement acknowledging the risks of the technology. The US and Britain both announced plans to establish their own AI safety institutes, and two further summits were announced for next year, in South Korea and France.
While consensus has been reached on the need for AI regulation, disagreements persist over how it should happen – and who will lead such efforts.
The risks associated with rapidly developing artificial intelligence have been an increasingly high priority for policymakers since the Microsoft-backed OpenAI released ChatGPT to the public last year.
The chatbot’s unprecedented ability to respond to prompts with human-like fluidity has led some experts to call for a halt to the development of such systems, warning that they could gain autonomy and threaten humanity.
Sunak said he was “privileged and excited” to host Tesla chief executive Musk, but European lawmakers warned against too much technology and data being concentrated in a small number of companies in one country, the United States.
“One single country with all the technologies, all the private companies, all the equipment and all the skills is a failure for all of us,” French Economy and Finance Minister Bruno Le Maire told reporters.
The UK has also diverged from the EU by proposing a light-touch approach to AI regulation, in contrast to the soon-to-be-finalised European AI Act, which subjects developers of “high-risk” applications to stricter controls.
“I came here to sell our AI law,” said Vera Jourova, Vice President of the European Commission.
Jourova said while she did not expect other countries to copy the bloc’s laws wholesale, some form of agreement on global rules was required.
“If the democratic world is not the rule makers and we become the rule takers, the battle will be lost,” she said.
Despite the projected image of unity, participants said the three major power blocs present – the US, the EU, and China – each tried to defend their dominant positions.
Some suggested that Harris had upstaged Sunak when the US government announced its own AI safety institute – just as the UK had done a week earlier – and she gave a speech in London that highlighted the short-term risks of the technology, in contrast to the summit’s focus on existential threats.
“It was fascinating that just as we announced our AI Safety Institute, the Americans announced theirs,” said attendee Nigel Toon, CEO of British AI company Graphcore.
China’s presence at the summit, and its decision to sign the Bletchley Declaration, was seen as a success for British officials.
China’s Vice Minister of Science and Technology, Wu Zhaohui, said the country is ready to work with all parties to manage artificial intelligence. However, hinting at the tension between China and the West, he told delegates: “Countries regardless of size and scale have equal rights to develop and use artificial intelligence.”
The Chinese minister participated in a ministerial round table on Thursday, his ministry said, but did not take part in public events on the second day.
A recurring theme in the closed-door discussions, highlighted by several attendees, was the potential risks of open-source AI, which gives the public free access to experiment with the code behind the technology.
Some experts have warned that terrorists could use open-source designs to create chemical weapons – or even build a superintelligence beyond human control.
Speaking with Sunak at a live event in London on Thursday, Musk said: “It’s going to get to the point where you’ve got open source AI that’s starting to approach or maybe surpass human-level intelligence. I don’t really know what to do about it.”
Yoshua Bengio, an AI pioneer appointed to lead a “state of the science” report commissioned as part of the Bletchley Declaration, told Reuters that the risks of open-source AI are a primary concern.
He said: “It can be put into the hands of bad actors, and it can be modified for malicious purposes. You can’t get an open source release of these powerful systems and still protect the public with the right guardrails.”