Meta Vice President and Chief AI Scientist Yann LeCun speaks at the Vivatech show in Paris, France, on June 14, 2023. (AP)

Tech Giants Lobby Regulators Over Open-Source vs. Closed AI Futures

While tech leaders have been advocating for the regulation of artificial intelligence, they are also actively lobbying to ensure that the forthcoming regulations are advantageous to their interests.

That doesn’t mean they all want the same thing.

Facebook parent Meta and IBM on Tuesday launched a new group called the AI Alliance, which advocates an “open science” approach to artificial intelligence development, putting them at odds with rivals Google, Microsoft and ChatGPT maker OpenAI.

These two divergent camps—open and closed—disagree on whether AI should be built in a way that makes the underlying technology widely available. Security is at the heart of the conversation, but so is who benefits from the development of artificial intelligence.

Open advocates favor an approach that is not “proprietary and closed,” said Darío Gil, director of research at IBM. “So it’s not like a thing that is locked in a barrel and no one knows what it is.”

WHAT IS OPEN SOURCE AI?

The term “open source” comes from the decades-old practice of building software in which the code is free or widely accessible for anyone to examine, modify and build upon.

Open source AI involves more than just code, and computer scientists differ on how to define it, depending on which components of the technology are publicly available and whether there are restrictions on its use. Some use the term “open science” to describe the broader philosophy.

The AI Alliance — led by IBM and Meta and including Dell, Sony, chipmakers AMD and Intel, as well as several universities and AI startups — “comes together to express, simply put, that the future of AI will fundamentally be built on open scientific exchange of ideas and open innovation, including open source and open technology,” Gil said in an interview with The Associated Press ahead of its launch.

Part of the confusion about open source AI is that, despite its name, OpenAI—the company behind ChatGPT and the DALL-E image generator—builds AI systems that are decidedly closed.

“It’s clear that there are short-term and commercial incentives to oppose open source,” said Ilya Sutskever, OpenAI’s co-founder and chief scientist, in a video interview hosted by Stanford University in April. But he also pointed to a longer-term concern: the potential for an AI system so “mind-bogglingly powerful” that it would be too dangerous to make publicly available.

To make his case for the dangers of open source, Sutskever posited a hypothetical AI system that had learned how to set up its own biological laboratory.

IS IT DANGEROUS?

Even current AI models pose risks and could be used, for example, to launch disinformation campaigns to disrupt democratic elections, said David Evan Harris, a researcher at the University of California, Berkeley.

“Open source is really great in so many dimensions of technology,” but artificial intelligence is different, Harris said.

“Anyone who has seen the movie ‘Oppenheimer’ knows this, that when big scientific discoveries are made, there are many reasons to think twice about how widely to share the details of all that information in ways that could fall into the wrong hands,” he said.

The Center for Humane Technology, a longtime critic of Meta’s social media practices, is among the groups drawing attention to the risks of open-source or leaked AI models.

“As long as there are no guardrails in place right now, it’s completely irresponsible to release these models to the public,” said the group’s Camille Carlton.

IS IT FEARMONGERING?

There has been an increasingly public debate about the benefits or dangers of adopting an open source approach to the development of artificial intelligence.

Meta’s chief AI scientist, Yann LeCun, this fall took aim on social media at OpenAI, Google and the startup Anthropic for what he described as “massive corporate lobbying” to write the rules in a way that benefits their powerful AI models and could concentrate their power over the technology’s development. The three companies, together with OpenAI’s key partner Microsoft, have formed their own industry group called the Frontier Model Forum.

LeCun said on X, formerly Twitter, that he was concerned that other scientists’ fear of AI “doomsday scenarios” was giving ammunition to those who want to ban open source research and development.

“In a future where AI systems are poised to constitute the repository of all human knowledge and culture, we need the platforms to be open source and freely available so that everyone can contribute to them,” LeCun wrote. “Openness is the only way to make AI platforms reflect the entirety of human knowledge and culture.”

For IBM, an early backer of the open-source Linux operating system in the 1990s, the dispute feeds into a much longer competition that predates the AI boom.

“It’s kind of a classic form of regulatory capture, trying to instill fear of open source innovation,” said Chris Padilla, who heads IBM’s global government affairs team. “I mean, this has been Microsoft’s model for decades, right? They were always against open source programs that could compete with Windows or Office. They’re taking a similar approach here.”

WHAT ARE GOVERNMENTS DOING?

It was easy to miss the “open source” debate in the discussion surrounding US President Joe Biden’s sweeping executive order on AI.

Biden’s order described open models with the technical term “dual-use foundation models with widely available weights” and said they needed further study. Weights are the numerical parameters that determine how an AI model behaves.
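As a rough illustration only, the toy Python sketch below (with made-up numbers, not the parameters of any real model) shows what “weights” are in practice: a model is essentially a function whose behavior is fixed by stored numbers, so anyone who obtains those numbers can both run the model and alter what it does.

```python
# Toy "model": output = w1 * x + w2.
# Real foundation models work the same way in principle,
# just with billions of such numbers (the values here are invented).
weights = {"w1": 0.8, "w2": -0.2}

def predict(x, w):
    return w["w1"] * x + w["w2"]

print(predict(2.0, weights))  # 1.4

# Anyone holding published weights can change the model's behavior:
weights["w1"] = -5.0
print(predict(2.0, weights))  # -10.2
```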

When those weights are posted on the internet, “there can be substantial benefits to innovation, but also substantial security risks, such as the removal of safeguards within the model,” Biden’s order said. It gave US Commerce Secretary Gina Raimondo until July to talk to experts and come back with recommendations on how to manage the potential benefits and risks.

The European Union has less time to figure it out. In negotiations that ended Wednesday, officials working to finalize what would be the world’s leading set of artificial intelligence regulations are still debating several provisions, including one that could exempt certain “free and open-source AI components” from rules governing commercial models.
