OpenAI, creator of ChatGPT, faces numerous legal obstacles beyond Elon Musk
After enjoying a year of worldwide recognition, the San Francisco-based company OpenAI is now facing numerous obstacles that could jeopardize its leading role in the field of artificial intelligence research.
Some of its controversy stems from decisions made well before ChatGPT’s debut, particularly its unusual transition from an idealistic nonprofit to a for-profit business backed by billions of dollars in investment.
It’s too early to say whether the claims brought by Elon Musk, The New York Times and bestselling authors such as John Grisham will stick, or how OpenAI and its lawyers will fare under increased scrutiny from government regulators.
Feud with Elon Musk
OpenAI is not waiting for the lawsuit to proceed before publicly defending itself against legal claims made by billionaire Elon Musk, an early backer of the company who now claims it has betrayed its founding mission of benefiting humanity rather than seeking profit.
In its first response since being sued by Tesla’s CEO last week, OpenAI vowed to get the claim dismissed, releasing emails from Musk that purported to show he supported making OpenAI a for-profit company and even suggested a merger with the electric vehicle maker.
Legal experts have expressed doubts about whether Musk’s claims, which center on an alleged breach of contract, will hold up in court. But the lawsuit has already forced the company to air internal disputes over its unusual governance structure, how “open” it should be about its research, and how to pursue artificial general intelligence, or AI systems that can perform as well as or better than humans across a wide variety of tasks.
Its own internal investigation
Much mystery remains about what prompted OpenAI to abruptly fire its co-founder and CEO Sam Altman in November, only to have him return days later with a new board replacing the one that ousted him. OpenAI tapped the law firm WilmerHale to investigate the events, but it’s unclear how broad the inquiry’s scope will be and to what extent OpenAI will make its findings public.
One of the big questions is what OpenAI, under its previous board, meant in November when it said Altman “was not consistently forthright in his communications” in a way that hindered the board’s ability to fulfill its responsibilities. Although OpenAI is now primarily a for-profit company, it is still governed by a nonprofit board charged with advancing its mission.
Investigators will likely look more closely at that structure and the internal conflicts that led to communication breakdowns, said Diane Rulke, a professor of organizational behavior and theory at Carnegie Mellon University.
Rulke said it would be “useful and very good practice” for OpenAI to make at least some of its findings public, especially given underlying concerns about how future AI technology will affect society.
“Not just because it was a big event, but because OpenAI works with a lot of companies, and their impact is widespread,” Rulke said. “Even though they’re a privately held company, it’s very much in the public interest to know what happened at OpenAI.”
Government scrutiny
OpenAI’s close business ties with Microsoft have drawn scrutiny from U.S. and European competition authorities. Microsoft has invested billions of dollars in OpenAI and supplies the massive computing power the smaller company needs to build its AI models. In turn, the software giant has acquired exclusive rights to use much of that technology in Microsoft products.
Unlike a merger of large companies, such partnerships do not automatically trigger a government review. But the Federal Trade Commission wants to know whether such arrangements “allow dominant companies to exercise undue influence or gain preferential access in ways that may impair fair competition,” FTC Chairwoman Lina Khan said in January.
The FTC is awaiting responses to “mandatory orders” it sent to both companies, as well as to OpenAI rival Anthropic and its cloud computing backers, Amazon and Google, requiring them to provide information about the partnerships and the decision-making behind them. The companies’ responses are expected next week. Similar inquiries are underway in the European Union and the United Kingdom.
Copyright lawsuits
Top authors, nonfiction writers, The New York Times and other media outlets have sued OpenAI over allegations that the company violated copyright law in building the large AI language models that underpin ChatGPT. Several of the lawsuits also target Microsoft. (The Associated Press took a different approach, securing an agreement last year that gives OpenAI access to AP’s text archive for an undisclosed fee.)
OpenAI has argued that its practice of training artificial intelligence models on vast amounts of writing found on the internet is protected by the “fair use” doctrine of copyright law. Federal judges in New York and San Francisco must now sort through evidence brought by numerous plaintiffs, including Grisham, comedian Sarah Silverman and “Game of Thrones” author George R.R. Martin.
The stakes are high. The Times, for example, is asking a judge to order OpenAI to “destroy” all of its major GPT language models, the foundation of ChatGPT and the bulk of OpenAI’s business, if they were trained on its news articles.