FTC Launches Probe into OpenAI, Developer of ChatGPT
US regulators are intensifying their efforts to rein in generative AI. According to The Washington Post, the Federal Trade Commission (FTC) has opened a probe into OpenAI, the developer of ChatGPT and DALL-E. The FTC has asked for documentation showing how OpenAI addresses the potential risks of its large language models. The concern is that the company might be engaging in practices that are “unfair or deceptive,” potentially compromising the public’s privacy, security, or reputation.
The commission is particularly interested in data related to the bug that leaked sensitive ChatGPT user data, such as payment details and chat history. Although OpenAI said the number of affected users was very small, the FTC is concerned the incident reflects poor data security practices. The agency also wants data on complaints alleging that the AI has made false or harmful statements about individuals, as well as data on how well users understand the accuracy of the products they use.
We have asked OpenAI for comment. The FTC declined to comment, as it typically does with ongoing investigations, but it has previously warned that generative AI could run afoul of the law by doing consumers more harm than good. The technology could be used for scams or misleading marketing campaigns, for example, or could enable discriminatory advertising. If the agency finds that a company has broken the rules, it can impose fines or issue consent decrees mandating certain practices.
AI-specific laws and regulations are not expected in the near future. Nevertheless, the government has increased pressure on the technology industry. OpenAI CEO Sam Altman testified before the Senate in May, where he defended his company by outlining privacy and security measures while touting the purported benefits of artificial intelligence. He said that safeguards were in place, but that OpenAI would be “increasingly cautious” and would continue to update its safeguards.
It’s not clear whether the FTC is targeting other generative AI developers such as Google and Anthropic. However, the OpenAI probe shows how the commission could approach other cases, and it signals that the regulator is serious about monitoring AI developers.