The organization is facing an FTC investigation.

OpenAI’s Trust and Safety Lead Departs

Dave Willner, OpenAI’s trust and safety lead, has stepped down from his position, as confirmed in a post on LinkedIn. Although he will continue to serve in an advisory capacity, Willner has encouraged his LinkedIn followers to contact him about relevant opportunities. He explained that his decision to leave was driven by a desire to spend more time with his family. While that reasoning is often a cliché, Willner offered specific details to back it up.

“In the months since the launch of ChatGPT, I have found it increasingly difficult to keep up my end of the bargain,” he writes. “OpenAI is going through an intense phase of development – and so are our children. Anyone with young children and a very intense job can relate to this tension.”

Still, he says he’s “proud of everything” the company accomplished during his tenure, calling it “one of the coolest and most interesting jobs” in the world.

Of course, this transition comes hot on the heels of legal hurdles facing OpenAI and its signature product, ChatGPT. The FTC recently opened an investigation into the company over suspicions that it is violating consumer protection laws and engaging in “unfair or deceptive” practices that could harm the public’s privacy and safety. The investigation also concerns a bug that leaked users’ private information, which certainly seems to fall within the realm of trust and safety.

Willner says his decision was actually “a fairly easy choice, though not one that people in my position often make so clearly in public.” He also says he hopes his decision will help normalize a more open discussion about work-life balance.

AI safety has been a subject of concern in recent months, and OpenAI is one of the companies that agreed to put certain safeguards on its products at the behest of President Biden and the White House. These include giving independent experts access to code, reporting societal risks such as bias, sharing safety information with the administration, and watermarking audio and visual content so people know it was created by AI.
