AI Safety Group Established by Anthropic, Google, Microsoft and OpenAI
It is widely known that AI development poses significant safety risks. While governing bodies are working to establish regulations, for now it falls primarily to companies to take the necessary precautions. In a recent show of self-regulation, Anthropic, Google, Microsoft, and OpenAI have come together to establish the Frontier Model Forum, an industry-led organization focused on promoting the safe and responsible development of AI, particularly frontier models. These frontier models, defined as large-scale machine-learning models that exceed the capabilities of today's most advanced systems and can perform a wide variety of tasks, are the forum's primary concern.
The forum plans to establish an advisory committee, a charter, and funding. It has defined the core pillars it will focus on: advancing AI safety research, identifying best practices, working closely with policymakers, researchers, civil society, and companies, and encouraging efforts to build AI that “can address society’s greatest challenges.”
Members are expected to work on the first three of these goals over the coming year. On the subject of membership, the announcement outlines the qualifications required to join, such as developing frontier models and demonstrating a clear commitment to making them safe. “It’s vital that AI companies, especially those working on the most powerful models, align on common ground and advance thoughtful and adaptable safety practices to ensure the broadest possible benefit of powerful AI tools,” Anna Makanju, vice president of global affairs at OpenAI, said in a statement. “This is urgent work, and this forum is well positioned to act quickly to advance the state of AI safety.”
The forum’s establishment follows a recent safety agreement between the White House and leading AI companies, including those behind this new initiative. The measures committed to under that agreement include having external experts test models for flawed behavior and watermarking AI-generated content.