US Government Making Efforts to Establish Crucial AI Standards for Secure Development
The Biden administration said Tuesday it is taking the first step toward writing key standards and guidelines for the safe deployment of generative artificial intelligence and how to test and secure the systems.
The Commerce Department’s National Institute of Standards and Technology (NIST) said it will seek public input until February 2nd to conduct key testing that is critical to ensuring the safety of artificial intelligence systems.
Commerce Secretary Gina Raimondo said the effort stemmed from President Joe Biden’s October executive order on artificial intelligence and seeks to develop “industry standards for the safety, security and trust in artificial intelligence that will enable America to continue to lead the world in the responsible development and use of this rapidly evolving technology.”
The agency develops guidelines for the evaluation of artificial intelligence, facilitates the development of standards and provides testing environments for the evaluation of artificial intelligence systems. The request asks AI companies and the public for input on managing AI risks and reducing the risks of false information generated by AI.
Generative AI — which can create text, photos and videos in response to open-ended prompts — has sparked excitement and fears in recent months that it could obviate some jobs, overturn elections and potentially win over people, with catastrophic consequences.
Biden’s order directed agencies to set standards for that testing and address the associated chemical, biological, radiation, nuclear and cybersecurity risks.
NIST is working to set guidelines for testing, including where a so-called “red team” would be most useful in assessing and managing AI risks, and to determine best practices for doing so.
External red teaming has been used for years in cyber security to identify new risks, and the term refers to US Cold War simulations where the enemy was referred to as a “red team”.
In August, the first public “red-teaming” evaluation event in the US was held during a major cybersecurity conference, organized by AI Village, SeedAI, Humane Intelligence.
Thousands of participants tried to determine whether they could cause the systems to produce undesired results or otherwise fail, with the goal of better understanding the risks these systems pose, the White House said.
The event “demonstrated how external red-teaming can be a powerful tool for identifying new AI risks,” it added.