OpenAI Launches Initiative to Mitigate Potential Dangers of Artificial Intelligence
OpenAI is launching a new initiative to address growing concerns about artificial intelligence, dedicating a team to mitigating the potential risks posed by the technology's rapidly advancing capabilities.
In a blog post Thursday, the company, best known for its popular chatbot ChatGPT, said it has formed a “preparedness” team led by Aleksander Madry, who joined OpenAI while on leave from a faculty position at the Massachusetts Institute of Technology. The group will analyze and work to guard against potential “catastrophic risks” of AI systems, ranging from cybersecurity issues to chemical, nuclear and biological threats.
The team is also developing a policy designed to help the company determine how to mitigate the risks that may arise from developing so-called “frontier models” — next-generation AI technology more powerful than what is in use today.
OpenAI’s long-stated goal is to build artificial general intelligence, or AGI: AI that can perform a wide range of tasks better than humans. While current AI systems don’t fit that description, the company said it needs to “make sure we have the understanding and infrastructure needed for the safety of highly capable AI systems.”