OpenAI Strengthens Efforts to Stop AI From ‘Misbehaving’

OpenAI said on Wednesday that it plans to invest significant resources and form a new research team dedicated to keeping its artificial intelligence safe for humans, with the eventual aim of using AI to supervise itself.

“The enormous power of superintelligence could… lead to the incapacitation of humanity or even the extinction of humans,” OpenAI co-founder Ilya Sutskever and head of alignment Jan Leike wrote in a blog post. “Currently, we don’t have a solution to control or guide a potentially super-intelligent AI and stop it from going rogue.”

Superintelligent artificial intelligence — systems smarter than humans — could arrive this decade, the authors predicted. Controlling such systems will require better techniques than exist today, which is why breakthroughs are needed in so-called “alignment research,” the field focused on ensuring AI remains beneficial to humans, they wrote.

Microsoft-backed OpenAI will dedicate 20 percent of the computing power it has secured to this problem over the next four years, they wrote. The company is also forming a new team, the Superalignment team, to organize the effort.

The team’s goal is to build a “human-level” AI alignment researcher and then scale it with massive amounts of computing power. According to OpenAI, that means training AI systems using human feedback, training AI systems to assist in human evaluation, and eventually training AI systems to actually perform alignment research.

AI safety advocate Connor Leahy said the plan was fundamentally flawed because an initial human-level AI could run amok and wreak havoc before it could be compelled to solve AI safety problems.

“You have to solve alignment before you build human-level intelligence, otherwise you don’t control it by default,” he said in an interview. “I personally don’t think this is a particularly good or safe plan.”

The potential dangers of artificial intelligence have been on the minds of both AI researchers and the general public. In April, a group of AI industry leaders and experts signed an open letter calling for a six-month pause in the development of systems more powerful than OpenAI’s GPT-4, citing potential societal risks. According to a May Reuters/Ipsos poll, more than two-thirds of Americans are concerned about the potential negative effects of AI, and 61 percent believe it could threaten civilization.
