The seven AI tech companies are Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI. All of them have agreed to implement voluntary AI safeguard measures. (AFP)

OpenAI CEO Sam Altman’s Vision for AI Regulation Realized as Major Tech Companies Follow Suit

Amid ChatGPT’s immense popularity in recent months, Sam Altman, CEO of OpenAI, the company behind the chatbot, has consistently expressed his strong desire for government-backed AI regulation. Altman went as far as touring different countries, urging their governments to take action. Now, his wishes seem to be coming true, as several major tech companies, together with US President Joe Biden, have pledged to implement adequate safeguards in the development of their AI tools. Influential figures such as AI pioneer Geoffrey Hinton and billionaire Elon Musk have voiced concerns that an unregulated AI environment could lead to the “destruction of humanity.”

This breakthrough in the artificial intelligence space came on Friday, when seven leading technology companies that work heavily with AI agreed to implement safeguards in their products. The seven companies are Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI. The announcement came directly from the White House, where US President Joe Biden gave a speech highlighting the areas in which these companies are building security measures. The agreement is considered the first major step toward regulatory compliance in artificial intelligence.

According to a report by The New York Times, the seven companies formally committed to new standards of trust, safety, and security during a meeting with Biden on Friday afternoon.

The President of the United States said: “Just two months ago, Kamala (Harris) and I met with all these leaders to emphasize responsibility and to make sure that the products they manufacture are safe and to publicize what they are and what they are not… Today I am pleased to announce that these seven companies have committed to voluntary responsible innovation.”

The US president highlights areas where safeguards are needed for artificial intelligence

Explaining that all AI innovation must seriously emphasize the three basic principles of safety, security, and trust, Biden spoke of the fundamental commitment the government was asking from the seven companies. He also highlighted four specific commitments he asked of them.

First, the companies were required to ensure the safety of the technology before making it public. This includes testing the capabilities of their systems, assessing potential risks, and making the results of these assessments public.

Second, the companies were asked to prioritize system security by protecting their models from cyber threats and managing associated risks.

Third, the companies were tasked with earning people’s trust and empowering users to make informed decisions by flagging content that has been altered or created by AI. This also includes eradicating prejudice and discrimination, strengthening privacy protections, and protecting children from harm.

Finally, the companies agreed to find ways for AI to address society’s biggest challenges, from cancer to climate change, and to invest in education and new jobs to help students and workers succeed.

“These commitments are real and tangible,” Biden added.
