Sam Altman, CEO of OpenAI Inc., has warned about the risks posed by increasingly powerful artificial intelligence, acknowledging that his company regularly works with technology that could be used in harmful ways. In an interview at the Bloomberg Technology Summit in San Francisco, he said global regulation could address the biggest risks, but cautioned that it shouldn't be overdone.
OpenAI, maker of the wildly popular ChatGPT chatbot, is valued at more than $27 billion and is a leader among venture-backed AI companies. Asked whether he stands to benefit financially from OpenAI's success, Altman said, "I have enough money," adding that he is motivated by the technology's potential benefits.
“This notion of having enough money is not something that is easy to convey to other people,” he said.
The CEO also said that he wants to advance humanity's technological development with the help of artificial intelligence. "I think this is the most important step that humanity has to go through with technology," Altman added. "And I really care about that."
OpenAI is at the forefront of generative artificial intelligence, technology capable of producing text or images from just a few words of user prompting. The startup's products — including ChatGPT and the image generator Dall-E — have dazzled audiences. They've also helped spark a multibillion-dollar frenzy among venture capitalists and entrepreneurs racing to lay the foundation for a new era of technology.
To generate revenue, OpenAI gives companies access to the APIs needed to create their own apps that leverage its AI models. The company also sells access to a premium version of the chatbot called ChatGPT Plus. OpenAI does not publish data on total sales.
People familiar with the matter have said that Microsoft has invested a total of $13 billion in the company. Much of that money flows back to Microsoft as OpenAI pays to use its Azure cloud network to train and run its models.
The speed and power of the fast-growing AI industry have spurred governments and regulators to try to put guardrails on its development, an effort that Altman himself has championed.
Altman was one of the artificial intelligence experts who met with President Joe Biden this week in San Francisco. The CEO has traveled widely to speak about AI, including in Washington, D.C., where he told US senators that "if this technology goes wrong, it can go quite wrong."
Major AI companies, including Microsoft Corp. and Alphabet Inc.’s Google, have pledged to participate in an independent public review of their systems. But the US is also seeking more extensive regulation. The Commerce Department announced earlier this year that it is considering rules that could require AI models to go through a certification process before release.
Last month, Altman signed a brief statement, backed by more than 350 leaders and scientists, that said reducing the risk of AI-induced extinction should be a global priority, alongside other societal risks such as pandemics and nuclear war.
Despite dire warnings from tech leaders, some AI researchers argue that AI is not advanced enough to warrant fears that it will destroy humanity, and that focusing on doomsday scenarios is a distraction from more immediate issues like algorithmic bias, racism and the rampant proliferation of disinformation.
OpenAI's ChatGPT and Dall-E, both released last year, have inspired startups to incorporate AI into many industries, including financial services, consumer goods, healthcare and entertainment. Bloomberg Intelligence analyst Mandeep Singh estimates that the generative AI market could grow 42 percent a year, reaching $1.3 trillion by 2032.