Sam Altman surprises audience by saying he trusts ChatGPT the least
Generative AI is transforming the world and has already made a significant impact on numerous industries, and its influence is likely to grow as the technology advances. OpenAI's public release of ChatGPT in November 2022 marked the start of this groundbreaking wave. However, even the company's co-founder and CEO, Sam Altman, is skeptical of its accuracy. During a session at the Indraprastha Institute of Information Technology in Delhi, Altman humorously remarked that he probably trusts ChatGPT's answers less than anyone else on Earth.
Sam Altman discusses AI hallucinations
Anyone who works with generative AI is familiar with a critical problem known as AI hallucination: the model gives a confident answer that is not justified by its data, because that data is insufficient, biased, or inaccurate.
This is especially troubling because generative AI is often used to create content such as news articles and analysis pieces, where hallucinated facts can do real damage. ChatGPT is not free of the problem, which is why Altman delivered that line as a joke.
But he also addressed the issue seriously. When asked about the hallucination problem in ChatGPT and other GPT-based models, he said, “The problem is real. And we’re working on improving it. It takes us about a year to finalize the model. It’s a balance between creativity and accuracy, and we try to minimize the latter.”
He also addressed the challenge of making artificial intelligence safe. Explaining what OpenAI is doing to ensure the creation of safe and responsible AI, Altman said: “There is no one-size-fits-all solution to securing AI. We improve the algorithm, conduct audits, strive to maintain strict parameters, use filters and more to design safe AI.”
Notably, India was the first stop on Altman's tour of six countries. He will also visit Israel, Jordan, Qatar, the United Arab Emirates, and South Korea this week.