From Microsoft to iris.ai, companies are realizing the need to eliminate AI hallucinations in order to build more accurate and trustworthy tools. (Pexels)

Tech Companies Tackle AI Chatbot Hallucination Issue

Undoubtedly, generative artificial intelligence (AI) has emerged as a groundbreaking technology, yet its true potential remains largely untapped. Like other technologies, it is expected to become increasingly capable and influential through ongoing research and integration with existing systems. However, a significant hurdle for AI researchers and the tech companies building AI tools is the issue of AI hallucination, which impedes widespread adoption and erodes user trust.

What is an AI hallucination?

AI hallucinations are cases where an AI chatbot gives a false or nonsensical answer to a question. Sometimes the hallucination is blatant: for example, Google Bard and Microsoft's Bing AI both recently and falsely claimed that a ceasefire had been reached in Israel's ongoing conflict with Hamas. At other times it is subtle enough that users without expert-level knowledge end up believing it.


The Root Cause of Hallucinations

AI hallucinations can occur in large language models (LLMs) for several reasons. One of the main culprits is the massive amount of unfiltered data used to train AI models. Because this information is drawn from fictional novels, unreliable websites and social media, it inevitably contains biased and incorrect material, and a model trained on such data can end up treating it as true.
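As a simplified, hypothetical sketch of the idea behind data curation (the sources and documents here are illustrative placeholders, not any company's actual pipeline), filtering can be as basic as keeping only documents from sources the curators trust before training:

    # Hypothetical sketch of source-based filtering for a training corpus.
    trusted_sources = {"textbook", "encyclopedia", "peer_reviewed_journal"}

    documents = [
        {"text": "Water boils at 100 °C at sea level.", "source": "textbook"},
        {"text": "This miracle cure works every time!", "source": "social_media"},
        {"text": "The dragon circled the burning city.", "source": "fiction"},
    ]

    # Keep only documents whose source is on the trusted list.
    training_corpus = [doc for doc in documents if doc["source"] in trusted_sources]
    print(training_corpus)  # only the textbook sentence survives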

Another problem lies in how the AI model processes and classifies data in response to a prompt, which often comes from users with little knowledge of how the AI works. Poor-quality prompts can produce poor-quality responses if the model is not built to handle them correctly, as the sketch below illustrates.
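As a hypothetical illustration (the function and wording below are this article's own sketch, not a technique attributed to any company mentioned here), one common way to reduce prompt-related hallucinations is to ground the model in supplied source text and explicitly allow it to admit when the answer is not there:

    def build_grounded_prompt(question: str, source_text: str) -> str:
        """Wrap a user question in an instruction that grounds the model in the
        supplied text instead of its own memorized (possibly wrong) knowledge."""
        return (
            "Answer the question using only the source text below. "
            "If the source text does not contain the answer, reply 'I don't know.'\n\n"
            f"Source text:\n{source_text}\n\n"
            f"Question: {question}"
        )

    # Example usage with placeholder text.
    print(build_grounded_prompt(
        question="Has a ceasefire been announced?",
        source_text="<paste a trusted, up-to-date news report here>",
    ))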

What are companies doing to solve the AI hallucination bottleneck?

Every new technology arrives with its own set of problems, and in this regard AI is no different. What sets it apart is the speed of its adoption. Normally, a technology is not rolled out until the loose screws have been tightened, but given the enormous popularity of AI since OpenAI launched ChatGPT in November 2022, companies did not want to miss out on the hype and rushed to get their products to market as soon as possible.

But now many companies are realizing the mistake and are working on building more reliable generative AI chatbots. Microsoft is one of them. In September, it announced its Phi-1.5 model, which is trained on "textbook-quality" data rather than traditional online data, to ensure that the input data is free of inaccuracies.
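For readers who want to experiment, Phi-1.5 is published on the Hugging Face Hub as microsoft/phi-1_5. The snippet below is a minimal sketch of loading it with the transformers library; the prompt is just an illustrative placeholder, and Phi-1.5 is a base model rather than an instruction-tuned chatbot.

    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "microsoft/phi-1_5"  # Microsoft's Phi-1.5 on the Hugging Face Hub
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    # Illustrative prompt for a base (non-chat) model.
    prompt = "Write a short explanation of why language models sometimes state false facts."
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=80)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))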

Another solution has been proposed by an Oslo-based startup, iris.ai. The company's CTO, Victor Botev, recently told TheNextWeb that one way to address AI hallucinations is to train a model on a coding language. Botev believes that since human-written text is prone to error, a coding language is a better option because it is based on logic and leaves very little room for interpretation, giving LLMs a structured way to combat inaccuracies.

The technology is still in its early stages, and as researchers and tech companies gain a deeper understanding of AI tools, more effective solutions should emerge to make AI more accurate and reliable for the general public.
