Know all about AI hallucination, its impact on generative AI chatbots, and the strategies companies are coming up with to battle it. (AFP)

AI Hallucination: What It Is and How Tech Companies Are Battling It

Generative artificial intelligence (AI) is a revolutionary technology that holds immense potential, and experts believe we have only just begun to tap into its capabilities. It is not only utilized as a standalone model, but also employed creatively in various AI tools, including AI chatbots. However, a significant obstacle in its implementation and acceptance is AI hallucination, a challenge that even industry giants like Google, Microsoft, and OpenAI have grappled with and continue to face. So, what exactly is AI hallucination, how does it affect AI chatbots, and how are technology companies navigating this hurdle? Let’s delve into the details.

What is an AI hallucination?

AI hallucinations are events in which an AI chatbot gives a wrong or nonsensical answer to a question. Sometimes the hallucination is blatant: for example, Google Bard and Microsoft’s Bing AI recently claimed, falsely, that a ceasefire had been reached in Israel’s ongoing conflict with Hamas. But sometimes it is subtle enough that users without expert-level knowledge end up believing it. Another example comes from Bard, which, when asked “What country in Africa begins with K?”, answered that “there are actually no countries in Africa that start with the letter K”, overlooking Kenya.

The root cause of AI hallucinations

AI hallucinations can occur in large language models (LLMs) for several reasons. One of the main culprits is the massive amount of unfiltered data fed to AI models during training. Because this information comes from fictional novels, unreliable websites and social media, it inevitably contains biased and incorrect information, and processing such data can lead an AI chatbot to treat it as true.

Another problem lies in how the AI model processes and classifies the data in response to a prompt, which often comes from users with little knowledge of how the AI works. Poor-quality prompts can produce poor-quality responses if the model is not built to handle them correctly, as the sketch below illustrates.
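As an illustration only, here is a rough sketch of a prompt-quality check an application might run before sending user input to a chatbot; the heuristics, thresholds and word list are invented for this example rather than taken from any vendor’s tooling.

```python
# Minimal sketch (hypothetical): flag overly vague prompts before they reach
# the model, so low-quality inputs are less likely to produce poor answers.

VAGUE_MARKERS = {"something", "stuff", "things", "whatever"}

def check_prompt_quality(prompt: str, min_words: int = 4) -> list[str]:
    """Return a list of warnings about a prompt; an empty list means it looks OK."""
    warnings = []
    words = prompt.strip().split()
    if len(words) < min_words:
        warnings.append("Prompt is very short; add context or constraints.")
    if any(w.lower().strip("?.,!") in VAGUE_MARKERS for w in words):
        warnings.append("Prompt contains vague wording; be specific about the topic.")
    if "?" not in prompt and len(words) < 8:
        warnings.append("Prompt is neither a question nor a detailed instruction.")
    return warnings

if __name__ == "__main__":
    print(check_prompt_quality("Tell me stuff"))  # flags length and vagueness
    print(check_prompt_quality("Which African countries start with the letter K?"))  # []
```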

How do technology companies approach the challenge?

There is currently no playbook for dealing with AI hallucinations. Each company is testing its own methods and systems to significantly reduce the occurrence of inaccuracies. Microsoft recently published an article on the subject, highlighting that “models that are pre-trained to be good enough predictors (i.e. calibrated) may require post-training to mitigate hallucinations on the type of arbitrary facts that tend to appear only once in the training set”.

However, there are certain things that both technology companies and the developers who rely on these tools can do to keep the problem contained. IBM recently published a detailed post on the problem of AI hallucinations, in which it lists six points to combat the challenge. These are as follows:

1. Using high-quality training data – IBM emphasizes: “To prevent hallucinations, make sure AI models are trained with rich, balanced and well-structured data”. Information scraped from the open internet typically contains misleading claims and inaccuracies, so filtering the training data helps reduce such cases (a rough filtering sketch follows this list).

2. Determining the purpose of the AI model – “Specifying the use of the AI model – as well as any limitations on the use of the model – will help reduce hallucinations. Your team or organization should define the responsibilities and limitations of the selected AI system; this will help the system perform tasks more efficiently and minimize irrelevant, ‘hallucinatory’ results,” states IBM.

3. Using data templates – Data templates give teams a predefined output format, increasing the chances that the AI model will produce results in line with the set guidelines. Relying on these templates keeps results consistent and reduces the risk of the model producing inaccurate output (a schema-validation sketch follows this list).

4. Limiting responses – AI models can hallucinate when there is no limit on the possible outcomes. To improve consistency and accuracy, it is recommended to constrain AI models using filtering tools or clear probability thresholds (see the confidence-threshold sketch after this list).

5. Continuous testing and refinement of the system – Thorough testing and continuous evaluation of the AI model are crucial to preventing hallucinations. These practices improve overall system performance and allow users to adjust or retrain the model as the data evolves over time (a minimal evaluation loop is sketched after this list).

6. Human oversight – Last but not least, IBM emphasizes human supervision of AI outputs as the best way to reduce the impact of AI biases.
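To make point 1 a little more concrete, here is a minimal, purely illustrative sketch of heuristic corpus filtering before training; the source list, rules and thresholds are invented for this example and do not reflect IBM’s or any vendor’s actual pipeline.

```python
# Illustrative only: drop obviously low-quality records from a training corpus
# before it is fed to a model. Real pipelines rely on far richer quality signals.

UNRELIABLE_SOURCES = {"random-forum.example", "fan-fiction.example"}  # hypothetical domains

def keep_record(record: dict) -> bool:
    """Return True if a raw training record passes basic quality checks."""
    text = record.get("text", "")
    source = record.get("source", "")
    if source in UNRELIABLE_SOURCES:   # skip sources known to be unreliable
        return False
    if len(text.split()) < 20:         # skip fragments with too little context
        return False
    if text.isupper():                 # skip shouting/spam-like content
        return False
    return True

def filter_corpus(records: list[dict]) -> list[dict]:
    return [r for r in records if keep_record(r)]

if __name__ == "__main__":
    corpus = [
        {"text": "A well-sourced paragraph about African geography. " * 10,
         "source": "encyclopedia.example"},
        {"text": "LOL", "source": "random-forum.example"},
    ]
    print(len(filter_corpus(corpus)))  # 1
```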
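For point 3, one common application-side approach is to ask the model for structured output and validate it against a predefined template before using it. The JSON schema and field names below are made up for this example.

```python
import json

# Hypothetical template: every answer must be a JSON object with exactly these fields.
ANSWER_TEMPLATE = {
    "answer": str,        # the model's answer text
    "sources": list,      # citations backing the answer
    "confidence": float,  # self-reported confidence between 0 and 1
}

def matches_template(raw_output: str) -> bool:
    """Return True only if the model output parses as JSON and fits the template."""
    try:
        data = json.loads(raw_output)
    except json.JSONDecodeError:
        return False
    if not isinstance(data, dict) or set(data) != set(ANSWER_TEMPLATE):
        return False
    return all(isinstance(data[key], expected) for key, expected in ANSWER_TEMPLATE.items())

if __name__ == "__main__":
    good = '{"answer": "Kenya", "sources": ["encyclopedia.example"], "confidence": 0.9}'
    bad = "There are actually no countries in Africa that start with the letter K."
    print(matches_template(good))  # True
    print(matches_template(bad))   # False
```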
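Point 4 can be approximated by checking how confident the model is in its own output, for instance via token log-probabilities where an API exposes them, and refusing to answer below a chosen threshold. The threshold value and the token_logprobs input below are assumptions for illustration, not any specific provider’s interface.

```python
import math

# Illustrative confidence gate: if the model's average token probability falls
# below a chosen threshold, return a fallback message instead of the answer.

def average_token_probability(token_logprobs: list[float]) -> float:
    """Convert per-token log-probabilities into an average probability."""
    if not token_logprobs:
        return 0.0
    return sum(math.exp(lp) for lp in token_logprobs) / len(token_logprobs)

def gate_response(answer: str, token_logprobs: list[float], threshold: float = 0.6) -> str:
    confidence = average_token_probability(token_logprobs)
    if confidence < threshold:
        return "I'm not confident enough to answer that reliably."
    return answer

if __name__ == "__main__":
    confident_logprobs = [-0.05, -0.10, -0.02]  # high-probability tokens
    uncertain_logprobs = [-1.80, -2.50, -1.20]  # low-probability tokens
    print(gate_response("Kenya begins with the letter K.", confident_logprobs))
    print(gate_response("There are no such countries.", uncertain_logprobs))
```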
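Point 5 is often realized as a regression-style evaluation loop: keep a small set of questions with known answers and re-score the model whenever the data or the model changes. The tiny reference set and the ask_model stand-in below are placeholders for this sketch, not a real chatbot integration.

```python
# Illustrative evaluation harness: score a model against reference answers and
# flag when accuracy drops, signalling that adjustment or retraining is needed.

REFERENCE_SET = [
    {"question": "Which African country begins with the letter K?", "expected": "kenya"},
    {"question": "Name a country in East Africa on the equator.", "expected": "kenya"},
]

def ask_model(question: str) -> str:
    """Stand-in for a real model call; replace with your chatbot's API."""
    return "Kenya"  # placeholder response for the sketch

def evaluate(min_accuracy: float = 0.9) -> bool:
    correct = sum(
        1 for item in REFERENCE_SET
        if item["expected"] in ask_model(item["question"]).lower()
    )
    accuracy = correct / len(REFERENCE_SET)
    print(f"accuracy: {accuracy:.2f}")
    return accuracy >= min_accuracy

if __name__ == "__main__":
    if not evaluate():
        print("Accuracy below threshold - review or retrain the model.")
```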

Currently, this is an ongoing challenge that is unlikely to be resolved simply by changing the algorithm or structure of LLMs. The solution is expected to come when the technology itself matures and such problems can be understood on a deeper level.
