An LLM that outputs incorrect information is said to be “hallucinating”, and there is now a growing research effort towards minimizing this effect. (Pixabay)

Humans and AI Experience Hallucinations Differently

Over the last six months, the introduction of highly proficient large language models (LLMs) like GPT-3.5 has generated significant attention. Nevertheless, users have grown less confident in these models as they have realized that, like humans, LLMs are prone to errors and far from infallible.

An LLM that produces false information is said to be “hallucinating”, and there is now a growing body of research aimed at minimizing this effect. But as we grapple with this task, it’s worth considering our own biases and hallucinations, and how these affect the accuracy of the LLMs we create.

By understanding the connection between AI’s tendency to hallucinate and our own, we can begin to create smarter AI systems that will ultimately help reduce human error.

How people hallucinate

It’s no secret that people make up information. Sometimes we do this on purpose and sometimes unintentionally. The latter is the result of cognitive biases or “heuristics”: mental shortcuts we develop through past experiences.

These shortcuts are often born out of necessity. At any given moment, we can only process a limited amount of the information flooding our senses and remember only a fraction of all the information we’ve ever been exposed to.

As such, our brains must use learned associations to fill in the gaps and quickly answer any question or puzzle we face. In other words, our brain guesses what the correct answer might be based on limited information. This is called “confabulation” and is an example of human bias.

Our biases can lead to poor judgment. Take automation bias, which is our tendency to prefer information produced by automated systems (such as ChatGPT) over information from non-automated sources. This bias can cause us to overlook mistakes and even act on false information.

Another relevant heuristic is the halo effect, in which our initial impression of something colors our subsequent interactions with it. There is also the fluency bias, which describes how we favor information presented in an easy-to-read manner.

The bottom line is that human thinking is often colored by our own cognitive biases and distortions, and these “hallucinatory” tendencies occur largely outside of our awareness.

How AI hallucinates

In the LLM context, hallucination is a different matter. An LLM is not trying to conserve limited mental resources to make sense of the world efficiently. “Hallucination” in this context simply describes a failed attempt to predict an appropriate response to an input.

Nevertheless, there is still some similarity between how humans and LLMs hallucinate, since LLMs are also trying to “fill in the gaps”.

LLMs generate an answer by predicting which word is most likely to occur next in a sequence, based on what has come before and the associations the system has learned during training.

Like humans, LLMs try to predict the most likely answer. Unlike humans, they do this without understanding what they are saying. This is how they can end up producing nonsense.
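To make that idea concrete, here is a minimal sketch of next-token prediction. It assumes the Hugging Face transformers library and the small, publicly available gpt2 checkpoint; both are illustrative choices, not the internals of any particular chatbot.

```python
# Minimal sketch: inspect a causal language model's next-token probabilities.
# Assumes the Hugging Face `transformers` library and the public "gpt2" model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of Australia is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence_length, vocab_size)

# Probability distribution over the next token, given everything so far.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id.item())!r}: {prob.item():.3f}")

# The model only ranks plausible continuations; nothing here checks whether
# the top-ranked token is factually correct.
```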

There are several reasons why LLMs hallucinate. One is that they are trained on data that is incomplete or flawed. Other factors include how the system is programmed to learn from this data, and how that programming is reinforced through further training with humans.

Better together

So if both humans and LLMs are prone to hallucinations (albeit for different reasons), which is easier to fix?

Fixing the training data and processes that underpin LLMs may seem easier than fixing ourselves. But this ignores the human factors that influence AI systems (and is an example of yet another human bias known as the fundamental attribution error).

The reality is that our failures and our technology’s shortcomings are inextricably linked, so fixing one helps fix the other. Here are a few ways we can do this.

Responsible data management. AI bias often results from biased or limited training data. Ways to address this include ensuring the diversity and representativeness of the training data, building bias-aware algorithms, and implementing techniques such as data balancing to remove skewed or discriminatory patterns (a sketch of this last technique follows below).
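As an illustration of data balancing, here is a minimal sketch using scikit-learn’s class reweighting; the toy labels and the choice of library are assumptions made for the example, not a prescription.

```python
# Minimal sketch of one balancing technique: reweighting under-represented classes.
# The toy labels and the use of scikit-learn are illustrative assumptions.
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

labels = np.array(["approve", "approve", "approve", "deny"])  # skewed toy data

weights = compute_class_weight(class_weight="balanced",
                               classes=np.unique(labels), y=labels)
print(dict(zip(np.unique(labels), weights)))
# Minority classes receive larger weights, so a downstream model is not
# rewarded for simply echoing the majority pattern in its training data.
```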

Transparency and explainable AI. Even with the above measures, AI biases may remain and can be difficult to detect. By studying how biases enter and propagate through a system, we can better explain their presence in its outputs. This is the basis of “explainable AI”, which aims to make the decision-making processes of AI systems more transparent.
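One common explainability technique is permutation importance, which measures how much a model’s accuracy drops when each input feature is shuffled. The sketch below uses scikit-learn and one of its built-in toy datasets; these are illustrative assumptions rather than a fixed recipe.

```python
# Minimal sketch of permutation importance on a toy dataset (illustrative only).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Features whose shuffling hurts accuracy the most are driving the model's
# decisions, which helps surface where unwanted patterns enter the outputs.
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda pair: -pair[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```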

Putting public interests first. Recognizing, managing, and learning from AI biases requires human accountability and the integration of human values into AI systems. Achieving this means ensuring that stakeholders represent people from different backgrounds, cultures, and perspectives.

By working in this way, we can build smarter AI systems that can help keep our hallucinations in check.

In healthcare, for example, artificial intelligence is being used to analyze human decisions. These machine learning systems detect inconsistencies in human data and provide prompts that bring them to the doctor’s attention. In this way, diagnostic decisions can be improved while human responsibility is maintained.
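As a rough illustration of how such a prompt might be generated, the sketch below flags unusual records with an isolation forest; the feature values, contamination rate, and use of scikit-learn are hypothetical choices for the example.

```python
# Illustrative sketch only: flagging unusual patient records for clinician review.
# The toy features, contamination rate, and library choice are assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy records: [age, systolic blood pressure, heart rate]
records = np.array([
    [54, 128, 72],
    [61, 135, 70],
    [47, 122, 68],
    [58, 130, 74],
    [52, 240, 180],  # inconsistent entry that a busy reviewer might miss
])

detector = IsolationForest(contamination=0.2, random_state=0).fit(records)
flags = detector.predict(records)  # -1 marks a likely anomaly

for record, flag in zip(records, flags):
    if flag == -1:
        print(f"Review suggested for record: {record.tolist()}")

# The system only surfaces the anomaly; the diagnostic decision stays with the doctor.
```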

In the context of social media, AI is being used to help train human moderators as they try to identify abuse, for example through the Troll Patrol project, which aims to tackle online violence against women.

In another example, combining artificial intelligence and satellite imagery can help researchers analyze differences in nighttime lighting across regions and use this as an indicator of relative poverty (more lighting correlates with less poverty).

Importantly, while we do essential work to improve the accuracy of LLMs, we should not ignore how their current fallibility mirrors ours.
