
What are AI hallucinations?

AI hallucination is a phenomenon in which large language models (LLMs), such as the ones behind chatbots, generate false information yet present it as if it were factual. Popular LLMs include GPT-3.5, GPT-4, PaLM, Cohere's models, and Claude; they sit behind tools such as ChatGPT, Google Bard, Bing Chat, Copilot, and Amazon CodeWhisperer, alongside related generative models like Llama, Imagen, Titan, and BLOOM. LLMs are AI models trained on massive amounts of data, including text, code, images, audio, and video, to learn how to generate outputs, most commonly text.

What causes LLMs to hallucinate?

Some studies have suggested that LLMs lack reasoning skills, but recent research from Cornell University has shown that they can reason remarkably well on complex tasks. Their real weakness is a lack of up-to-date knowledge, which undermines that reasoning. When a flawed reasoning process leads a model to an inaccurate output, the model is hallucinating.

Using that output without analyzing it first can become a serious problem. Just last month, Google Bard, one of the most popular chatbots, was caught hallucinating: it gave out false information about a ceasefire during the Israel-Hamas conflict and even predicted a death toll. Users who are skeptical enough to verify AI output before relying on it have a clear advantage.

Other common causes of hallucinations are listed below (a toy sketch follows the list):

  • Learning patterns or features from the training data inaccurately
  • Training data that is low in quality, outdated, biased, inaccurate, or insufficient
  • Prompts containing unfamiliar language, such as idioms or slang
  • Training on limited datasets, leaving the model unable to generalize to new data
  • Insufficient programming
  • High complexity of the model
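
The first and fourth points are easiest to see with a toy example. Below is a minimal, hypothetical sketch in Python, nothing like a real LLM in scale: a bigram model built on a handful of made-up sentences. The corpus, sentences, and function names are illustrative assumptions, not anything drawn from a real system. Because the model strings words together purely from patterns it has memorized, it can produce fluent sentences asserting things that none of its training sentences ever said.

# A minimal, hypothetical sketch (not a real LLM): a toy bigram model that
# picks each next word from word-pair frequencies observed in a tiny,
# made-up corpus. Because it only reproduces learned patterns, it can emit
# fluent sentences that state things no training sentence ever said,
# which is a simplified picture of pattern-driven hallucination.

import random
from collections import defaultdict

corpus = [
    "the ceasefire was announced on monday",
    "the ceasefire was rejected by officials",
    "the death toll was announced by officials",
    "officials announced the report on monday",
]

# Count which words follow which (the "patterns learned from training data").
transitions = defaultdict(list)
for sentence in corpus:
    words = sentence.split()
    for current_word, next_word in zip(words, words[1:]):
        transitions[current_word].append(next_word)

def generate(start_word, max_words=8, seed=None):
    """Sample a sentence one word at a time from the bigram counts."""
    rng = random.Random(seed)
    words = [start_word]
    for _ in range(max_words - 1):
        candidates = transitions.get(words[-1])
        if not candidates:
            break
        words.append(rng.choice(candidates))
    return " ".join(words)

# Each sample is grammatical because it follows observed word patterns,
# yet it may recombine fragments into claims absent from the corpus
# (for example, pairing "the death toll" with "was rejected by officials").
for i in range(3):
    print(generate("the", seed=i))

Real LLMs are vastly larger and more capable, but the underlying failure mode is similar: when the statistically likely continuation diverges from the factually correct one, the model can still produce the fluent-sounding option with full confidence.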
