What are LLM Hallucinations?

LLM hallucinations are instances in which machine learning models, particularly large language models (LLMs) like GPT-3 or GPT-4, produce outputs that are coherent and grammatically correct but factually incorrect or nonsensical. “Hallucination” in this context refers to the generation of false or misleading information. These hallucinations can occur due to various factors, such as limitations in the training data, biases in the model, or the inherent complexity of language.

The example below is amusing and mostly harmless. However, LLM hallucinations are particularly concerning in fields that require high levels of accuracy and have a significant impact on people’s lives, such as healthcare, law, or engineering.

Therefore, it’s essential to cross-reference the information provided by LLMs.

[Image: An LLM hallucination in action]

What Causes LLM Hallucinations?

LLM hallucinations can be caused by a variety of factors. These include:

  • Incomplete or Noisy Training Data – Training data that is incomplete, outdated, irrelevant, or inaccurate can leave gaps or errors in the model’s understanding, which then surface in the generated results.
  • Vague Questions – If the input question or prompt is ambiguous, the model might generate a response based on what it considers the most likely interpretation, which may not align with the user’s intent.
  • Overfitting and Underfitting – Overfitting the training data can make the model too specific, whereas underfitting can make it too general, both of which can lead to hallucinations.
  • Inherent Biases – Models can inherit biases present in the training data, leading them to make assumptions that could result in hallucinations.
  • Absence of Grounding – Unlike humans, these models don’t have real-world experiences or the ability to access real-time data, which limits their understanding and can cause errors.
  • Semantic Gaps – While LLMs are good at pattern recognition, they often lack “common sense” reasoning abilities, which can also contribute to hallucinations.

How Can LLM Hallucinations Be Prevented?

Preventing hallucinations in LLMs like GPT-3 or GPT-4 is an important task. However, it is not an easy one. Here are some strategies that can help:

  • Curated Datasets – Use high-quality, verified datasets for fine-tuning the model. The more accurate the training data, the less likely the model is to generate hallucinations.
  • Output Filtering – Implement mechanisms to filter or flag potentially incorrect or hallucinated outputs based on criteria such as statistical likelihood or adherence to a domain-specific rule set (a minimal sketch follows this list).
  • Feedback Mechanism – Establish a real-time user feedback system, similar to reinforcement learning from human feedback. If the model produces a hallucination, users can flag the incorrect information, which can then be used for further fine-tuning.
  • Iterative Fine-Tuning – Continuously update the model by fine-tuning it with a more recent and accurate dataset.
  • Ongoing Monitoring – Continuously monitor the model’s performance to catch any new types of hallucinations that may emerge.
  • Cross-Reference – For critical applications, cross-reference the model’s outputs with verified information sources.
  • Domain-Specific Training – In fields like healthcare or law, involving experts in the fine-tuning process can help the model learn the subtleties and complexities of the domain, reducing the likelihood of generating incorrect information.
  • Scope Definition – Clearly define the scope of tasks that the model is designed to assist with and caution users against relying on it for tasks outside that scope.
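
To make the output-filtering idea above more concrete, here is a minimal sketch in Python that flags an answer for human review when too little of it overlaps with a trusted reference text. The word-overlap heuristic, the 0.6 threshold, and the example strings are illustrative assumptions rather than a production-grade check.

```python
# Minimal sketch of output filtering: flag a generated answer when too little
# of it is supported by a trusted reference text. The word-overlap heuristic
# and the 0.6 threshold are illustrative assumptions, not a production check.

def support_score(answer: str, reference: str) -> float:
    """Fraction of words in the answer that also appear in the reference text."""
    answer_words = set(answer.lower().split())
    reference_words = set(reference.lower().split())
    if not answer_words:
        return 0.0
    return len(answer_words & reference_words) / len(answer_words)


def filter_output(answer: str, reference: str, threshold: float = 0.6) -> dict:
    """Return the answer with a flag that routes low-support outputs to human review."""
    score = support_score(answer, reference)
    return {"answer": answer, "support_score": round(score, 2), "flagged": score < threshold}


if __name__ == "__main__":
    reference = "The Eiffel Tower was completed in 1889 and stands in Paris, France."
    answer = "The Eiffel Tower was finished in 1925 and is located in Lyon, Italy."  # hallucinated details
    print(filter_output(answer, reference))
```

In practice, a crude overlap score like this is typically replaced by stronger checks, such as entailment models or retrieval-based verification against the cross-referenced sources mentioned above.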

It’s important to remember that hallucinations cannot be completely eliminated. Therefore, it’s essential to be aware of these limitations and to have a human review and cross-reference the results when using LLMs for critical applications.

Why Is It Important to Prevent LLM Hallucinations? What Are the Ethical Concerns?

LLM hallucinations raise ethical concerns, since they can have a substantial impact on individuals’ well-being and on society’s overall health and stability. Some key concerns include:

  • Spreading False Information – If an LLM produces a hallucinated answer that is factually incorrect but appears plausible, it can contribute to the spread of misinformation. This is especially problematic in contexts like news, healthcare, and legal advice.
  • Impaired Judgment – People often use LLMs for supporting their decision-making processes. Hallucinations can lead to erroneous decisions with potentially severe consequences, especially in critical fields like medicine, finance and public policy.
  • Trust and Reliability – Frequent hallucinations can erode users’ trust, not just in the specific model but also in AI technologies more broadly. This can impede their adoption and utility.
  • LLM Bias – LLMs can perpetuate harmful stereotypes and social stigmas when hallucinations are rooted in biases in the training data. This can encourage discrimination and violence.

LLM Hallucinations and Tokens

Tokens, in the context of LLMs, are the smallest units of text that the model processes. Words or parts of words are broken down into tokens, which the model uses to understand and generate text. Each token is a piece of the puzzle that the model assembles into coherent and contextually appropriate responses.

However, the number of tokens a model can process in one prompt is limited, which is why complex topics or longer texts sometimes need to be broken down into smaller chunks. This truncation can cause the model to lose crucial details, increasing the chances of inconsistent or hallucinated responses. To mitigate these risks, it’s important to design prompts carefully and to understand how tokenization and context-length limits fragment longer inputs.
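
As a rough illustration, the sketch below counts tokens and splits a long text into token-bounded chunks using the open-source tiktoken tokenizer (assuming it is installed with pip install tiktoken). The cl100k_base encoding and the 512-token chunk size are illustrative choices, not recommendations.

```python
# Sketch of token counting and chunking with the open-source tiktoken library
# (pip install tiktoken). The cl100k_base encoding and the 512-token chunk
# size are illustrative choices; use whatever matches your model.
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")


def count_tokens(text: str) -> int:
    """Number of tokens the model would see for this text."""
    return len(encoding.encode(text))


def chunk_text(text: str, max_tokens: int = 512) -> list[str]:
    """Split a long text into pieces that each fit within max_tokens."""
    tokens = encoding.encode(text)
    return [
        encoding.decode(tokens[i : i + max_tokens])
        for i in range(0, len(tokens), max_tokens)
    ]


print(count_tokens("LLM hallucinations are plausible but incorrect outputs."))
print(len(chunk_text("A long document would go here. " * 200)))
```

Naive chunking on raw token boundaries can split sentences mid-way, so in practice chunk boundaries are often aligned with sentences or paragraphs to keep enough context in each piece.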

LLM Hallucinations and MLOps

Organizations can improve the reliability and trustworthiness of their LLMs by managing them within the MLOps pipeline, a practice also known as LLMOps. LLMOps best practices include:

  • Monitoring – Continuous monitoring of the model’s performance can help detect hallucinations in real time.
  • Tracking Hallucination-prone Versions – MLOps tools often include version control for ML models. If a particular version is found to be prone to hallucinations, it can be rolled back or adjusted.
  • Automated Data Validation – MLOps practices include automated checks for data quality. Improving the quality of training and fine-tuning data can mitigate the risks of hallucinations.
  • Feedback Loops – An MLOps pipeline can include mechanisms for collecting and integrating user feedback. This feedback can be invaluable for identifying and understanding the types of queries or contexts that are most likely to induce hallucinations.
  • Automated Retraining – Based on the detection of hallucinations and the collection of new, corrected data, MLOps can facilitate the automated retraining of the model to improve accuracy over time.
  • Custom Metrics for Hallucination – Traditional metrics like accuracy, precision, and recall might not capture a model’s tendency to hallucinate. MLOps allows for the implementation of custom evaluation metrics that can (a minimal sketch follows this list).
  • Accountability and Compliance – MLOps ensures that there’s an audit trail for model behavior. In regulated industries, knowing when and why a model hallucinated can be important for meeting compliance requirements.
  • Scaling Safeguards – As ML models are scaled across different parts of an organization or for various applications, MLOps can help ensure that safeguards against hallucinations are consistently applied.
  • Collaboration – MLOps encourages collaboration between data scientists, ML engineers, and domain experts. Working together can help address hallucinations when they occur.
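
As a minimal sketch of the custom-metric idea above, the snippet below computes a hallucination rate over a batch of evaluated responses and raises an alert when it crosses a threshold. The EvaluatedResponse fields, the 5% threshold, and the toy batch are assumptions for illustration; in a real pipeline the per-response flag would come from human review or an automated groundedness check.

```python
# Minimal sketch of a custom hallucination metric for monitoring. The field
# names, the 5% alert threshold, and the toy batch are illustrative; the
# per-response flag would come from human review or an automated check.
from dataclasses import dataclass


@dataclass
class EvaluatedResponse:
    prompt: str
    answer: str
    is_hallucination: bool  # set by reviewers or an automated groundedness check


def hallucination_rate(responses: list[EvaluatedResponse]) -> float:
    """Share of evaluated responses that were flagged as hallucinations."""
    if not responses:
        return 0.0
    return sum(r.is_hallucination for r in responses) / len(responses)


def should_alert(responses: list[EvaluatedResponse], threshold: float = 0.05) -> bool:
    """Signal that the serving model needs attention (e.g. rollback or retraining)."""
    return hallucination_rate(responses) > threshold


batch = [
    EvaluatedResponse("Who wrote Hamlet?", "William Shakespeare.", False),
    EvaluatedResponse("When did humans land on Mars?", "In 1997.", True),
]
print(hallucination_rate(batch), should_alert(batch))
```

Logged alongside standard performance metrics, a rate like this can help drive the rollback or automated retraining steps mentioned above when it drifts upward.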