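# Assumes an initialized `evaluator` client (see the SDK setup docs).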
result = evaluator.evaluate(
    eval_templates="detect_hallucination",
    inputs={
        "context": "Honey never spoils because it has low moisture content and high acidity, creating an environment that resists bacteria and microorganisms. Archaeologists have even found pots of honey in ancient Egyptian tombs that are still perfectly edible.",
        "output": "Honey doesn’t spoil because its low moisture and high acidity prevent the growth of bacteria and other microbes."
    },
    model_name="turing_flash"
)

print(result.eval_results[0].output)
print(result.eval_results[0].reason)
Input

Required Input
  • output: Output generated by the model
  • context: The context provided to the model

Optional Input
  • input: Input provided to the model
Output

  • Result: Returns Passed if no hallucination is detected, Failed if hallucination is detected
  • Reason: Provides a detailed explanation of the evaluation
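
Because Result is a plain Passed/Failed verdict, it is easy to gate downstream logic on it. A minimal sketch, continuing from the example above (attribute access mirrors the earlier snippet; exact field names may vary by SDK version):

# Branch on the eval verdict; values are "Passed"/"Failed" per the Output fields above.
verdict = result.eval_results[0].output
reason = result.eval_results[0].reason

if verdict == "Failed":
    # Hallucination detected: surface the explanation for debugging or logging.
    print(f"Hallucination detected: {reason}")
else:
    print("Output is grounded in the provided context.")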

What to Do If You Get Undesired Results

If the content is evaluated as containing hallucinations (Failed) and you want to improve it (see the retry sketch after this list):
  • Ensure all claims in your output are explicitly supported by the source material
  • Avoid extrapolating or generalizing beyond what is stated in the input
  • Remove any specific details that aren’t mentioned in the source text
  • Use qualifying language (like “may,” “could,” or “suggests”) when necessary
  • Stick to paraphrasing rather than adding new information
  • Double-check numerical values, dates, and proper nouns against the source
  • Consider directly quoting from the source for critical information
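
One way to act on a Failed verdict is to feed the eval's explanation back into a revision step and re-check. A minimal sketch, assuming the same evaluator client as above; revise_output is a hypothetical callback standing in for your own rewriting logic (for example, re-prompting the model with the reason attached):

def evaluate_with_retry(evaluator, context, output, revise_output, max_attempts=2):
    # Re-run the hallucination eval, revising the output between attempts.
    for _ in range(max_attempts):
        result = evaluator.evaluate(
            eval_templates="detect_hallucination",
            inputs={"context": context, "output": output},
            model_name="turing_flash",
        )
        if result.eval_results[0].output == "Passed":
            break
        # Failed: revise the output using the eval's explanation, then retry.
        output = revise_output(output, result.eval_results[0].reason)
    return output, result

Keeping the eval's reason in the revision prompt tends to steer the rewrite toward claims that are explicitly supported by the context.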

Comparing Detect Hallucination with Similar Evals

  • Factual Accuracy: While Detect Hallucination checks for fabricated information not in the source, Factual Accuracy evaluates the overall factual correctness of content against broader knowledge.
  • Groundedness: Detect Hallucination focuses on the absence of fabricated content, while Groundedness measures how well the output is supported by the source material.
  • Context Adherence: Detect Hallucination identifies made-up information, while Context Adherence evaluates how well the output adheres to the given context.
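
To see these distinctions in practice, you can run the sibling evals over the same context/output pair and compare verdicts side by side. A minimal sketch reusing the evaluator from the first example; the template names other than detect_hallucination are illustrative placeholders, so confirm the exact identifiers in your template catalog:

# Compare related evals on the same context/output pair.
# Template names other than "detect_hallucination" are illustrative
# placeholders; confirm the exact identifiers in your template catalog.
inputs = {
    "context": "Honey never spoils because it has low moisture content and high acidity.",
    "output": "Honey doesn't spoil because its low moisture and high acidity prevent microbial growth."
}

for template in ["detect_hallucination", "factual_accuracy", "groundedness", "context_adherence"]:
    result = evaluator.evaluate(
        eval_templates=template,
        inputs=inputs,
        model_name="turing_flash",
    )
    print(f"{template}: {result.eval_results[0].output}")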