When AI models generate responses, the output sometimes strays from the original instructions or context, producing fabricated or irrelevant content. This phenomenon is known as hallucination.

Hallucinations can be frustrating, especially in applications where reliability matters. Whether you’re building a chatbot, summarising documents, or generating answers, identifying hallucinations is key to improving the accuracy of, and trust in, your AI system.

Hallucinations happen when the output:
Doesn’t follow the user’s instructions.
Introduces information that isn’t part of the given context.
Strays into unrelated topics or makes unsupported claims.
That is why detecting hallucinations matters: it ensures accuracy by reducing errors in AI-generated content, which is critical for applications like customer service, research, or education, and it builds trust by keeping responses consistent and grounded, making users confident in the AI system’s capabilities.

The following evals identify hallucinations in AI-generated text:
Export your API key and secret key as environment variables, then initialise the evaluator.
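A minimal setup sketch is shown below. The import path, the Evaluator class, the constructor argument names, and the environment variable names FI_API_KEY and FI_SECRET_KEY are assumptions; check the SDK documentation for your installed version.

import os

# Assumed import path and class name; verify against the SDK docs.
from fi.evals import Evaluator

# Read the exported credentials from the environment and create the client.
# The keyword argument names here are assumptions.
evaluator = Evaluator(
    fi_api_key=os.environ["FI_API_KEY"],
    fi_secret_key=os.environ["FI_SECRET_KEY"],
)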
result = evaluator.evaluate( eval_templates="prompt_instruction_adherence", inputs={ "output": "Honey doesn’t spoil because its low moisture and high acidity prevent the growth of bacteria and other microbes." }, model_name="turing_flash")print(result.eval_results[0].output)print(result.eval_results[0].reason)
result = evaluator.evaluate( eval_templates="context_adherence", inputs={ "context": "Honey never spoils because it has low moisture content and high acidity, creating an environment that resists bacteria and microorganisms. Archaeologists have even found pots of honey in ancient Egyptian tombs that are still perfectly edible.", "output": "Honey doesn’t spoil because its low moisture and high acidity prevent the growth of bacteria and other microbes." }, model_name="turing_flash")print(result.eval_results[0].output)print(result.eval_results[0].reason)