# Assumes `evaluator` is an already-configured evaluation client
# (e.g. created from your evals SDK with your API credentials).
result = evaluator.evaluate(
    eval_templates="groundedness",
    inputs={
        "input": "The Earth orbits around the Sun in how many days?",
        "context": "The Earth completes one orbit around the Sun every 365.25 days",
        "output": "365.25 days"
    },
    model_name="turing_flash"
)

print(result.eval_results[0].output)
print(result.eval_results[0].reason)
Input

| Required Input | Type | Description |
| --- | --- | --- |
| output | string | The output generated by the model |
| context | string | The context provided to the model |

Optional Input

| Input | Type | Description |
| --- | --- | --- |
| input | string | The input provided to the model |
Output

| Field | Description |
| --- | --- |
| Result | Returns a score, where higher values indicate better grounding in the provided context |
| Reason | Provides a detailed explanation of the groundedness assessment |
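
The sketch below shows one way these fields might be consumed, assuming `output` carries the numeric score and `reason` the explanation as described above; the 0.7 pass threshold is an illustrative assumption, not a library default.

# Read the score and reason from the first evaluation result.
eval_result = result.eval_results[0]
score = eval_result.output   # numeric groundedness score (higher is better)
reason = eval_result.reason  # textual explanation of the assessment

# Map the score to a Pass/Fail decision; 0.7 is an assumed threshold.
verdict = "Pass" if score >= 0.7 else "Fail"
print(f"Groundedness: {score} ({verdict})")
print(f"Reason: {reason}")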

What to do when Groundedness Evaluation Fails

If the evaluation fails, start with a Context Review: reassess the provided context for completeness and clarity, making sure it contains all the information needed to support the response. Then move to Response Analysis: examine the response for any claims not supported by the context and adjust it so that it aligns with the given information.
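
As a rough illustration of that loop, the sketch below re-runs the groundedness check with an expanded context when the first attempt scores poorly; the example contexts and the 0.7 threshold are assumptions for illustration, not values from the library.

# Sketch: retry the evaluation with a more complete context when groundedness fails.
first = evaluator.evaluate(
    eval_templates="groundedness",
    inputs={
        "context": "The Earth orbits the Sun.",  # incomplete context (illustrative)
        "output": "365.25 days"
    },
    model_name="turing_flash"
)

if first.eval_results[0].output < 0.7:  # assumed pass threshold
    # Context Review: supply the missing fact, then re-check the response.
    second = evaluator.evaluate(
        eval_templates="groundedness",
        inputs={
            "context": "The Earth completes one orbit around the Sun every 365.25 days",
            "output": "365.25 days"
        },
        model_name="turing_flash"
    )
    print(second.eval_results[0].output, second.eval_results[0].reason)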

Differentiating Groundedness from Context Adherence

While both evaluations assess context alignment, Groundedness checks that every claim in the response is strictly supported by the provided context, whereas Context Adherence measures how well the response stays within the scope of that context without introducing external information. Both evaluations take a response and a context as inputs and produce a Pass/Fail outcome based on adherence to the provided information.
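
For a side-by-side comparison, the sketch below runs both checks on the same response/context pair; the "context_adherence" template name is an assumption here, so confirm the exact identifier exposed by your evals SDK.

# Run Groundedness and Context Adherence on the same inputs for comparison.
shared_inputs = {
    "context": "The Earth completes one orbit around the Sun every 365.25 days",
    "output": "365.25 days"
}

for template in ["groundedness", "context_adherence"]:  # second name is assumed
    result = evaluator.evaluate(
        eval_templates=template,
        inputs=shared_inputs,
        model_name="turing_flash"
    )
    print(template, result.eval_results[0].output, result.eval_results[0].reason)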