Evaluates how well a response stays within the provided context by measuring whether the output contains any information not present in that context. This evaluation is crucial for ensuring factual consistency and preventing hallucination in responses.
Click here to learn how to set up evaluation using the SDK.

Input:
- string - The output column generated by the model.
- string - The context column provided to the model.

Output:
- float - Returns a score between 0 and 1.
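To make the input/output contract concrete, below is a minimal, illustrative sketch of a context-adherence scorer. It is not the SDK's implementation: the function name `context_adherence_score` and the sentence-level token-overlap heuristic (including the 0.8 threshold) are assumptions chosen purely to show how an output and a context map to a score between 0 and 1. Production evaluators typically rely on an LLM-based judge rather than lexical overlap.

```python
import re


def context_adherence_score(output: str, context: str) -> float:
    """Return a score in [0, 1]: 1.0 means every output sentence is
    supported by the context, 0.0 means none are.

    Hypothetical heuristic for illustration only -- not the SDK's method.
    """
    context_tokens = set(re.findall(r"\w+", context.lower()))
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", output.strip()) if s]
    if not sentences:
        return 1.0  # an empty output introduces no unsupported claims

    supported = 0
    for sentence in sentences:
        tokens = set(re.findall(r"\w+", sentence.lower()))
        if not tokens:
            continue
        overlap = len(tokens & context_tokens) / len(tokens)
        if overlap >= 0.8:  # threshold is arbitrary, for illustration
            supported += 1
    return supported / len(sentences)


# Example usage: the second sentence is not grounded in the context,
# so the score drops to 0.5.
score = context_adherence_score(
    output="The Eiffel Tower is in Paris. It was built in 1999.",
    context="The Eiffel Tower is located in Paris, France, completed in 1889.",
)
print(f"Context adherence: {score:.2f}")
```

A score close to 1 indicates the response is well grounded in the provided context, while a low score flags content that likely originated outside it.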