Groundedness
Evaluate your LLMs on their ability to stay grounded in the provided context.
Groundedness evaluation assesses whether a model’s response is factually supported by and derived from the provided context. A grounded response should only contain information that can be directly traced back to the given context, avoiding hallucinations or unsupported claims.
Configuration
The evaluation requires the following configuration:
| Parameter | Description |
|---|---|
| model | The model to be used for evaluation |
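As a minimal sketch, assuming the configuration is passed as a plain dictionary (the parameter name `model` comes from the table above; the value shown is only an example):

```python
# Illustrative configuration sketch; the dictionary shape and model value are assumptions.
config = {
    "model": "gpt-4o",  # model used to judge groundedness
}
```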
Test Case Setup
The evaluation requires both the model’s response and the context it should be grounded in:
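A rough sketch of what such a test case might look like, assuming a simple dictionary with illustrative field names (`response`, `context`):

```python
# Hypothetical test case: a model response paired with the retrieved context
# it should be grounded in. Field names and content are illustrative only.
test_case = {
    "response": "The Eiffel Tower is 330 metres tall and located in Paris.",
    "context": [
        "The Eiffel Tower is a wrought-iron lattice tower in Paris, France.",
        "It is 330 metres (1,083 ft) tall, roughly the height of an 81-storey building.",
    ],
}
```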
Client Setup
Initialize the evaluation client with your API credentials:
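A minimal sketch, assuming a Python SDK with a hypothetical `EvaluationClient` class and an API key read from an environment variable (both names are assumptions, not the SDK's documented API):

```python
import os

from my_eval_sdk import EvaluationClient  # placeholder import, not the real package name

# Initialize the client with an API key kept outside the source code.
client = EvaluationClient(api_key=os.environ["EVAL_API_KEY"])
```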
Complete Example
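Putting the pieces together, a hedged end-to-end sketch under the same assumptions as above (placeholder client, hypothetical `evaluate` call and evaluator identifier):

```python
import os

from my_eval_sdk import EvaluationClient  # placeholder import, see the note above

client = EvaluationClient(api_key=os.environ["EVAL_API_KEY"])

# Hypothetical call: evaluator name, parameter names, and result shape are assumptions.
result = client.evaluate(
    evaluator="groundedness",
    model="gpt-4o",  # configuration parameter from the table above
    response="The Eiffel Tower is 330 metres tall and located in Paris.",
    context=[
        "The Eiffel Tower is a wrought-iron lattice tower in Paris, France.",
        "It is 330 metres (1,083 ft) tall.",
    ],
)

print(result.status)  # e.g. "pass" or "fail" (field name assumed)
```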
The evaluation will return:
- Pass: if the response is fully grounded in the provided context
- Fail: if the response contains information not supported by the context
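For example, in an automated test suite you might treat anything other than a pass as a failure (assuming the result shape sketched above):

```python
# Assumed result shape; fail the test when the response is not grounded.
assert result.status == "pass", "Response contains claims not supported by the context"
```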
This evaluation is particularly useful for:
- Verifying that responses only contain information from trusted sources
- Preventing model hallucinations
- Ensuring factual accuracy in generated content
- Validating RAG (Retrieval-Augmented Generation) systems