Context Contains Enough Information
Evaluate whether the context provided to your LLM contains enough information to resolve the user's query.
Context sufficiency evaluation assesses whether the provided context contains enough information to adequately answer a given query. This evaluation is crucial for ensuring that your system has retrieved sufficient relevant information before generating a response.
Configuration
The evaluation requires the following configuration:
| Parameter | Description |
| --- | --- |
| model | The model to be used for evaluation |
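For illustration, the configuration can be expressed as a simple mapping; this is a minimal sketch, and the model name below is an example rather than a required value:

```python
# Minimal evaluation configuration: the only documented parameter is the
# model used to run the evaluation. "gpt-4o" is illustrative only.
eval_config = {
    "model": "gpt-4o",
}
```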
Test Case Setup
The evaluation requires both the query and the context that should contain sufficient information to answer it:
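As a sketch of the shape such a test case might take (the `query` and `context` field names are assumptions, not a confirmed schema):

```python
# A hypothetical test case: a user query plus the retrieved context that
# should contain enough information to answer it.
test_case = {
    "query": "What is the refund window for annual subscriptions?",
    "context": [
        "Annual subscriptions can be refunded in full within 30 days of purchase.",
        "After 30 days, refunds are prorated for the remaining term.",
    ],
}
```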
Client Setup
Initialize the evaluation client with your API credentials:
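A minimal sketch of client initialization; the module name, `EvalClient` class, and environment variable are placeholders for whatever your SDK actually exposes:

```python
import os

# Placeholder import: substitute the client class exported by your SDK.
from your_eval_sdk import EvalClient

# Read the API key from the environment instead of hard-coding credentials.
client = EvalClient(api_key=os.environ["EVAL_API_KEY"])
```

With the client initialized, the configuration and test case shown above can be passed to whatever run method your SDK provides for this evaluation.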