Eval Context Retrieval Quality
Evaluates the quality of the context retrieved for generating a response. This evaluation checks whether the retrieved context is relevant and sufficient to produce an accurate and coherent output.
Evaluation Using Interface
Input:
- Optional Inputs:
- input: The input column containing the query or prompt provided to the LLM.
- output: The column containing the response generated by the LLM.
- context: The contextual information provided to the model.
Configuration Parameters:
- Criteria: Description of the criteria used for evaluation.
Output:
- Score: Percentage score between 0 and 100
Interpretation:
- Higher scores: Indicate that the context is relevant, sufficient, and well-suited to the task.
- Lower scores: Indicate that the context is not relevant or sufficient to produce an accurate and coherent output.
Evaluation Using Python SDK
Click here to learn how to set up evaluation using the Python SDK.
Input Type | Parameter | Type | Description |
---|---|---|---|
Optional | input | string | The input provided to the LLM, such as the user query or prompt. |
Optional | output | string | The response generated by the LLM. |
Optional | context | string or list[string] | The contextual information provided to the model. |
Configuration Parameters | criteria | string | Description of the criteria for evaluation. |
Output | Type | Description |
---|---|---|
Score | float | Returns a score between 0 and 1. |
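For orientation, the sketch below shows roughly how these parameters fit together in an SDK call. The client object, the `evaluate` method, and the eval identifier `"context_retrieval_quality"` are illustrative assumptions, not the SDK's actual API; refer to the Python SDK guide linked above for the real interface.

```python
# Minimal sketch only -- the `evaluator` object, its `evaluate` method,
# and the eval name "context_retrieval_quality" are illustrative
# placeholders, not the SDK's actual API. See the Python SDK guide.

def score_context_retrieval(evaluator) -> float:
    result = evaluator.evaluate(
        eval_name="context_retrieval_quality",
        # Optional inputs, mirroring the parameter table above
        input="What year was the Eiffel Tower completed?",
        output="The Eiffel Tower was completed in 1889.",
        context=[
            "The Eiffel Tower was completed in 1889 for the World's Fair.",
            "It stands on the Champ de Mars in Paris, France.",
        ],
        # Configuration parameter
        criteria=(
            "The retrieved context should contain the facts needed to "
            "answer the question and avoid irrelevant passages."
        ),
    )
    # The SDK returns a float score between 0 and 1 (higher is better)
    return result.score
```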
What to do if Eval Context Retrieval Quality is Low
If the evaluation returns a low score, first review the criteria to ensure they are well-defined, relevant, and aligned with the evaluation's objectives, adjusting them where needed for clarity and comprehensiveness. Then analyze the context itself for relevance and sufficiency, identifying any gaps or inadequacies and refining it to better support the output.
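As an illustration of what "well-defined" criteria can look like, compare a vague criteria string with one that names the specific qualities the evaluator should check. Both strings are examples, not prescribed values:

```python
# Illustrative criteria strings only -- not prescribed values.
vague_criteria = "Check whether the context is good."

# A refined version names the qualities to check and how to judge them.
refined_criteria = (
    "Score the retrieved context on two qualities: "
    "(1) relevance -- every passage should relate to the user's question; "
    "(2) sufficiency -- the passages together should contain all the "
    "facts needed to answer the question accurately."
)
```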
Differentiating Eval Context Retrieval Quality from Context Adherence
Eval Context Retrieval Quality and Context Adherence serve different purposes. Eval Context Retrieval Quality assesses the overall quality and relevance of the retrieved context, ensuring it is sufficient and appropriate for generating a response. In contrast, Context Adherence focuses on whether the response strictly adheres to the provided context, preventing the introduction of external information.