FutureAGI
# Context Evaluation
Evaluate how effectively context is used in generating responses.
Context evaluation encompasses multiple metrics to assess how effectively context is used in generating responses. These evaluations help validate context retrieval, utilization, and adherence in RAG (Retrieval-Augmented Generation) systems.
## Available Context Evaluations

| Evaluation Type | Description |
|---|---|
| `ContextRetrieval` | Assesses the quality of the retrieved context |
| `ContextAdherence` | Measures how well the response stays within the provided context |
## Required Parameters

| Parameter | Description | Required |
|---|---|---|
| `input` | The input query or prompt | Yes |
| `output` | The generated response | Yes |
| `context` | The context used to generate the output | Yes |
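As a minimal sketch of supplying these parameters, the helper below bundles them into a payload and fails fast if any is empty. The function name and dict shape are illustrative assumptions, not the official SDK API:

```python
# Illustrative sketch only: build_eval_payload and the payload shape are
# assumptions, not part of the official FutureAGI SDK.
REQUIRED_PARAMS = ("input", "output", "context")

def build_eval_payload(input_text: str, output_text: str, context: str) -> dict:
    """Bundle the three required parameters and reject empty values."""
    payload = {"input": input_text, "output": output_text, "context": context}
    missing = [name for name in REQUIRED_PARAMS if not payload[name]]
    if missing:
        raise ValueError(f"Missing required parameters: {missing}")
    return payload

payload = build_eval_payload(
    input_text="What is the capital of France?",
    output_text="Paris is the capital of France.",
    context="France's capital city is Paris.",
)
```

Validating the payload up front surfaces a missing `input`, `output`, or `context` before an evaluation call is made.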
## Configuration

Each evaluation accepts the following configuration options:

| Parameter | Description | Required | Default |
|---|---|---|---|
| `check_internet` | Whether to verify facts against internet sources | No | `false` |
| `model` | The LLM model to use for evaluation | Yes | - |
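The options above can be sketched as a small configuration object. The `EvalConfig` dataclass and the model name used here are illustrative assumptions, not part of any official SDK:

```python
# Illustrative sketch: EvalConfig mirrors the configuration table but is an
# assumption, not the official FutureAGI SDK API.
from dataclasses import dataclass

@dataclass
class EvalConfig:
    model: str                    # required: the LLM used to run the evaluation
    check_internet: bool = False  # optional: verify facts against internet sources

# model is required; check_internet falls back to its default of False.
config = EvalConfig(model="gpt-4o")
```

Because `model` has no default, constructing `EvalConfig()` without it raises a `TypeError`, mirroring the table's required/optional split.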