Context Relevance
Evaluates whether the provided context is sufficient and relevant to answer the given input query. This evaluation is crucial for RAG systems to ensure that retrieved context pieces contain the necessary information to generate accurate responses.
Evaluation Using Interface
Input:
- Required Inputs:
  - context: The context column provided to the model
  - input: The input column provided to the model
- Configuration parameters:
  - Check Internet: Whether to check the internet for relevant information
Output:
- Score: Percentage score between 0 and 100
Interpretation:
- Higher scores: Indicate that the context is more relevant to the query.
- Lower scores: Suggest that the context is less relevant to the query.
Evaluation Using Python SDK
Click here to learn how to set up evaluation using the Python SDK.
Input:
- Required Inputs:
  - context: string - The context column provided to the model
  - input: string - The input column provided to the model
- Configuration parameters:
  - Check Internet: bool - Whether to check the internet for relevant information

Output:
- Score: float - Returns a score between 0 and 1
Interpretation:
- Higher scores: Indicate that the context is more relevant to the query.
- Lower scores: Suggest that the context is less relevant to the query.
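The exact SDK call depends on your client library, so as an illustration of the input/output contract described above, here is a minimal stand-in scorer. The function name, the token-overlap heuristic, and the unused `check_internet` flag are all assumptions for the sketch, not the actual eval, which uses a more sophisticated judgment of relevance:

```python
def context_relevance(context: str, input: str, check_internet: bool = False) -> float:
    """Stand-in for the SDK eval: returns a relevance score in [0, 1].

    This sketch scores the fraction of query tokens that appear in the
    context, purely to illustrate the contract (string inputs, float
    output between 0 and 1). The `check_internet` flag is accepted but
    unused here.
    """
    query_tokens = set(input.lower().split())
    if not query_tokens:
        return 0.0
    context_tokens = set(context.lower().split())
    return len(query_tokens & context_tokens) / len(query_tokens)

score = context_relevance(
    context="Paris is the capital of France and hosts the Louvre.",
    input="what is the capital of france",
)
print(round(score, 2))
```

A score near 1 means the context covers most of what the query asks about; a score near 0 suggests the retrieved context does not address the query at all.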
What to do when Context Relevance is Low
When context relevance is low, the first step is to identify which parts of the context are either irrelevant or insufficient to address the query effectively.
If critical information is missing, additional details should be incorporated to ensure completeness. At the same time, any irrelevant content should be removed or refined to improve focus and alignment with the query.
Strengthening context-query alignment, for example by reranking retrieved chunks against the query, ensures that only pertinent information is considered. Additionally, optimising the retrieval process itself, such as tuning chunk size or the number of retrieved passages, can help prioritise relevant details, improving overall response accuracy and coherence.
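One simple way to act on the advice above is to drop retrieved chunks that share little with the query before passing the context to generation. This sketch uses plain token overlap with an illustrative 0.2 cutoff; in a real pipeline you would more likely use embedding similarity or a reranker, and the function name and threshold here are assumptions:

```python
def prune_context(chunks: list[str], query: str, min_overlap: float = 0.2) -> list[str]:
    """Keep only chunks whose token overlap with the query clears a threshold.

    Token overlap keeps the sketch dependency-free; swap in embedding
    similarity or a reranker for production use.
    """
    query_tokens = set(query.lower().split())
    kept = []
    for chunk in chunks:
        chunk_tokens = set(chunk.lower().split())
        overlap = len(query_tokens & chunk_tokens) / max(len(query_tokens), 1)
        if overlap >= min_overlap:
            kept.append(chunk)
    return kept

chunks = [
    "The Eiffel Tower is in Paris, the capital of France.",
    "Bananas are rich in potassium.",
]
print(prune_context(chunks, "capital of France"))
```

Pruning off-topic chunks before generation tends to raise the Context Relevance score directly, since the remaining context is more tightly aligned with the query.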
Differentiating Context Relevance with Similar Evals
- Context Adherence: Context Adherence measures how well a response stays within the provided context, while Context Relevance evaluates the sufficiency and appropriateness of the context itself.
- Completeness: Completeness evaluates if the response completely answers the query, while Context Relevance focuses on the context’s ability to support a complete response.
- Context Similarity: Context Similarity compares the provided context with the expected context, measuring how closely they match, while Context Relevance assesses whether the context is sufficient and appropriate for the query.