Chunk Attribution
Evaluates whether a language model references the provided context chunks at all when generating its response. This metric assesses whether the output acknowledges and incorporates information from the context, indicating the model's basic ability to leverage provided data.
Evaluation Using Interface
Input:
- Required:
  - context: The contextual information provided to the model.
  - output: The response generated by the language model.
- Optional:
  - input: The original query or instruction given to the model.

Output:
- Result: Passed / Failed
Interpretation:
- Passed: signifies that the model acknowledged the context, which is a prerequisite for generating contextually grounded responses.
- Failed: indicates a potential issue, such as the model ignoring the context, the context being entirely irrelevant, or the prompt not adequately instructing the model to use the context. This often points to problems in the retrieval or generation step of a RAG system.
Evaluation Using Python SDK
Click here to learn how to set up evaluation using the Python SDK.
Input

| | Parameter | Type | Description |
|---|---|---|---|
| Required | context | string or list[string] | The contextual information provided to the model. |
| | output | string | The response generated by the language model. |
| Optional | input | string | The original query or instruction given to the model. |
Output

| | Type | Description |
|---|---|---|
| Result | string | Passed / Failed |
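The SDK's actual API is covered in the link above. As an illustration only of the input/output contract described in the tables (the function names here are hypothetical, and the word-overlap heuristic is a stand-in for the SDK's real evaluator), a minimal check might look like:

```python
def normalize_context(context):
    """Accept a single string or a list of strings, matching the input table."""
    return [context] if isinstance(context, str) else list(context)

def chunk_attribution(context, output, input=None):
    """Hypothetical sketch: return 'Passed' if the output shares any
    substantive (longer than three characters) words with any context chunk,
    'Failed' otherwise. The real metric uses a far more robust evaluator."""
    chunks = normalize_context(context)
    out_words = {w.lower() for w in output.split() if len(w) > 3}
    for chunk in chunks:
        chunk_words = {w.lower() for w in chunk.split() if len(w) > 3}
        if out_words & chunk_words:
            return "Passed"
    return "Failed"
```

Note that `context` may be either a string or a list of strings, while the result is the binary Passed / Failed string from the output table.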
What to Do When Chunk Attribution Fails
- Ensure that the context provided is relevant and sufficiently detailed for the model to use effectively. Irrelevant context might be ignored.
- Modify the input prompt to explicitly guide the model to use the context. Clearer instructions (e.g., “Using the provided documents, answer…”) can help.
- Check the retrieval mechanism: Is the correct context being retrieved and passed to the generation model?
- If the model consistently fails to use context despite relevant information and clear prompts, it may require fine-tuning with examples that emphasize context utilization.
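The second remediation, rewriting the prompt to explicitly direct the model at the context, can be sketched as a small prompt-builder (the function name and template wording are illustrative, not part of any SDK):

```python
def build_grounded_prompt(context_chunks, question):
    """Assemble a prompt that explicitly instructs the model to answer
    from the supplied documents, labeling each chunk for attribution."""
    joined = "\n\n".join(
        f"[Doc {i + 1}] {chunk}" for i, chunk in enumerate(context_chunks)
    )
    return (
        "Using ONLY the provided documents, answer the question. "
        "If the documents do not contain the answer, say so.\n\n"
        f"Documents:\n{joined}\n\n"
        f"Question: {question}"
    )
```

Labeling chunks (e.g., `[Doc 1]`) also makes it easier to spot, in the model's answer, which chunk was actually drawn on.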
Differentiating Chunk Attribution from Chunk Utilization
Chunk Attribution verifies whether the model references the provided context at all, focusing on its ability to acknowledge and use relevant information. It results in a binary outcome—either the context is used (Passed) or it is not (Failed). In contrast, Chunk Utilization measures how effectively the model integrates the context into its response, assigning a score (typically 0 to 1) that reflects the degree of reliance on the provided information. While Attribution confirms if context is considered, Utilization evaluates how much of it contributes to generating a well-informed response.
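The contrast can be made concrete with a toy scoring function (again a hypothetical word-overlap heuristic, not the metric's real implementation): where Attribution collapses to Passed/Failed, Utilization keeps a fractional score reflecting how many chunks the response actually drew on.

```python
def chunk_utilization(context_chunks, output):
    """Toy utilization score in [0, 1]: the fraction of context chunks whose
    substantive words appear in the output. Attribution would merely check
    whether this score is greater than zero."""
    out_words = {w.lower().strip(".,") for w in output.split()}
    used = sum(
        1
        for chunk in context_chunks
        if {w.lower().strip(".,") for w in chunk.split() if len(w) > 3} & out_words
    )
    return used / len(context_chunks) if context_chunks else 0.0
```

An answer that leans on one of two retrieved chunks would pass Attribution but score only 0.5 on this Utilization sketch, which is exactly the distinction the two metrics are designed to capture.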