Chunk Attribution
Evaluates whether a language model references the provided context chunks at all when generating its response. This metric assesses whether the output acknowledges and incorporates information from the context, indicating the model's basic ability to leverage provided data.
```python
# Assumes `evaluator` is an already-initialized Evaluator instance
result = evaluator.evaluate(
    eval_templates="chunk_attribution",
    inputs={
        "output": "Paris is the capital city of France. It is a major European city and a global center for art, fashion, and culture.",
        "context": [
            "Paris is the capital and largest city of France.",
            "France is a country in Western Europe.",
            "Paris is known for its art museums and fashion districts."
        ]
    },
    model_name="turing_flash"
)

print(result.eval_results[0].output)
print(result.eval_results[0].reason)
```

```typescript
import { Evaluator, Templates } from "@future-agi/ai-evaluation";

const evaluator = new Evaluator();

const result = await evaluator.evaluate(
  "chunk_attribution",
  {
    output: "Paris is the capital city of France. It is a major European city and a global center for art, fashion, and culture.",
    context: [
      "Paris is the capital and largest city of France.",
      "France is a country in Western Europe.",
      "Paris is known for its art museums and fashion districts."
    ]
  },
  {
    modelName: "turing_flash",
  }
);

console.log(result);
```

Input
| Required Input | Type | Description |
|---|---|---|
| context | string or list[string] | The contextual information provided to the model |
| output | string | The response generated by the language model |
Output
| Field | Description |
|---|---|
| Result | Returns Passed or Failed, where Passed indicates the model acknowledged the context and Failed indicates potential issues |
| Reason | Provides a detailed explanation of the evaluation |
What to Do When Chunk Attribution Fails
- Ensure that the context provided is relevant and sufficiently detailed for the model to utilize effectively. Irrelevant context is likely to be ignored.
- Modify the input prompt to explicitly guide the model to use the context. Clearer instructions (e.g., "Using the provided documents, answer…") can help.
- Check the retrieval mechanism: Is the correct context being retrieved and passed to the generation model?
- If the model consistently fails to use context despite relevant information and clear prompts, it may require fine-tuning with examples that emphasize context utilization.
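The second remediation above, making context use explicit in the prompt, can be sketched in plain Python. The helper below is a hypothetical illustration (`build_grounded_prompt` is not part of the SDK); it numbers the retrieved chunks and prepends an instruction to rely on them:

```python
def build_grounded_prompt(question: str, chunks: list[str]) -> str:
    """Wrap retrieved chunks with an explicit instruction to use them.

    Hypothetical helper for illustration -- not part of the
    @future-agi/ai-evaluation SDK.
    """
    # Number the chunks so the model can cite which ones it used
    numbered = "\n".join(f"[{i + 1}] {c}" for i, c in enumerate(chunks))
    return (
        "Using ONLY the documents below, answer the question. "
        "Cite the document numbers you relied on.\n\n"
        f"Documents:\n{numbered}\n\n"
        f"Question: {question}"
    )

prompt = build_grounded_prompt(
    "What is the capital of France?",
    [
        "Paris is the capital and largest city of France.",
        "France is a country in Western Europe.",
    ],
)
print(prompt)
```

Prompts built this way tend to make attribution failures easier to diagnose, since the model is asked to cite the chunk numbers it relied on.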
Differentiating Chunk Attribution from Chunk Utilization
Chunk Attribution verifies whether the model references the provided context at all, focusing on its ability to acknowledge and use relevant information. It produces a binary outcome: either the context is used (Passed) or it is not (Failed). In contrast, Chunk Utilization measures how effectively the model integrates the context into its response, assigning a score (typically 0 to 1) that reflects the degree of reliance on the provided information. While Attribution confirms whether context is considered, Utilization evaluates how much of it contributes to generating a well-informed response.
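The binary-versus-graded distinction can be illustrated with two toy functions. These are naive word-overlap heuristics written for this comparison only; they are not how the evaluator actually scores either metric:

```python
def naive_attribution(output: str, chunks: list[str]) -> str:
    """Toy heuristic: Passed if any chunk shares a word with the output."""
    out_words = set(output.lower().split())
    used = any(out_words & set(c.lower().split()) for c in chunks)
    return "Passed" if used else "Failed"

def naive_utilization(output: str, chunks: list[str]) -> float:
    """Toy heuristic: fraction of chunks sharing at least one word."""
    out_words = set(output.lower().split())
    hits = sum(1 for c in chunks if out_words & set(c.lower().split()))
    return hits / len(chunks) if chunks else 0.0

chunks = [
    "Paris is the capital and largest city of France.",
    "France is a country in Western Europe.",
    "Paris is known for its art museums and fashion districts.",
]
output = "Paris is the capital city of France."

print(naive_attribution(output, chunks))  # binary: Passed or Failed
print(naive_utilization(output, chunks))  # graded: a score in [0, 1]
```

The shape of the results mirrors the two metrics: Attribution collapses to a single pass/fail verdict, while Utilization reports how much of the provided context actually contributed.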