Context sufficiency evaluation assesses whether the provided context contains enough information to adequately answer a given query. This evaluation is crucial for ensuring that your system has retrieved sufficient relevant information before generating a response.

Configuration

The evaluation requires the following configuration:

Parameter      Description
model          The model to be used for evaluation

from fi.evals import ContextSufficiency

context_eval = ContextSufficiency(config={"model": "gpt-4o-mini"})

Test Case Setup

The evaluation requires both the query and the context that is expected to contain sufficient information to answer it:

from fi.testcases import LLMTestCase

test_case = LLMTestCase(
    query="What is the capital of France?",
    context="Paris is the capital city of France. It is located in the northern part of the country."
)

Client Setup

Initialize the evaluation client with your API credentials:

from fi.evals import EvalClient

evaluator = EvalClient(
    fi_api_key="your_api_key", 
    fi_secret_key="your_secret_key"
)

Complete Example

from fi.evals import ContextSufficiency, EvalClient
from fi.testcases import LLMTestCase

# Initialize the context sufficiency evaluator
context_eval = ContextSufficiency(config={"model": "gpt-4o-mini"})

# Create a test case
test_case = LLMTestCase(
    query="What is the capital of France?",
    context="Paris is the capital city of France. It is located in the northern part of the country."
)

# Run the evaluation
evaluator = EvalClient(fi_api_key="your_api_key", fi_secret_key="your_secret_key")
result = evaluator.evaluate(context_eval, test_case)
print(result)  # Expected to pass, since the context contains enough information to answer the query
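
For comparison, here is a minimal sketch of a failing case, reusing the same classes and evaluator shown above. The query and context values are illustrative; because the context says nothing about the topic of the query, the evaluation would be expected to fail:

# Test case where the context does not cover the query (illustrative values)
insufficient_case = LLMTestCase(
    query="What is the population of Paris?",
    context="Paris is the capital city of France. It is located in the northern part of the country."
)

result = evaluator.evaluate(context_eval, insufficient_case)
print(result)  # Expected to fail, since the context does not mention population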