Ensuring that a query is supported by sufficient context is critical for accurate and reliable responses. Inadequate context can lead to incomplete, ambiguous, or incorrect answers, undermining decision-making and information retrieval. This eval determines whether the provided context contains enough information to accurately address a given query. See the eval definition of Context Sufficiency for more details.

a. Using Interface

Inputs Required:
  • Query: The question or request requiring an answer.
  • Context: The background information used to generate the response.
Configuration Parameters:
  • Model: The LLM used to judge context sufficiency.
Output:
  • 1: The context contains sufficient information to fully answer the query.
  • 0: The context lacks key details, requiring additional information or refinement.
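
As a sketch of how this binary output might be consumed downstream (the helper name and the routing logic are illustrative, not part of the SDK):

```python
def is_context_sufficient(score: float) -> bool:
    # Hypothetical helper: treat the eval's binary metric as a boolean gate.
    # 1 means the context fully supports the query; 0 means it does not.
    return score >= 1

# Example: route insufficient cases to a retrieval or refinement step
for score in (1, 0):
    if is_context_sufficient(score):
        print("answer directly")
    else:
        print("fetch more context")
```

In a RAG pipeline, a 0 score would typically trigger another retrieval round or a query rewrite before generation.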

b. Using SDK

from fi.evals import Evaluator
from fi.evals.templates import ContextSufficiency
from fi.testcases import TestCase

# Initialise the evaluator client with your Future AGI credentials
evaluator = Evaluator(
    fi_api_key="your_api_key",
    fi_secret_key="your_secret_key",
    fi_base_url="https://api.futureagi.com"
)

# Define the query and the context that should support it
test_case = TestCase(
    query="What is the capital of France?",
    context="Paris has been France's capital since 987 CE."
)

template = ContextSufficiency()

# Run the eval; model_name selects the LLM that judges sufficiency
response = evaluator.evaluate(
    eval_templates=[template],
    inputs=[test_case],
    model_name="turing_flash"
)

print(f"Score: {response.eval_results[0].metrics[0].value}")  # 1 = sufficient, 0 = insufficient
print(f"Reason: {response.eval_results[0].reason}")