Context similarity validation evaluates how closely an LLM's output matches a given context. The evaluator supports several similarity measures for scoring the generated text against that context.

Required Parameters

| Parameter | Description |
| --- | --- |
| `context` | The context provided for the response |
| `response` | The actual response to be evaluated |

Optional Configuration

| Parameter | Description |
| --- | --- |
| `comparator` | The method to use for comparison |
| `failure_threshold` | The threshold below which the evaluation fails |
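
Both parameters can be omitted. A minimal sketch of the two configuration styles follows; the variable names are illustrative, and the fallback behavior when parameters are omitted is an assumption, since this section does not document the default values.

from fi.evals import ContextSimilarity
from fi.evals.types import Comparator

# Explicit configuration: any response scoring below 0.9 fails
strict_eval = ContextSimilarity(
    comparator=Comparator.LEVENSHTEIN.value,
    failure_threshold=0.9
)

# No arguments: presumably falls back to library defaults (an assumption;
# the default comparator and threshold are not documented here)
default_eval = ContextSimilarity()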

Available Comparators

| Comparator | Description |
| --- | --- |
| `Comparator.COSINE` | Measures similarity based on the vector angle between text embeddings |
| `Comparator.LEVENSHTEIN` | Calculates edit distance between strings, normalized to [0, 1] |
| `Comparator.JARO_WINKLER` | String similarity that favors strings matching from the beginning |
| `Comparator.JACCARD` | Measures overlap between word sets using intersection over union |
| `Comparator.SORENSEN_DICE` | Similar to Jaccard but gives more weight to overlapping terms |
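
To make the set-based measures concrete, the following sketch shows the standard Jaccard and Sørensen-Dice formulas over word sets in plain Python. It illustrates the math only; it is not the library's implementation, and the tokenizer is a deliberate simplification.

import re

def words(text: str) -> set[str]:
    # Simplistic tokenizer for illustration: lowercase alphanumeric runs
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def jaccard(a: str, b: str) -> float:
    # Intersection over union of the two word sets
    sa, sb = words(a), words(b)
    return len(sa & sb) / len(sa | sb)

def sorensen_dice(a: str, b: str) -> float:
    # Counts the overlapping terms twice, so shared words weigh more heavily
    sa, sb = words(a), words(b)
    return 2 * len(sa & sb) / (len(sa) + len(sb))

context = "The Eiffel Tower is a wrought-iron lattice tower located in Paris, France."
response = "The Eiffel Tower can be found in the city of Paris."
print(jaccard(context, response))        # ~0.29 for this pair
print(sorensen_dice(context, response))  # ~0.45, higher than Jaccard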

Example Usage

from fi.evals import ContextSimilarity, EvalClient
from fi.evals.types import Comparator
from fi.testcases import LLMTestCase

# Initialize the evaluation client
evaluator = EvalClient(
    fi_api_key="your_api_key", 
    fi_secret_key="your_secret_key"
)

# Create a test case with required parameters
test_case = LLMTestCase(
    context="The Eiffel Tower is a wrought-iron lattice tower located in Paris, France.",
    response="The Eiffel Tower can be found in the city of Paris."
)

# Initialize the context similarity evaluator (with optional configuration)
context_similarity = ContextSimilarity(
    comparator=Comparator.COSINE.value,
    failure_threshold=0.8
)

# Run the evaluation
result = evaluator.evaluate(context_similarity, test_case)
print(result)  # Passes when the similarity score is at or above the failure threshold
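
When it is unclear which comparator suits a given dataset, one option is to score the same test case with each of them, reusing the client and test case from the example above. A sketch, assuming Comparator is a standard Python enum (so .name is available):

# Score the same test case with every available comparator
for comparator in [
    Comparator.COSINE,
    Comparator.LEVENSHTEIN,
    Comparator.JARO_WINKLER,
    Comparator.JACCARD,
    Comparator.SORENSEN_DICE,
]:
    eval_config = ContextSimilarity(
        comparator=comparator.value,
        failure_threshold=0.8
    )
    result = evaluator.evaluate(eval_config, test_case)
    print(comparator.name, result)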