This evaluation template compares two texts for similarity using fuzzy matching techniques. It is useful for detecting approximate matches between text strings when exact matching would be too strict, since it tolerates minor differences in wording, spelling, or formatting.
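
To build intuition for what fuzzy matching measures, the minimal sketch below uses Python's standard-library difflib to compute an approximate similarity ratio between two strings. It illustrates the general technique only; it is not the template's internal scoring.

from difflib import SequenceMatcher

reference = "The Eiffel Tower is a famous landmark in Paris, built in 1889."
candidate = "The Eiffel Tower, located in Paris, was built in 1889."

# ratio() returns a similarity in [0.0, 1.0] based on the longest matching
# blocks shared by the two strings; small wording or formatting differences
# lower the score only slightly.
score = SequenceMatcher(None, reference, candidate).ratio()
print(f"approximate similarity: {score:.2f}")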

Python SDK Usage

# `evaluator` is assumed to be an already-initialized evaluation client from the SDK.
result = evaluator.evaluate(
    eval_templates="fuzzy_match",
    inputs={
        "input": "The Eiffel Tower is a famous landmark in Paris, built in 1889 for the World's Fair. It stands 324 meters tall.",
        "output": "The Eiffel Tower, located in Paris, was built in 1889 and is 324 meters high."
    },
    model_name="turing_flash"
)

print(result.eval_results[0].output)
print(result.eval_results[0].reason)
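
To score several reference/candidate pairs, the same call can be repeated once per pair. In the sketch below, the pairs and the loop are illustrative; eval_templates, inputs, and model_name mirror the example above, and whether the SDK also offers a dedicated batch call is not covered here.

pairs = [
    ("The Great Wall of China is over 13,000 miles long.",
     "China's Great Wall stretches more than 13,000 miles."),
    ("Water boils at 100 degrees Celsius at sea level.",
     "At sea level, water reaches its boiling point at 100 °C."),
]

for reference, candidate in pairs:
    result = evaluator.evaluate(
        eval_templates="fuzzy_match",
        inputs={"input": reference, "output": candidate},
        model_name="turing_flash"
    )
    # Inspect the verdict and the explanation for each pair.
    print(result.eval_results[0].output, "-", result.eval_results[0].reason)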

Troubleshooting

If you encounter issues with this evaluation:
  • Ensure that both input texts are properly formatted and contain meaningful content
  • This evaluation works best with texts that convey similar information but might have different wording
  • For very short texts (1-2 words), results may be less reliable
  • If you need more precise matching, consider using levenshtein_similarity instead

Related Evaluations

  • levenshtein_similarity: Provides a stricter character-by-character comparison (see the sketch below for intuition)
  • embedding_similarity: Compares semantic meaning rather than surface-level text
  • semantic_list_contains: Checks whether specific semantic concepts are present in both texts
  • rouge_score: Evaluates based on n-gram overlap, especially useful for summarization tasks
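
For intuition about how character-level comparison differs from fuzzy matching, here is a minimal, self-contained sketch of normalized Levenshtein similarity. The function name and the normalization choice are illustrative assumptions, not the levenshtein_similarity template's actual implementation.

def levenshtein_similarity(a: str, b: str) -> float:
    """Character-level similarity in [0, 1]: 1 minus normalized edit distance."""
    if not a and not b:
        return 1.0
    # Classic dynamic-programming edit distance, keeping only the previous row.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return 1.0 - prev[-1] / max(len(a), len(b))

print(levenshtein_similarity("landmark", "land mark"))  # close to 1.0
print(levenshtein_similarity("landmark", "statue"))     # much lower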