# `evaluator` is assumed to be an already-initialized evaluator instance
# (client setup omitted in this snippet).
result = evaluator.evaluate(
    eval_templates="fuzzy_match",
    inputs={
        "expected": "The Eiffel Tower is a famous landmark in Paris, built in 1889 for the World's Fair. It stands 324 meters tall.",
        "output": "The Eiffel Tower, located in Paris, was built in 1889 and is 324 meters high."
    },
    model_name="turing_flash"
)

print(result.eval_results[0].output)
print(result.eval_results[0].reason)
Input

| Required Input | Type   | Description |
| -------------- | ------ | ----------- |
| expected       | string | The expected content for comparison against the model-generated output |
| output         | string | The model-generated output to be evaluated for fuzzy match |
Output

| Field  | Description |
| ------ | ----------- |
| Result | Returns a score; higher values indicate a better fuzzy match |
| Reason | Provides a detailed explanation of the fuzzy-match assessment |
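Since the result is a score where higher values indicate a better match, a common pattern is to gate a test or pipeline on a threshold. The sketch below assumes a numeric score in [0, 1] taken from `result.eval_results[0].output`; the threshold value is illustrative, not a library default.

```python
def passes_fuzzy_match(score, threshold=0.7):
    """Gate a check on the evaluator's fuzzy-match score.

    `threshold` is an illustrative cutoff, not a library default;
    higher scores indicate a better fuzzy match.
    """
    return score >= threshold

# In practice, `score` would come from result.eval_results[0].output.
print(passes_fuzzy_match(0.85))  # close paraphrase -> True
print(passes_fuzzy_match(0.30))  # divergent content -> False
```

Pick the threshold empirically against a small labeled set of expected/output pairs rather than reusing one cutoff across tasks.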

Troubleshooting

If you encounter issues with this evaluation:
  • Ensure that both input texts are properly formatted and contain meaningful content
  • This evaluation works best with texts that convey similar information but might have different wording
  • For very short texts (1-2 words), results may be less reliable
  • If you need more precise matching, consider using levenshtein_similarity instead

Related evaluations

  • levenshtein_similarity: Provides a stricter character-by-character comparison
  • embedding_similarity: Compares semantic meaning rather than surface-level text
  • semantic_list_contains: Checks whether specific semantic concepts are present in both texts
  • rouge_score: Evaluates n-gram overlap; especially useful for summarization tasks
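To see why a strict character-level comparison behaves differently from a fuzzy match, here is a minimal sketch using Python's standard-library difflib. This is an approximation for intuition only, not the implementation behind levenshtein_similarity or fuzzy_match.

```python
import difflib

expected = "The Eiffel Tower is a famous landmark in Paris, built in 1889."
output = "The Eiffel Tower, located in Paris, was built in 1889."

# ratio() measures character/sequence overlap in [0, 1] -- closer in
# spirit to levenshtein_similarity's strict comparison than to a
# fuzzy match, which tolerates rewording of the same information.
ratio = difflib.SequenceMatcher(None, expected, output).ratio()
print(f"sequence similarity: {ratio:.2f}")
```

Paraphrases that preserve meaning but reorder or reword text lower a character-level score substantially, which is why fuzzy_match is the better fit when wording is expected to differ.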