Input

| Required Input | Type | Description |
|---|---|---|
| expected | string | The expected content to compare against the model-generated output |
| output | string | The model-generated output to be evaluated for fuzzy match |
Output

| Field | Description |
|---|---|
| Result | A score, where higher values indicate a better fuzzy match |
| Reason | A detailed explanation of the fuzzy match assessment |
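The document does not show how the score is computed, so the following is only a minimal sketch of an evaluator with this input/output shape. The function name `fuzzy_match` and the use of `difflib.SequenceMatcher` as the scoring backend are assumptions for illustration, not the library's actual implementation.

```python
from difflib import SequenceMatcher


def fuzzy_match(expected: str, output: str) -> dict:
    """Hypothetical sketch: score two strings by a similarity ratio in [0, 1].

    Assumption: case is ignored; difflib's ratio stands in for whatever
    scoring method the real evaluation uses.
    """
    score = SequenceMatcher(None, expected.lower(), output.lower()).ratio()
    reason = (
        f"Similarity ratio of {score:.2f} between the expected text and the "
        "model output, computed after lowercasing both strings."
    )
    return {"result": score, "reason": reason}


# Example: similar sentences with different wording score well below 1.0
# but well above 0.0.
print(fuzzy_match("The cat sat on the mat", "A cat sat on the mat"))
```

Identical inputs score 1.0; completely unrelated inputs approach 0.0, with higher values indicating a better match, as in the Result field above.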
Troubleshooting

If you encounter issues with this evaluation:

- Ensure that both input texts are properly formatted and contain meaningful content
- This evaluation works best with texts that convey similar information but may use different wording
- For very short texts (1-2 words), results may be less reliable
- If you need more precise matching, consider using `levenshtein_similarity` instead
Related Evaluations

- `levenshtein_similarity`: Provides a stricter character-by-character comparison
- `embedding_similarity`: Compares semantic meaning rather than surface-level text
- `semantic_list_contains`: Checks if specific semantic concepts are present in both texts
- `rouge_score`: Evaluates based on n-gram overlap, especially useful for summarization tasks
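To illustrate what "stricter character-by-character comparison" means for `levenshtein_similarity`, here is a sketch of the classic edit-distance computation and one plausible normalization into a similarity score. The normalization formula (`1 - distance / max length`) is an assumption; the actual evaluation may normalize differently.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance: the minimum number of
    single-character insertions, deletions, and substitutions turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(
                prev[j] + 1,            # deletion
                curr[j - 1] + 1,        # insertion
                prev[j - 1] + (ca != cb)  # substitution (free if chars match)
            ))
        prev = curr
    return prev[len(b)]


def levenshtein_similarity(a: str, b: str) -> float:
    """Hypothetical normalization: 1 - distance / max(len). Two identical
    strings score 1.0; entirely different strings of equal length score 0.0."""
    if not a and not b:
        return 1.0
    return 1.0 - levenshtein(a, b) / max(len(a), len(b))


print(levenshtein("kitten", "sitting"))  # 3 edits: k->s, e->i, +g
```

Because every character counts, a single typo lowers the score, whereas a semantic measure like `embedding_similarity` would be largely unaffected.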