Input

Required Input | Description
---|---
expected | Reference content for comparison against the model-generated output
output | Model-generated output to be evaluated for embedding similarity
Output

Field | Description
---|---
Result | Returns a score, where a higher score indicates stronger similarity
Reason | Provides a detailed explanation of the embedding similarity assessment
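The snippet below is a minimal sketch of the input and output shapes described above. The field names follow the tables; the concrete values and the dict-based layout are illustrative assumptions, not the result of a real evaluation run.

```python
# Hypothetical payload shapes for the Embedding Similarity evaluator.
# Field names follow the tables above; values are illustrative only.

eval_input = {
    "expected": "The cat sat on the mat.",      # reference content
    "output": "A cat was sitting on the rug.",  # model-generated output
}

# The evaluator returns a score (higher = stronger similarity)
# and a textual explanation of the assessment.
eval_output = {
    "Result": 0.91,  # illustrative value, not produced by a real run
    "Reason": "The output paraphrases the reference, so their embeddings are close in vector space.",
}
```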
About Embedding Similarity
Embedding similarity evaluates how similar two texts are in meaning by comparing their vector embeddings using distance-based similarity measures. Traditional metrics like BLEU or ROUGE rely on word overlap and can fail when the generated output is a valid paraphrase with no lexical match.

How Similarity Is Calculated?
Once both texts are encoded into high-dimensional vector representations, the similarity between the two vectors u and v is computed using one of the following methods (a short sketch follows the list):
- Cosine Similarity: Measures the cosine of the angle between vectors.
- Euclidean Distance: Measures the straight-line distance between vectors (L2 Norm).
- Manhattan Distance: Measures the sum of absolute differences between vectors (L1 Norm).
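As a minimal sketch of these three measures, the NumPy functions below assume u and v are embedding vectors already produced by whichever sentence-embedding model you use; they illustrate the math, not the evaluator's internal implementation.

```python
import numpy as np

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    # Cosine of the angle between u and v: 1.0 means same direction.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def euclidean_distance(u: np.ndarray, v: np.ndarray) -> float:
    # Straight-line distance between u and v (L2 norm of the difference).
    return float(np.linalg.norm(u - v))

def manhattan_distance(u: np.ndarray, v: np.ndarray) -> float:
    # Sum of absolute element-wise differences (L1 norm of the difference).
    return float(np.sum(np.abs(u - v)))

# Toy 4-dimensional embeddings; real embeddings typically have hundreds
# or thousands of dimensions.
u = np.array([0.12, 0.80, 0.05, 0.33])
v = np.array([0.10, 0.75, 0.07, 0.40])

print("cosine similarity:", cosine_similarity(u, v))    # higher = more similar
print("euclidean distance:", euclidean_distance(u, v))  # lower = more similar
print("manhattan distance:", manhattan_distance(u, v))  # lower = more similar
```

Note that cosine similarity increases as the texts become more similar, while the two distance measures decrease; a distance can be mapped to a similarity score, for example via 1 / (1 + distance).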