Numeric Similarity
Extracts numeric values from the generated output and computes the absolute or normalised difference against the numeric value in the reference.
```python
# Assumption: the Python SDK mirrors the TypeScript package below; adjust the
# import to match your installed Future AGI evaluation SDK.
from fi.evals import Evaluator

evaluator = Evaluator()
result = evaluator.evaluate(
    eval_templates="numeric_similarity",
    inputs={
        "expected": "The Eiffel Tower is a famous landmark in Paris, built in 1889 for the World's Fair. It stands 324 meters tall.",
        "output": "The Eiffel Tower, located in Paris, was built in 1889 and is 324 meters high."
    },
    model_name="turing_flash"
)
print(result.eval_results[0].output)
print(result.eval_results[0].reason)
```

```typescript
import { Evaluator, Templates } from "@future-agi/ai-evaluation";

const evaluator = new Evaluator();
const result = await evaluator.evaluate(
  "numeric_similarity",
  {
    expected: "The Eiffel Tower is a famous landmark in Paris, built in 1889 for the World's Fair. It stands 324 meters tall.",
    output: "The Eiffel Tower, located in Paris, was built in 1889 and is 324 meters high."
  },
  {
    modelName: "turing_flash",
  }
);
console.log(result);
```

Input
| Required Input | Type | Description |
|---|---|---|
| expected | string | Reference content with the expected numeric value. |
| output | string | Model-generated content containing the numeric prediction. |
Output
| Field | Description |
|---|---|
| Result | Returns a score representing the normalized difference between the numeric values. |
| Reason | Provides a detailed explanation of the numeric similarity assessment. |
Purpose of Numeric Similarity Eval
- Evaluates the accuracy of numerical values in model-generated outputs.
- Unlike semantic or lexical metrics, which can overlook numeric discrepancies, Numeric Similarity measures numeric correctness explicitly.
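To build intuition for what a normalized numeric difference looks like, here is a minimal sketch of the idea: extract numbers from both strings with a regex, pair each reference number with its closest counterpart in the output, and score each pair by the difference normalized to the larger magnitude. The function name and the pairing/averaging strategy are illustrative assumptions, not the template's actual implementation.

```python
import re

def numeric_similarity(expected: str, output: str) -> float:
    """Illustrative sketch only; the real eval's extraction and
    aggregation logic may differ."""
    number = r"-?\d+(?:\.\d+)?"
    nums_e = [float(n) for n in re.findall(number, expected)]
    nums_o = [float(n) for n in re.findall(number, output)]
    if not nums_e or not nums_o:
        return 0.0
    scores = []
    for e in nums_e:
        # Pair the reference number with the closest output number.
        o = min(nums_o, key=lambda x: abs(x - e))
        # Normalize the absolute difference by the larger magnitude,
        # so the score lands in [0, 1] with 1 meaning an exact match.
        denom = max(abs(e), abs(o)) or 1.0
        scores.append(max(0.0, 1.0 - abs(e - o) / denom))
    return sum(scores) / len(scores)
```

On the Eiffel Tower example above, both 1889 and 324 appear verbatim in the output, so this sketch would score 1.0; an output claiming "300 meters" instead would be penalized in proportion to the relative error.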