```python
# Assumes `evaluator` is an already-initialized evaluator client from the SDK.
result = evaluator.evaluate(
    eval_templates="is_concise",
    inputs={
        "output": "Honey doesn't spoil because its low moisture and high acidity prevent the growth of bacteria and other microbes."
    },
    model_name="turing_flash"
)

print(result.eval_results[0].output)
print(result.eval_results[0].reason)
```
Input

| Required Input | Type | Description |
| --- | --- | --- |
| output | string | Content generated by the model, to be evaluated for conciseness |

Output

| Field | Description |
| --- | --- |
| Result | Returns Passed if the content is concise, or Failed if it's not |
| Reason | Provides a detailed explanation of the evaluation |
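As a sketch of how the result fields above might be consumed, the snippet below uses small illustrative stand-in classes (not the SDK's actual types) mirroring the `result.eval_results[0].output` / `.reason` shape, and gates on the Passed/Failed verdict:

```python
from dataclasses import dataclass
from typing import List

# Illustrative stand-ins for the response shape shown above; the real SDK
# returns its own objects with the same eval_results[0].output / .reason fields.
@dataclass
class EvalResult:
    output: str  # "Passed" or "Failed"
    reason: str

@dataclass
class EvaluationResponse:
    eval_results: List[EvalResult]

def passed_conciseness(response: EvaluationResponse) -> bool:
    """Return True when the is_concise evaluation marked the content as Passed."""
    return response.eval_results[0].output == "Passed"

demo = EvaluationResponse(
    eval_results=[EvalResult(output="Passed", reason="Short and direct.")]
)
print(passed_conciseness(demo))  # True
```

A check like this can be used to fail a CI step or trigger a regeneration loop when a response is judged too verbose.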

Troubleshooting

If you encounter issues with this evaluation:
  • Remember that conciseness depends on context - what’s concise for a complex topic might still be relatively lengthy
  • This evaluation works best on complete responses rather than fragments
  • Very short responses may be marked as concise but might fail other evaluations like completeness
  • Consider the balance between conciseness and adequate information - extremely brief responses might miss important details
Related Evaluations

  • completeness: Ensures that, despite being concise, the response addresses all aspects of a query
  • is-helpful: Evaluates whether the response is actually useful despite its brevity
  • instruction-adherence: Checks whether the response follows instructions, which may include requirements for detail
  • length-evals: Provides quantitative metrics about text length