This evaluation template assesses whether an AI response is concise and to the point: direct, efficient, and free of unnecessary words or details.

Python SDK Usage

# `evaluator` is assumed to be an SDK Evaluator client that has
# already been initialized and authenticated.
result = evaluator.evaluate(
    eval_templates="is_concise",
    inputs={
        "input": "Honey doesn't spoil because its low moisture and high acidity prevent the growth of bacteria and other microbes."
    },
    model_name="turing_flash"
)

print(result.eval_results[0].output)
print(result.eval_results[0].reason)
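Each entry in `eval_results` carries an `output` (the verdict) and a `reason` (the explanation). A minimal sketch of reading those fields, using a stand-in result object since the exact verdict strings depend on the SDK:

```python
from types import SimpleNamespace

# Stand-in for the SDK's result object; the real field values are
# produced by the evaluation service, not hard-coded like this.
result = SimpleNamespace(eval_results=[
    SimpleNamespace(
        output="Passed",
        reason="Response states the key facts without filler.",
    )
])

for eval_result in result.eval_results:
    # Surface each eval's verdict alongside its explanation.
    print(f"{eval_result.output}: {eval_result.reason}")
```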

Troubleshooting

If you encounter issues with this evaluation:
  • Conciseness depends on context: what is concise for a complex topic may still be relatively lengthy
  • This evaluation works best on complete responses rather than fragments
  • Very short responses may be marked as concise yet fail other evaluations, such as completeness
  • Balance conciseness against adequate information: extremely brief responses can omit important details

Related Evaluations

  • completeness: Ensures that, despite being concise, the response addresses all aspects of the query
  • is-helpful: Evaluates whether the response is actually useful despite its brevity
  • instruction-adherence: Checks whether the response follows instructions, which may include requirements for detail
  • length-evals: Provides quantitative metrics about text length
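To make the conciseness-versus-completeness tradeoff concrete, here is a minimal local sketch. It is not the SDK's implementation: `naive_conciseness` and `naive_completeness` are hypothetical stand-ins that use word count and keyword coverage as crude proxies for the two evaluations.

```python
def naive_conciseness(text: str, max_words: int = 40) -> bool:
    # Crude proxy: treat a response as concise if it stays under a word budget.
    return len(text.split()) <= max_words

def naive_completeness(text: str, required_terms: list[str]) -> bool:
    # Crude proxy: treat a response as complete if it mentions every key term.
    lowered = text.lower()
    return all(term.lower() in lowered for term in required_terms)

answer = ("Honey doesn't spoil because its low moisture and high acidity "
          "prevent the growth of bacteria and other microbes.")

print(naive_conciseness(answer))                                        # → True
print(naive_completeness(answer, ["moisture", "acidity", "bacteria"]))  # → True
```

A response can pass one check and fail the other: trimming the answer to "Honey doesn't spoil." would still be concise under this sketch but would no longer cover the required terms, which is exactly why the real evaluations are meant to be run together.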