# `evaluator` is assumed to be an evaluation client already initialised earlier in the setup.
result = evaluator.evaluate(
    eval_templates="tone",        # built-in Tone eval template
    inputs={
        "output": "Dear Sir, I hope this email finds you well. I look forward to any insights or advice you might have whenever you have a free moment"
    },
    model_name="turing_flash"     # model used to run the evaluation
)

# The detected tone and the explanation behind it.
print(result.eval_results[0].output)
print(result.eval_results[0].reason)
Input

Required Input | Type   | Description
output         | string | Content to evaluate for tone.
Output

Field  | Description
Result | Returns the dominant emotional tone detected in the content.
Reason | Provides a detailed explanation of the tone evaluation.
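The Result and Reason fields can be read from the returned object and used to gate content programmatically. The sketch below is illustrative only: the expected tone label ("professional") and the mismatch handling are assumptions, not documented output values.

# Minimal sketch of consuming the eval output; the expected label and the
# handling below are illustrative assumptions.
detected_tone = result.eval_results[0].output   # dominant tone label
explanation = result.eval_results[0].reason     # why that tone was assigned

EXPECTED_TONE = "professional"                  # hypothetical target tone
if detected_tone.lower() != EXPECTED_TONE:
    print(f"Tone mismatch: got '{detected_tone}', expected '{EXPECTED_TONE}'")
    print(f"Reason: {explanation}")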

What to Do If You Get an Undesired Tone in Content

  • Adjust the tone of the content to match the intended emotional context or communication goal, making sure it is appropriate for the audience and purpose.
  • Use the tone analysis results to refine the messaging, making it more engaging, professional, or empathetic as needed, and re-run the eval on the revised draft (see the sketch below).
  • Continuously improve tone detection models so they better recognize and interpret nuanced emotional expressions, leading to more accurate, context-aware assessments.
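As a minimal sketch of that revise-and-recheck workflow, the snippet below re-runs the same Tone eval on a rewritten draft using the same evaluator client as above. The revised text is a hypothetical rewrite, not SDK output.

# Sketch: re-evaluate a manually revised draft with the same Tone template.
# The revised text below is a hypothetical rewrite.
revised_output = (
    "Hi Sam, thanks again for your time. Could you share any advice "
    "by Friday so we can move forward?"
)

revised_result = evaluator.evaluate(
    eval_templates="tone",
    inputs={"output": revised_output},
    model_name="turing_flash",
)

print("Revised tone:", revised_result.eval_results[0].output)
print("Reason:", revised_result.eval_results[0].reason)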

Comparing Tone with Similar Evals

  • Toxicity: While Tone Analysis evaluates the emotional context and sentiment of the text, Toxicity evaluation focuses on identifying language that is harmful or offensive.
  • Sexist: Tone Analysis is about understanding emotional context, whereas Sexist Content Detection specifically targets language that perpetuates gender stereotypes or discrimination.
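To see how these evals differ in practice, you can run them side by side on the same content. The sketch below assumes the related templates are registered under the names "toxicity" and "sexist"; check your template catalogue for the exact identifiers.

# Sketch: run Tone alongside related evals on the same text.
# Template names "toxicity" and "sexist" are assumptions; confirm the exact
# identifiers in your template catalogue.
text = "Dear Sir, I hope this email finds you well."

for template in ["tone", "toxicity", "sexist"]:
    res = evaluator.evaluate(
        eval_templates=template,
        inputs={"output": text},
        model_name="turing_flash",
    )
    print(template, "->", res.eval_results[0].output)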