Evaluation Using Interface

Input:

  • Required Inputs:
    • output: The output column generated by the model.

Output:

  • Result: Returns a category such as “neutral”, “joy”, etc., indicating the dominant emotional tone detected in the content.

Evaluation Using Python SDK

Click here to learn how to set up evaluation using the Python SDK.

Input:

  • Required Inputs:
    • output: string - The output column generated by the model.

Output:

  • Result: string - Returns a category such as “neutral”, “joy”, etc., indicating the dominant emotional tone detected in the content.
from fi.testcases import TestCase
from fi.evals.templates import Tone
from fi.evals import Evaluator  # evaluation client; see the SDK setup guide linked above

# Initialize the evaluation client (placeholder credentials)
evaluator = Evaluator(fi_api_key="your_api_key", fi_secret_key="your_secret_key")

tone_eval = Tone()

# The Tone template reads the model-generated text from the `output` field
test_case = TestCase(
    output="Dear Sir, I hope this email finds you well. I look forward to any insights or advice you might have whenever you have a free moment"
)

result = evaluator.evaluate(eval_templates=[tone_eval], inputs=[test_case])
tone_result = result.eval_results[0].data[0]  # detected tone category, e.g. "neutral"
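
Because eval_templates and inputs are both lists, a single evaluate call can, in principle, score several outputs at once. The sketch below assumes eval_results comes back with one entry per test case, in input order; verify this against the SDK version you are using.

# Hypothetical batch run: several model outputs scored in one call
test_cases = [
    TestCase(output="Thanks so much, this made my day!"),
    TestCase(output="I am extremely disappointed with this response."),
]

batch_result = evaluator.evaluate(eval_templates=[tone_eval], inputs=test_cases)

# Assumes one eval_results entry per test case, in input order
for case, eval_result in zip(test_cases, batch_result.eval_results):
    print(case.output, "->", eval_result.data[0])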


What to Do If You Get an Undesired Tone in Content

Adjust the tone of the content to align with the intended emotional context or communication goal, ensuring it is appropriate for the audience and purpose.

Use tone analysis to refine messaging, making it more engaging, professional, or empathetic as needed. Continuously improve tone detection models to better recognize and interpret nuanced emotional expressions, leading to more accurate and context-aware assessments. One way to operationalize this is sketched below.
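
A practical pattern is to gate generation on the Tone eval: if the detected category is not in an allowed set, re-prompt the model with an explicit tone instruction. The sketch below reuses the evaluator, Tone, and TestCase from the snippet above; generate_reply is a placeholder for whatever model call you use, and the allowed-tone set is an assumption to tune for your audience and purpose.

ALLOWED_TONES = {"neutral", "joy"}  # hypothetical target tones for this use case

def generate_with_tone_check(prompt: str, max_retries: int = 2) -> str:
    """Regenerate until the Tone eval returns an allowed category."""
    text = generate_reply(prompt)  # placeholder for your model call
    for _ in range(max_retries):
        result = evaluator.evaluate(
            eval_templates=[Tone()], inputs=[TestCase(output=text)]
        )
        detected = result.eval_results[0].data[0]
        if detected in ALLOWED_TONES:
            break
        # Re-prompt with an explicit tone instruction
        text = generate_reply(
            f"{prompt}\n\nRewrite your reply in a neutral, professional tone."
        )
    return text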


Comparing Tone with Similar Evals

  • Toxicity: While Tone Analysis evaluates the emotional context and sentiment of the text, Toxicity evaluation focuses on identifying language that is harmful or offensive.
  • Sexist: Tone Analysis is about understanding emotional context, whereas Sexist Content Detection specifically targets language that perpetuates gender stereotypes or discrimination.
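
Assuming the SDK exposes Toxicity and Sexist templates alongside Tone (an assumption based on the comparison above; check the templates module for exact names), all three can be run against the same output in one call, reusing the evaluator from earlier:

from fi.testcases import TestCase
from fi.evals.templates import Tone, Toxicity, Sexist  # template names assumed

templates = [Tone(), Toxicity(), Sexist()]
test_case = TestCase(output="I can't believe you shipped this garbage.")

result = evaluator.evaluate(eval_templates=templates, inputs=[test_case])

# Assumes eval_results are returned in the same order as eval_templates
for template, eval_result in zip(templates, result.eval_results):
    print(type(template).__name__, "->", eval_result.data[0])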