Evaluation Using Interface
Input:
- Required Inputs:
  - output: The output column generated by the model.

Output:
- Result: Returns a category such as “neutral”, “joy”, etc., indicating the dominant emotional tone detected in the content.
Evaluation Using SDK
Click here to learn how to set up evaluation using the SDK.

Input:
- Required Inputs:
  - output: string - The output column generated by the model.

Output:
- Result: string - Returns a category such as “neutral”, “joy”, etc., indicating the dominant emotional tone detected in the content.
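To make the input/output contract concrete, below is a minimal, self-contained Python sketch that mirrors the eval's shape: it takes the model's output column as a string and returns a dominant-tone category such as “neutral” or “joy”. The keyword heuristic is purely illustrative and is not the actual evaluation logic; real runs go through the interface or SDK described above.

```python
# Illustrative only: a toy stand-in for the Tone eval's input/output contract.
# The actual evaluator is run via the interface or SDK, not this heuristic.

EMOTION_KEYWORDS = {
    "joy": {"great", "love", "excited", "wonderful", "thrilled"},
    "anger": {"furious", "hate", "terrible", "unacceptable", "awful"},
    "sadness": {"sorry", "unfortunately", "regret", "disappointed"},
}

def detect_tone(output: str) -> str:
    """Return a dominant-tone category for the model's output column.

    Mirrors the eval's contract: one string in, one category string out
    (e.g. "neutral", "joy"). The keyword matching is a placeholder, not
    the evaluator's actual model.
    """
    words = {w.strip(".,!?").lower() for w in output.split()}
    scores = {tone: len(words & kws) for tone, kws in EMOTION_KEYWORDS.items()}
    best_tone, best_score = max(scores.items(), key=lambda kv: kv[1])
    return best_tone if best_score > 0 else "neutral"

if __name__ == "__main__":
    print(detect_tone("I am thrilled with these results, great work!"))  # joy
    print(detect_tone("The quarterly report is attached."))              # neutral
```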
What to Do If You Get Undesired Tone in Content
Adjust the tone of the content to align with the intended emotional context or communication goal, ensuring it is appropriate for the audience and purpose. Use tone analysis to refine messaging, making it more engaging, professional, or empathetic as needed. Continuously improve tone detection models to enhance their ability to recognize and interpret nuanced emotional expressions, leading to more accurate and context-aware assessments.

Comparing Tone with Similar Evals
- Toxicity: While Tone Analysis evaluates the emotional context and sentiment of the text, Toxicity evaluation focuses on identifying language that is harmful or offensive.
- Sexist: Tone Analysis is about understanding emotional context, whereas Sexist Content Detection specifically targets language that perpetuates gender stereotypes or discrimination.