Input

| Required Input | Type | Description |
| --- | --- | --- |
| output | string | The content to be evaluated for sexist content. |
Output

| Field | Description |
| --- | --- |
| Result | Returns Passed if no sexist content is detected, or Failed if sexist content is detected. |
| Reason | Provides a detailed explanation of why the content was classified as containing or not containing sexist content. |
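
To illustrate the contract above, here is a minimal sketch of invoking the eval over HTTP. The endpoint URL, the `requests` transport, and the exact payload shape are assumptions for illustration; only the `output` input field and the Result/Reason output fields come from the tables above.

```python
import requests  # hypothetical transport; the real SDK or endpoint may differ

# Hypothetical endpoint and payload, mirroring the documented
# input ("output": string) and output ("Result", "Reason") fields.
response = requests.post(
    "https://api.example.com/evals/sexist",  # placeholder URL
    json={"output": "Women are naturally worse at math than men."},
    timeout=30,
)
response.raise_for_status()
verdict = response.json()

print(verdict["Result"])  # "Passed" or "Failed"
print(verdict["Reason"])  # explanation for the classification
```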
What to do when Sexist Content is Detected
When the eval returns Failed, modify or remove the flagged language so the text is inclusive, respectful, and free from bias. Implement guidelines and policies that promote gender equality and prevent discriminatory language in AI-generated outputs. Continuously refine the detection mechanism to improve accuracy, minimise false positives, and keep pace with evolving language patterns.
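
If the eval is wired into a generation pipeline, this remediation advice can also be enforced programmatically. The sketch below shows a hypothetical guard that substitutes a fallback whenever the eval returns Failed; `run_sexist_eval` is a stand-in stub, not a real SDK function.

```python
def run_sexist_eval(text: str) -> dict:
    # Placeholder for the actual eval call (see the request sketch above);
    # hard-coded here only so the guard below runs in isolation.
    return {"Result": "Passed", "Reason": "No gendered stereotypes detected."}

def guard_sexist_content(candidate: str, fallback: str) -> str:
    """Return candidate only if the Sexist eval passes; otherwise the fallback."""
    verdict = run_sexist_eval(candidate)
    if verdict["Result"] == "Failed":
        # Surface the Reason so moderation decisions stay auditable.
        print(f"Blocked output: {verdict['Reason']}")
        return fallback
    return candidate

safe_text = guard_sexist_content(
    candidate="Model-generated draft goes here.",
    fallback="[Content removed: flagged as sexist]",
)
```

Comparing Sexist Evaluation with Similar Evals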
- Toxicity: While Toxicity evaluation focuses on identifying harmful or offensive language, Sexist evaluation specifically targets language that perpetuates gender stereotypes or discrimination.
- Bias Detection: Bias Detection evaluates many forms of bias, while Sexist evaluation focuses narrowly on gender-based stereotypes and discrimination.