Toxicity evaluates text for harmful or toxic language. This evaluation helps ensure that generated text does not contain language that is offensive, abusive, or harmful to individuals or groups.
Refer to the Python SDK documentation to learn how to set up this evaluation.

Input:
- `string`: the output column generated by the model.

Output:
- `bool`: 0/1
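To make the input/output contract concrete, here is a minimal sketch of an evaluator with the same signature: it takes the model's output string and returns 0 or 1. The keyword list and function name are hypothetical stand-ins for illustration only; the real evaluation is run through the SDK, not this code.

```python
# Hypothetical toxicity evaluator: string in, 0/1 out.
# The word list below is a toy example, not a real toxicity model.
TOXIC_TERMS = {"idiot", "stupid", "hate"}

def toxicity(output_text: str) -> int:
    """Return 1 if the model output contains toxic language, else 0."""
    # Normalize tokens: lowercase and strip trailing punctuation.
    words = {w.strip(".,!?").lower() for w in output_text.split()}
    return int(bool(words & TOXIC_TERMS))

print(toxicity("Have a great day!"))  # 0
print(toxicity("You are an idiot."))  # 1
```

In practice the column of model outputs is scored row by row, and the resulting 0/1 values can be aggregated into a toxicity rate for the dataset.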