As AI models become increasingly integrated into everyday applications, ensuring their outputs are safe, inclusive, and aligned with ethical standards is essential. AI-generated content must avoid explicit harm while fostering trust, fairness, and cultural awareness so that it remains suitable for diverse audiences.

Safety in AI extends beyond preventing offensive content: it encompasses bias mitigation, respect for societal norms, and adherence to ethical and legal standards. A well-moderated AI system enhances user confidence, minimising the risk of discrimination, misinformation, or inappropriate language while keeping AI reliable, socially responsible, and widely accessible.

To achieve this, the following evaluations systematically assess AI-generated content. Together they form a structured framework for ensuring that AI-driven communication is ethical, respectful, and suitable for all users:
Export your API key and Secret key into your environment variables.
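For example, the keys can be set from Python before constructing the evaluator client used in the snippets below. This is a minimal sketch: the environment variable names (FI_API_KEY, FI_SECRET_KEY) and the Evaluator import path are assumptions here, so substitute whatever your installation of the SDK expects.

import os

# Assumed environment variable names -- replace with the ones your SDK expects
os.environ["FI_API_KEY"] = "<your API key>"
os.environ["FI_SECRET_KEY"] = "<your secret key>"

# Assumed import path for the evaluator client used throughout this page
from fi.evals import Evaluator

evaluator = Evaluator()  # assumed to read the keys from the environment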
result = evaluator.evaluate( eval_templates="tone", inputs={ "input": "Dear Sir, I hope this email finds you well. I look forward to any insights or advice you might have whenever you have a free moment" }, model_name="turing_flash")print(result.eval_results[0].output)print(result.eval_results[0].reason)
Identifies content with gender bias or sexist language. Checks for stereotypes, discriminatory language, imbalanced representation, or assumptions based on gender. Click here to read the eval definition of Sexist
Input: The text content to check for sexist content
Output: Returns either “Passed” or “Failed”, where “Passed” indicates no sexist content detected and “Failed” indicates the presence of gender bias or discriminatory language
result = evaluator.evaluate( eval_templates="sexist", inputs={ "input": "Dear Sir, I hope this email finds you well. I look forward to any insights or advice you might have whenever you have a free moment" }, model_name="turing_flash")print(result.eval_results[0].output)print(result.eval_results[0].reason)
Evaluates content for toxic, harmful, or aggressive language, such as profanity, threats, or abusive language that could harm user relationships or escalate conflicts. Click here to read the eval definition of Toxicity
result = evaluator.evaluate( eval_templates="toxicity", inputs={ "input": "Dear Sir, I hope this email finds you well. I look forward to any insights or advice you might have whenever you have a free moment" }, model_name="turing_flash")print(result.eval_results[0].output)print(result.eval_results[0].reason)
result = evaluator.evaluate( eval_templates="content_moderation", inputs={ "text": "I want to hurt someone who made me angry today." })print(result.eval_results[0].output)print(result.eval_results[0].reason)
Identifies biases in the output, including gender, racial, cultural, or ideological biases. An ideal AI-generated response uses neutral language without favouring or discriminating against any group. Click here to read the eval definition of Bias Detection
result = evaluator.evaluate( eval_templates="bias_detection", inputs={ "input": "Dear Sir, I hope this email finds you well. I look forward to any insights or advice you might have whenever you have a free moment" }, model_name="turing_flash")print(result.eval_results[0].output)print(result.eval_results[0].reason)
Input: The text content to analyse for cultural appropriateness
Output: Returns either “Passed” or “Failed”, where “Passed” indicates culturally appropriate content and “Failed” indicates potential cultural insensitivity
result = evaluator.evaluate( eval_templates="cultural_sensitivity", inputs={ "input": "Dear Sir, I hope this email finds you well. I look forward to any insights or advice you might have whenever you have a free moment" }, model_name="turing_flash")print(result.eval_results[0].output)print(result.eval_results[0].reason)
By integrating these evaluation methods, AI systems can consistently produce responsible, reliable, and socially aware outputs that enhance user trust and engagement.
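As a usage illustration, the safety checks above can be run together over a single piece of content by iterating over the template names. This is a minimal sketch that reuses the evaluator object and call pattern from the examples on this page; the template list and input text are placeholders, not a prescribed workflow.

# Run each safety evaluation over the same content and report pass/fail with reasons
templates = ["tone", "sexist", "toxicity", "bias_detection", "cultural_sensitivity"]
text = "Dear Sir, I hope this email finds you well."

for template in templates:
    result = evaluator.evaluate(
        eval_templates=template,
        inputs={"input": text},
        model_name="turing_flash"
    )
    # Each result carries an output ("Passed"/"Failed") and an explanatory reason
    print(template, result.eval_results[0].output, result.eval_results[0].reason)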