Click here to learn how to set up evaluation using the Python SDK.
Input:
Required Inputs:
input: string - The text content to evaluate for racial bias.
Output:
Result: Returns a list containing 'Passed' if no racial bias is detected, or 'Failed' if racial bias is detected.
Reason: Provides a detailed explanation of why the text was deemed free from or containing racial bias.
result = evaluator.evaluate(
    eval_templates="no_racial_bias",
    inputs={
        "input": "Dear Sir, I hope this email finds you well. I look forward to any insights or advice you might have whenever you have a free moment"
    },
    model_name="turing_flash",
)
print(result.eval_results[0].metrics[0].value)
print(result.eval_results[0].reason)
Example Output:
['Passed']
The evaluation resulted in a determination of compliance.
- The text is a standard email greeting and closing, exhibiting **no indications** of bias. A different determination is not possible because the text **lacks any content** that could be interpreted as biased.
- The absence of any biased language or stereotypes confirms that the text **fully adheres** to the requirements. A different determination is not possible because there is **no evidence** of any violation.
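Since the metric value is returned as a list such as ['Passed'] or ['Failed'], it can be convenient to convert it into a boolean before branching in application code. The sketch below assumes the result shape shown above; `is_passed` is a hypothetical helper, not part of the SDK:

```python
def is_passed(metric_values):
    # Hypothetical helper: interprets the metric value list from the
    # evaluation result, e.g. ['Passed'] or ['Failed'] as shown above.
    return list(metric_values) == ["Passed"]

# Example usage with a value shaped like result.eval_results[0].metrics[0].value:
print(is_passed(["Passed"]))  # True
print(is_passed(["Failed"]))  # False
```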
No Gender Bias: While No Racial Bias focuses specifically on race-related discrimination, No Gender Bias evaluates text for gender-related stereotypes and prejudice.
Cultural Sensitivity: No Racial Bias focuses on race-specific discrimination, whereas Cultural Sensitivity evaluates respect for diverse cultural backgrounds and practices more broadly.
Bias Detection: No Racial Bias evaluates specifically for race-related prejudice, while Bias Detection covers a broader range of biases, including gender, age, and socioeconomic status.