Evaluation using Interface

Input:

  • Required Inputs:
    • input: The text content column to analyse for bias.

Output:

  • Result: “Passed” / “Failed”

Interpretation:

  • Passed: Indicates the content is neutral and free from detectable bias.
  • Failed: Indicates the presence of gender, racial, cultural, ideological, or other forms of bias in the content.

Evaluation using Python SDK



Input            Parameter   Type     Description
Required Inputs  input       string   The text content to analyse for bias.

Output   Type   Description
Result   bool   Returns 1 (indicating neutral content) or 0 (indicating the presence of detectable bias).
from fi.evals import Evaluator
from fi.evals.templates import BiasDetection
from fi.testcases import TestCase

# Initialise the evaluation client (replace the placeholder keys with
# your own credentials)
evaluator = Evaluator(fi_api_key="your_api_key", fi_secret_key="your_secret_key")

# Instantiate the Bias Detection template
bias_eval = BiasDetection()

# Wrap the text to analyse in a test case
test_case = TestCase(
    input="This is a sample text to check for bias detection"
)

# Run the evaluation and read the first result (1 = neutral, 0 = bias)
result = evaluator.evaluate(eval_templates=[bias_eval], inputs=[test_case])
bias_result = result.eval_results[0].data[0]
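
The returned value maps directly onto the Passed/Failed outcome shown in the interface. A minimal sketch of converting it back to those labels, following the 1/0 convention in the output table above:

# Translate the boolean result into the interface labels
label = "Passed" if bias_result == 1 else "Failed"
print(f"Bias Detection: {label}")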


What to do if Bias is Detected

Analyse the text for language or perspectives that indicate partiality, unfairness, or a lack of neutrality. Identifying specific instances of bias allows for targeted refinements that make the text more balanced and inclusive while preserving its original intent.
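
A practical pattern is to revise the flagged text and re-run the same evaluation until it passes. A minimal sketch, reusing the evaluator client and imports from the example above (the revised sentence is purely illustrative):

# Re-check a revised draft with the same template
revised = TestCase(input="Engineers should document their design decisions clearly.")
recheck = evaluator.evaluate(eval_templates=[BiasDetection()], inputs=[revised])

if recheck.eval_results[0].data[0] == 1:
    print("Revised text is neutral")
else:
    print("Bias still detected; refine further")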


Differentiating Bias Detection from Cultural Sensitivity

Bias Detection identifies and evaluates bias in text to ensure fairness and neutrality, while Cultural Sensitivity assesses language and content for appropriateness in relation to cultural contexts, promoting inclusivity and respect for diversity.

In practice, Bias Detection flags any form of bias that introduces unfairness or a lack of neutrality, whereas Cultural Sensitivity evaluates inclusivity, cultural awareness, and the absence of insensitive language.
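
Because the two checks answer different questions, it can be useful to run them side by side on the same content. A minimal sketch, reusing the evaluator client from the example above and assuming a CulturalSensitivity template is exported from fi.evals.templates alongside BiasDetection (the template name is an assumption; verify it against the SDK reference):

from fi.evals.templates import BiasDetection, CulturalSensitivity

# Run both evaluations on the same test case; CulturalSensitivity is an
# assumed template name, check the SDK reference before use
test_case = TestCase(input="Text to screen before publication")
result = evaluator.evaluate(
    eval_templates=[BiasDetection(), CulturalSensitivity()],
    inputs=[test_case],
)
for eval_result in result.eval_results:
    print(eval_result.data[0])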