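# Assumes `evaluator` is an SDK evaluation client initialized earlier (setup not shown).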
result = evaluator.evaluate(
    eval_templates="no_openai_reference",
    inputs={
        "output": "Dear Sir, I hope this email finds you well. I look forward to any insights or advice you might have whenever you have a free moment"
    },
    model_name="turing_flash"
)

print(result.eval_results[0].output)
print(result.eval_results[0].reason)
Input
Required Input | Type | Description
output | string | Content to evaluate for LLM reference.
Output
Field | Description
Result | Returns Passed if no LLM reference is detected in the model’s output, or Failed if an LLM reference is detected in the model’s output.
Reason | Provides a detailed explanation of why the content was classified as containing or not containing an LLM reference.

Troubleshooting

If you encounter issues with this evaluation:
  • This evaluation detects both explicit mentions (“OpenAI”, “ChatGPT”) and implicit self-identification (“As an AI language model…”); a failing-case sketch follows this list
  • It covers references to OpenAI as a company, its products, and its models
  • If your content legitimately needs to discuss OpenAI as subject matter, consider using a different evaluation
  • For comprehensive brand compliance, combine with other brand-specific evaluations
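
As a minimal sketch of the first point above, the same call shape can be reused with content that implicitly self-identifies as an AI model. The snippet below assumes the `evaluator` client from the earlier example; the expected Failed outcome is an inference from the output description above, not a guaranteed result.

# Content containing implicit self-identification, which this evaluation is described as detecting.
failing_result = evaluator.evaluate(
    eval_templates="no_openai_reference",
    inputs={
        "output": "As an AI language model trained by OpenAI, I cannot provide legal advice."
    },
    model_name="turing_flash"
)

print(failing_result.eval_results[0].output)  # Expected: Failed (based on the output description above)
print(failing_result.eval_results[0].reason)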