Use Future AGI Models

Future AGI's proprietary models are trained on a wide variety of datasets to perform evaluations.

What it is

Future AGI models are proprietary models built and optimized specifically for evaluation tasks — scoring, judging, and assessing AI outputs. They support text natively, with select models extending to images and audio. Each model offers a different trade-off between accuracy and latency, making them suited for different evaluation workloads across the platform and SDK.


Available models

  • TURING_LARGE (turing_large): Flagship evaluation model that delivers best-in-class accuracy across multimodal inputs (text, images, audio). Recommended when maximal precision outweighs latency constraints.

  • TURING_SMALL (turing_small): Compact variant that preserves high evaluation fidelity while lowering computational cost. Supports text and image evaluations.

  • TURING_FLASH (turing_flash): Latency-optimized version of TURING, providing high-accuracy assessments for text and image inputs with fast response times.

  • PROTECT (protect): Real-time guardrailing model for safety, policy compliance, and content-risk detection. Offers very low latency on text and audio streams and permits user-defined rule sets.

  • PROTECT_FLASH (protect_flash): Ultra-fast binary guardrail for text content. Designed for first-pass filtering where millisecond-level turnaround is critical.


Quick comparison

| Model | Code | Inputs | Best for | Latency |
| --- | --- | --- | --- | --- |
| TURING_LARGE | turing_large | Text, image, audio | Max accuracy, multimodal evals | Higher |
| TURING_SMALL | turing_small | Text, image | High fidelity, lower cost | Medium |
| TURING_FLASH | turing_flash | Text, image | Fast, high-accuracy evals | Low |
| PROTECT | protect | Text, audio | Safety, guardrails, user-defined rules | Low |
| PROTECT_FLASH | protect_flash | Text | First-pass binary filtering | Ultra-low |
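As a rough illustration, the trade-offs in the table above can be encoded in a small selection helper. This is a hypothetical sketch for choosing a model code by input types and latency budget; neither the mapping nor the function is part of the Future AGI SDK:

```python
# Hypothetical helper encoding the trade-offs from the comparison table.
# The capability map and pick_model() are illustrative, not part of the SDK.
MODEL_CAPABILITIES = {
    # Listed from most to least accurate; lower latency rank = faster.
    "turing_large": {"inputs": {"text", "image", "audio"}, "latency": 3},
    "turing_small": {"inputs": {"text", "image"}, "latency": 2},
    "turing_flash": {"inputs": {"text", "image"}, "latency": 1},
    "protect": {"inputs": {"text", "audio"}, "latency": 1},
    "protect_flash": {"inputs": {"text"}, "latency": 0},
}

def pick_model(required_inputs, max_latency):
    """Return the first (most accurate) model that supports every required
    input type within the latency budget, or None if nothing qualifies."""
    for name, caps in MODEL_CAPABILITIES.items():
        if set(required_inputs) <= caps["inputs"] and caps["latency"] <= max_latency:
            return name
    return None

print(pick_model({"text", "image"}, max_latency=1))  # turing_flash
```

For example, a text-and-image workload with a tight latency budget resolves to turing_flash, while relaxing the budget would prefer the more accurate turing_small or turing_large.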

How to use

  • In the UI — When you add or configure an evaluation (e.g. on a dataset or in a run test), choose Use Future AGI Models and pick one of the models above from the dropdown.

  • In the SDK — Pass the model_name (e.g. turing_small, turing_flash, protect) in your evaluate() call. See Running your first eval for setup and usage.

    from fi.evals import Evaluator
    
    evaluator = Evaluator(fi_api_key="...", fi_secret_key="...")
    result = evaluator.evaluate(
        eval_templates="tone",
        inputs={"input": "Your text to evaluate."},
        model_name="turing_small",  # or turing_flash, turing_large, protect, protect_flash
    )
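For guardrailing workloads, the same evaluate() call can be wrapped to swap in one of the PROTECT models. This is a hypothetical sketch assuming the evaluate() signature shown above; the guardrail() wrapper and the "toxicity" template name are illustrative, not part of the SDK:

```python
def guardrail(evaluator, text, model_name="protect_flash"):
    """Hypothetical first-pass filter: run a low-latency PROTECT model over
    a single piece of text via the evaluate() call shown above. The
    "toxicity" template name is illustrative; substitute any evaluation
    template your project uses."""
    return evaluator.evaluate(
        eval_templates="toxicity",
        inputs={"input": text},
        model_name=model_name,  # protect_flash for speed, protect for custom rules
    )
```

Because protect_flash is a binary guardrail with millisecond-level turnaround, a pattern like this can serve as a cheap pre-filter, escalating only flagged items to turing_large for a detailed evaluation.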
