Use Future AGI Models
Future AGI's proprietary models are trained on a wide variety of datasets to perform evaluations.
What it is
Future AGI models are proprietary models built and optimized specifically for evaluation tasks — scoring, judging, and assessing AI outputs. They support text natively, with select models extending to images and audio. Each model offers a different trade-off between accuracy and latency, making them suited for different evaluation workloads across the platform and SDK.
Available models
- **TURING_LARGE** (`turing_large`): Flagship evaluation model that delivers best-in-class accuracy across multimodal inputs (text, images, audio). Recommended when maximal precision outweighs latency constraints.
- **TURING_SMALL** (`turing_small`): Compact variant that preserves high evaluation fidelity while lowering computational cost. Supports text and image evaluations.
- **TURING_FLASH** (`turing_flash`): Latency-optimised version of TURING, providing high-accuracy assessments for text and image inputs with fast response times.
- **PROTECT** (`protect`): Real-time guardrailing model for safety, policy compliance, and content-risk detection. Offers very low latency on text and audio streams and supports user-defined rule sets.
- **PROTECT_FLASH** (`protect_flash`): Ultra-fast binary guardrail for text content. Designed for first-pass filtering where millisecond-level turnaround is critical.
Quick comparison
| Model | Code | Inputs | Best for | Latency |
|---|---|---|---|---|
| TURING_LARGE | turing_large | Text, image, audio | Max accuracy, multimodal evals | Higher |
| TURING_SMALL | turing_small | Text, image | High fidelity, lower cost | Medium |
| TURING_FLASH | turing_flash | Text, image | Fast, high-accuracy evals | Low |
| PROTECT | protect | Text, audio | Safety, guardrails, user-defined rules | Low |
| PROTECT_FLASH | protect_flash | Text | First-pass binary filtering | Ultra-low |
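The comparison table can be captured in a small lookup for programmatic model selection. The sketch below is illustrative only; the `MODELS` dict and `candidate_models` helper are hypothetical names, not part of the Future AGI SDK:

```python
# Illustrative lookup built from the comparison table above; the dict and
# helper names are hypothetical, not part of the Future AGI SDK.
MODELS = {
    "turing_large": {"inputs": {"text", "image", "audio"}, "latency": "higher"},
    "turing_small": {"inputs": {"text", "image"}, "latency": "medium"},
    "turing_flash": {"inputs": {"text", "image"}, "latency": "low"},
    "protect": {"inputs": {"text", "audio"}, "latency": "low"},
    "protect_flash": {"inputs": {"text"}, "latency": "ultra-low"},
}

def candidate_models(required_inputs, max_latency="higher"):
    """Return model codes that support all required inputs within a latency budget."""
    order = ["ultra-low", "low", "medium", "higher"]
    budget = order.index(max_latency)
    return [
        name
        for name, spec in MODELS.items()
        if set(required_inputs) <= spec["inputs"]
        and order.index(spec["latency"]) <= budget
    ]

print(candidate_models({"text", "image"}, max_latency="low"))  # -> ['turing_flash']
```

For example, a text-and-image workload with a low latency budget narrows the choice to `turing_flash`; relaxing the budget would also admit `turing_small` and `turing_large`.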
How to use
- **In the UI** — When you add or configure an evaluation (e.g. on a dataset or in a run test), choose **Use Future AGI Models** and pick one of the models above from the dropdown.
- **In the SDK** — Pass the `model_name` (e.g. `turing_small`, `turing_flash`, `protect`) in your `evaluate()` call. See Running your first eval for setup and usage.

  ```python
  from fi.evals import Evaluator

  evaluator = Evaluator(fi_api_key="...", fi_secret_key="...")

  result = evaluator.evaluate(
      eval_templates="tone",
      inputs={"input": "Your text to evaluate."},
      model_name="turing_small",  # or turing_flash, turing_large, protect, protect_flash
  )
  ```
What you can do next
- **Evaluate via Platform & SDK**: Run evals from the UI or SDK.
- **Create custom evals**: Define your own eval rules and choose a model to run them.
- **Eval groups**: Run multiple evals together as a group.
- **Use custom models**: Bring your own model for evaluations.
- **CI/CD pipeline**: Run evals automatically in your pipeline.
- **Evaluation overview**: How evaluation fits into the platform.