FAQs

Find answers to common questions about the Future AGI platform. If you can’t find what you’re looking for, reach out via support.


General

What is Future AGI?

Future AGI is an AI lifecycle platform that helps teams build, evaluate, monitor, and improve AI applications. It covers evaluation, observability, simulation, optimization, prompt management, safety guardrails, and an AI gateway.

How do I get started?

Start with the Installation page to set up the SDK, then follow one of the Quickstart guides to get your first integration running.

What languages and SDKs are supported?

Future AGI provides Python and TypeScript SDKs. The Prism AI Gateway also supports direct REST API calls via cURL or any HTTP client.


Evaluation

What types of evaluations can I perform?

Future AGI has 70+ built-in evaluation templates covering quality, safety, factuality, RAG retrieval, format, bias, audio, and image evaluation. You can also create custom evaluations. See Built-in Evals for the full list.

How do I run my first evaluation?

See Evaluate via Platform & SDK for step-by-step instructions using the UI or Python SDK.

How do I evaluate RAG applications?

Use retrieval-specific evals like context_adherence, chunk_attribution, and recall_score. See the RAG Evaluation cookbook for a walkthrough.


Dataset

How can I import data?

You can add data manually, upload files, use the SDK, or import datasets from Hugging Face. See Create New Dataset.

What are dynamic columns?

Dynamic columns generate data automatically by running prompts, evaluations, API calls, or code against your dataset rows. See Dynamic Columns.

Can I generate synthetic data?

Yes. Define a schema (columns, types, constraints) and the platform generates realistic rows. See Synthetic Data.
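As a generic illustration (not the platform's actual API), schema-driven generation can be sketched as: each column spec declares a type and constraints, and rows are sampled to satisfy them.

```python
import random

# Illustrative schema: column name -> type/constraint spec.
SCHEMA = {
    "age": {"type": "int", "min": 18, "max": 65},
    "country": {"type": "choice", "values": ["US", "IN", "DE"]},
}

def generate_rows(schema, n, seed=0):
    """Generate n rows whose values satisfy the schema's constraints."""
    rng = random.Random(seed)
    rows = []
    for _ in range(n):
        row = {}
        for col, spec in schema.items():
            if spec["type"] == "int":
                row[col] = rng.randint(spec["min"], spec["max"])
            elif spec["type"] == "choice":
                row[col] = rng.choice(spec["values"])
        rows.append(row)
    return rows

rows = generate_rows(SCHEMA, 3)
```

The platform's generator additionally uses LLMs to produce realistic free-text values, which this toy sampler does not attempt.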


Simulation

What is Simulation?

Simulation lets you test voice and chat AI agents against simulated customers in controlled scenarios before going live. See Simulation Overview.

How do I run a voice simulation?

Create an agent definition, scenarios, and personas, then run a test from the platform. See Run Voice Simulation.

Can I run chat simulations from code?

Yes, using the Python SDK. See Chat Simulation Using SDK.


Annotations

What are annotations?

Annotations are human labels applied to AI outputs (traces, spans, sessions, dataset rows). Use them for quality control, fine-tuning data, and safety review. See Annotations Overview.

What’s the difference between inline and queue-based annotations?

Inline annotations are quick, ad-hoc labels from detail views. Queue-based annotations use managed campaigns with assignment, progress tracking, and agreement metrics. See Inline Annotations.


Prompt Workbench

How can the Prompt Workbench help me?

The Workbench is where you create, version, test, and manage prompts. You can build from scratch, use templates, or generate with AI. See Prompt Overview.

How do I version and deploy prompts?

Every edit creates a new version. Assign labels (Production, Staging) to versions and fetch them at runtime via the SDK. See Versions and Labels.


Prototype

What is Prototype?

Prototype is a pre-production testing environment. You run multiple versions of your application side by side and compare eval scores, cost, and latency. See Prototype Overview.

How do I choose a winning version?

Use the Choose Winner flow to weight metrics and rank versions. See Choose Winner.


Optimization

How does optimization work?

Optimization takes a prompt, runs it against your data, scores the outputs with evaluations, and iteratively generates better versions using algorithms like Bayesian Search, Meta-Prompt, ProTeGi, GEPA, PromptWizard, or Random Search. See Optimization Overview.
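The propose–score–select loop above can be sketched generically (this is not the platform's implementation; a real run would call an LLM to produce outputs and judge-model evals to score them, where here the scorer is a toy stand-in):

```python
def optimize(base_prompt, variants, score):
    """Generate candidate prompt versions, score each, keep the best."""
    candidates = [base_prompt] + [base_prompt + " " + v for v in variants]
    return max(candidates, key=score)

# Toy scorer: reward prompts that ask for step-by-step reasoning.
def toy_score(p):
    return ("step by step" in p.lower()) + len(p) / 1000

best = optimize(
    "Summarize the document.",
    ["Think step by step.", "Be concise."],
    toy_score,
)
```

Algorithms like Bayesian Search or Meta-Prompt replace the brute-force candidate list with smarter proposal strategies, but the score-and-select core is the same.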

Can I optimize from the UI without code?

Yes. See Using the Platform.


Observability

What can I monitor with Observe?

Observe captures every LLM call, tool use, and agent decision as a trace. You can monitor latency, cost, token usage, and evaluation results. See Setup Observability.

How do I set up alerts?

Configure threshold-based alerts that notify you when metrics such as latency, cost, or evaluation scores cross defined limits. See Alerts & Monitors.


Protect

What does Protect guard against?

Protect screens inputs and outputs in real time across four dimensions: Content Moderation, Bias Detection, Security (prompt injection), and Data Privacy Compliance. See Protect Overview.

Can I use Protect with text, images, and audio?

Yes. Protect works across all three modalities. See Run Protect via SDK.


Prism AI Gateway

What is Prism?

Prism is Future AGI’s AI Gateway. It sits between your application and 100+ LLM providers, handling routing, guardrails, caching, cost tracking, and observability through a single API. See Prism Overview.

Do I need to change my code to use Prism?

No. If you already use the OpenAI SDK, point base_url to https://gateway.futureagi.com and use your Future AGI API key in place of your provider key. See the Prism Quickstart.

Can I self-host Prism?

Yes. See Self-Hosted Deployment.


Error Feed

What is Error Feed?

Error Feed automatically analyzes traces from your Observe projects, identifies agent errors, groups them into clusters, and provides fix recommendations. No configuration needed. See Error Feed Overview.


Knowledge Base

How do I add documents to a Knowledge Base?

Upload files via the UI or programmatically via the SDK.

What file types are supported?

PDF, DOCX, DOC, TXT, and RTF, with a maximum of 5 MB per file. See Understanding Knowledge Base.
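A client-side pre-check of these documented limits might look like the following sketch (the helper name is ours, not part of the SDK):

```python
import os

# Mirror the documented upload limits: PDF, DOCX, DOC, TXT, or RTF,
# at most 5 MB per file.
ALLOWED_EXTENSIONS = {".pdf", ".docx", ".doc", ".txt", ".rtf"}
MAX_BYTES = 5 * 1024 * 1024

def is_uploadable(path, size_bytes):
    """Return True if the file's extension and size pass the limits."""
    ext = os.path.splitext(path)[1].lower()
    return ext in ALLOWED_EXTENSIONS and size_bytes <= MAX_BYTES
```

Checking before upload gives faster feedback than waiting for the server to reject an oversized or unsupported file.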


Admin & Settings

Where do I find my API keys?

Go to Settings > API Keys. See API Keys.

How do I manage team members?

See User Management and Roles & Permissions.

How do I set up billing?

See Billing & Pricing.


Troubleshooting

My traces aren’t appearing in Observe.

Check that FI_API_KEY and FI_SECRET_KEY are set correctly. Verify the instrumentor is initialized before your first LLM call. See Setup Observability.
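For example, in a shell (placeholder values; the variable names come from this page):

```shell
# Credentials the instrumentor reads at startup -- set them before
# your application makes its first LLM call.
export FI_API_KEY="your-api-key"
export FI_SECRET_KEY="your-secret-key"
```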

Evaluations are failing with “model_name required”.

Some built-in evaluations require a judge model. Pass model_name="turing_flash" (or another judge model) in your evaluate call. See Judge Models.

I can’t find my API keys.

Go to Settings > API Keys. You need the Owner role. See API Keys.
