Set Up Prototype

Configure your environment, register your prototype project, and instrument your app so traces and evals appear in the Prototype dashboard.

About

Prototype lets you run multiple versions of your AI application side by side — different prompts, models, or parameters — and compare them on real outputs before deciding what goes to production. Setting up Prototype is how you bring your application into that environment.

You register your project with a version name, instrument your application so its LLM calls are automatically traced, and optionally attach evaluations so each run is scored. From that point, every generation your app makes is captured in the Prototype dashboard under the version it belongs to, ready to compare against other versions by quality, cost, and latency.


When to use

  • First-time prototype: Get your project and version registered and start sending traces so you can compare different prompts or models.
  • Comparing versions: Use project_version_name (projectVersionName in TypeScript) so each run is tagged and comparable in the dashboard.
  • Eval-ready setup: Register with optional eval_tags so prototype outputs are scored (e.g. tone, safety) without changing code later.
  • Framework integration: Use Auto Instrumentor for OpenAI (or manual tracing) so existing LLM calls are automatically traced.
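The ranking itself happens in the Prototype dashboard, but the underlying comparison can be sketched in plain Python. The run records and field names below are illustrative only, not the SDK's data model:

```python
# Illustrative run summaries: each prototype version, tagged via
# project_version_name, accumulates quality, cost, and latency metrics.
runs = [
    {"version": "openai-exp",    "quality": 0.82, "cost_usd": 0.004, "latency_s": 1.9},
    {"version": "openai-exp-v2", "quality": 0.88, "cost_usd": 0.006, "latency_s": 2.3},
]

# Rank by eval quality first, using cost as a tiebreaker.
ranked = sorted(runs, key=lambda r: (-r["quality"], r["cost_usd"]))
best = ranked[0]["version"]
```

In practice you would weigh quality against cost and latency rather than sorting on quality alone; the dashboard surfaces all three per version.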

How to

Install the packages

Install the core instrumentation package and the framework instrumentor for your LLM provider.

Python:

pip install fi-instrumentation-otel traceAI-openai

TypeScript:

npm install @traceai/fi-core @traceai/openai

Configure your environment

Set environment variables so your app can authenticate with Future AGI. You can find your API keys in your Future AGI account.

Python:

import os
os.environ["FI_API_KEY"] = "YOUR_API_KEY"
os.environ["FI_SECRET_KEY"] = "YOUR_SECRET_KEY"

TypeScript:

process.env.FI_API_KEY = "YOUR_API_KEY";
process.env.FI_SECRET_KEY = "YOUR_SECRET_KEY";
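A missing credential typically only surfaces later as an authentication error, so it can help to fail fast at startup. A minimal sketch; the check_fi_env helper is hypothetical, not part of the SDK:

```python
import os

# The two credentials the Future AGI SDK reads from the environment.
REQUIRED_VARS = ("FI_API_KEY", "FI_SECRET_KEY")

def check_fi_env():
    """Return the names of required variables that are unset or empty."""
    return [name for name in REQUIRED_VARS if not os.environ.get(name)]

os.environ["FI_API_KEY"] = "YOUR_API_KEY"
os.environ["FI_SECRET_KEY"] = "YOUR_SECRET_KEY"

missing = check_fi_env()
if missing:
    raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
```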

Register your prototype project

Call register() with your project name, version name (for comparing runs), and optional eval tags. Use ProjectType.EXPERIMENT for prototyping.

Python:

from fi_instrumentation import register, Transport
from fi_instrumentation.fi_types import ProjectType, EvalName, EvalTag, EvalTagType, EvalSpanKind, ModelChoices

trace_provider = register(
    project_type=ProjectType.EXPERIMENT,
    project_name="FUTURE_AGI",
    project_version_name="openai-exp",
    transport=Transport.HTTP,
    eval_tags=[
        EvalTag(
            eval_name=EvalName.TONE,
            value=EvalSpanKind.LLM,
            type=EvalTagType.OBSERVATION_SPAN,
            model=ModelChoices.TURING_LARGE,
            mapping={"input": "llm.input_messages"},
            custom_eval_name="<custom_eval_name2>",
        ),
    ],
)

TypeScript:

import { register, Transport, ProjectType, EvalName, EvalTag, EvalTagType, EvalSpanKind, ModelChoices } from "@traceai/fi-core";

const evalTag = await EvalTag.create({
  type: EvalTagType.OBSERVATION_SPAN,
  value: EvalSpanKind.LLM,
  eval_name: EvalName.CHUNK_ATTRIBUTION,
  custom_eval_name: "Chunk_Attribution",
  mapping: { "context": "raw.input", "output": "raw.output" },
  model: ModelChoices.TURING_SMALL
});

const tracerProvider = register({
    projectName: "FUTURE_AGI",
    projectType: ProjectType.EXPERIMENT,
    transport: Transport.HTTP,
    projectVersionName: "openai-exp",
    evalTags: [evalTag]
});
| Property (Python) | Property (TypeScript) | Description |
| --- | --- | --- |
| project_type | projectType | Use ProjectType.EXPERIMENT for Prototype. |
| project_name | projectName | Your project name. |
| project_version_name | projectVersionName | Optional. Version name for this prototype so you can compare runs. |
| eval_tags | evalTags | Optional. Evals to run on prototype outputs. |
| transport | transport | Optional. Transport.GRPC or Transport.HTTP. Defaults to HTTP. |

Note

Python uses snake_case; TypeScript uses camelCase for these properties.
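The mapping between the two naming styles is mechanical. A throwaway helper, not part of either SDK, makes the convention concrete:

```python
def to_camel(snake: str) -> str:
    """Convert a snake_case property name to its camelCase equivalent."""
    head, *rest = snake.split("_")
    return head + "".join(part.capitalize() for part in rest)

# e.g. to_camel("project_version_name") gives "projectVersionName"
```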

Instrument your project

Use one of:

  • Auto Instrumentor: Recommended; use Future AGI’s instrumentor for your framework (e.g. OpenAI).
  • Manual tracing: OpenTelemetry for custom setups.

Example: OpenAI (Auto Instrumentor): Instrument your client after registering. Traces will appear in the Prototype dashboard.

Python:

from traceai_openai import OpenAIInstrumentor
import openai

OpenAIInstrumentor().instrument(tracer_provider=trace_provider)

client = openai.OpenAI()
completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Write a one-sentence bedtime story about a unicorn."}]
)
print(completion.choices[0].message.content)

TypeScript:

import { OpenAIInstrumentation } from "@traceai/openai";
import { registerInstrumentations } from "@opentelemetry/instrumentation";
import { OpenAI } from "openai";

registerInstrumentations({
  instrumentations: [new OpenAIInstrumentation({})],
  tracerProvider: tracerProvider
});

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const completion = await client.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: "Write a one-sentence bedtime story about a unicorn." }]
});
console.log(completion.choices[0].message.content);
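For the manual-tracing path, the essence is wrapping each LLM call in a span that records a name, attributes, and duration. The context manager below is a plain-Python stand-in to show what a span captures; a real setup would use an OpenTelemetry tracer obtained from your registered provider instead:

```python
import time
from contextlib import contextmanager

@contextmanager
def llm_span(trace_log, name, **attributes):
    """Record the name, attributes, and wall-clock duration of the wrapped call."""
    start = time.perf_counter()
    try:
        yield
    finally:
        trace_log.append({
            "name": name,
            "attributes": attributes,
            "duration_s": time.perf_counter() - start,
        })

trace_log = []
with llm_span(trace_log, "chat.completion", model="gpt-4o"):
    pass  # the client.chat.completions.create(...) call would go here
```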

For more frameworks and options, see the Auto Instrumentation docs.

Optional next steps

After setting up your prototype, you can:

  • Configure evals: Define which evaluations run on your prototype outputs (EvalTag, mapping, model). Configure evals for prototype
  • Compare and choose winner: Rank versions by evals, cost, and latency, then promote the best. Choose winner
