Set Up Prototype

Configure your environment, register your prototype project, and instrument your app so traces and evals appear in the Prototype dashboard.

What it is

Setting up a prototype means connecting your app to Future AGI so that LLM requests are traced and (optionally) evaluated. You configure API keys, call register() with your project name and version (so you can compare runs), then instrument your code—for example with the OpenAI Auto Instrumentor—so runs show up in the Prototype dashboard. Once set up, you can attach evals, compare versions by metrics, and promote a winner to production.


Use cases

  • First-time prototype — Get your project and version registered and start sending traces so you can compare different prompts or models.
  • Comparing versions — Use project_version_name (or equivalent) so each run is tagged and comparable in the dashboard.
  • Eval-ready setup — Register with optional eval_tags so prototype outputs are scored (e.g. tone, safety) without changing code later.
  • Framework integration — Use Auto Instrumentor for OpenAI (or manual tracing) so existing LLM calls are automatically traced.

How to

Configure your environment

Set environment variables so your app can authenticate with Future AGI. You can generate API keys from your Future AGI dashboard.

Python:

import os
os.environ["FI_API_KEY"] = "YOUR_API_KEY"
os.environ["FI_SECRET_KEY"] = "YOUR_SECRET_KEY"

TypeScript:

process.env.FI_API_KEY = "YOUR_API_KEY";
process.env.FI_SECRET_KEY = "YOUR_SECRET_KEY";
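Registration will fail without these keys, so it can help to fail fast before calling register(). A minimal sketch (the check_fi_env helper below is illustrative, not part of the SDK):

```python
import os

def check_fi_env(required=("FI_API_KEY", "FI_SECRET_KEY")):
    """Return the names of any required Future AGI variables that are unset."""
    return [name for name in required if not os.environ.get(name)]

# Placeholder values for illustration; in practice these come from your shell or secrets store.
os.environ["FI_API_KEY"] = "YOUR_API_KEY"
os.environ["FI_SECRET_KEY"] = "YOUR_SECRET_KEY"

missing = check_fi_env()
if missing:
    raise RuntimeError(f"Missing environment variables: {missing}")
```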

Register your prototype project

Call register() with your project name, version name (for comparing runs), and optional eval tags. Use ProjectType.EXPERIMENT for prototyping.

Python:

from fi_instrumentation import register, Transport
from fi_instrumentation.fi_types import ProjectType, EvalName, EvalTag, EvalTagType, EvalSpanKind, ModelChoices

trace_provider = register(
    project_type=ProjectType.EXPERIMENT,
    project_name="FUTURE_AGI",
    project_version_name="openai-exp",
    transport=Transport.HTTP,
    eval_tags=[
        EvalTag(
            eval_name=EvalName.TONE,
            value=EvalSpanKind.LLM,
            type=EvalTagType.OBSERVATION_SPAN,
            model=ModelChoices.TURING_LARGE,
            mapping={"input": "llm.input_messages"},
            custom_eval_name="<custom_eval_name2>",
        ),
    ],
)
TypeScript:

import { register, Transport, ProjectType, EvalName, EvalTag, EvalTagType, EvalSpanKind, ModelChoices } from "@traceai/fi-core";

const tracerProvider = await register({
    projectName: "FUTURE_AGI",
    projectType: ProjectType.EXPERIMENT,
    transport: Transport.HTTP,
    projectVersionName: "openai-exp",
    evalTags: [
      await EvalTag.create({
        type: EvalTagType.OBSERVATION_SPAN,
        value: EvalSpanKind.LLM,
        eval_name: EvalName.CHUNK_ATTRIBUTION,
        custom_eval_name: "Chunk_Attribution",
        mapping: { "context": "raw.input", "output": "raw.output" },
        model: ModelChoices.TURING_SMALL
      })
    ]
});
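Since project_version_name is what the dashboard uses to tell runs apart, a common pattern is to generate a unique, sortable version name per run rather than hard-coding one. A minimal sketch, assuming a UTC timestamp suffix is acceptable (version_name is a hypothetical helper, not an SDK function):

```python
from datetime import datetime, timezone

def version_name(base: str) -> str:
    """Append a sortable UTC timestamp, e.g. 'openai-exp-20250601T120000'."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%S")
    return f"{base}-{stamp}"

# Pass the result as project_version_name so each run is tagged distinctly.
run_version = version_name("openai-exp")
```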
| Property (Python) | Property (TypeScript) | Description |
| --- | --- | --- |
| project_type | projectType | Use ProjectType.EXPERIMENT for Prototype. |
| project_name | projectName | Your project name. |
| project_version_name | projectVersionName | (optional) Version id for this prototype so you can compare runs. |
| eval_tags | evalTags | (optional) Evals to run on prototype outputs. |
| transport | transport | (optional) GRPC or HTTP. |

Note

Python uses snake_case; TypeScript uses camelCase for these properties.
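The mapping between the two naming conventions is mechanical. As an illustration (to_camel is not an SDK function):

```python
def to_camel(name: str) -> str:
    """Convert a snake_case property name to its camelCase equivalent."""
    head, *rest = name.split("_")
    return head + "".join(word.capitalize() for word in rest)

# e.g. the Python property "project_version_name" becomes "projectVersionName" in TypeScript.
```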

Instrument your project

Use one of:

  • Auto Instrumentor — Recommended; use Future AGI’s instrumentor for your framework (e.g. OpenAI).
  • Manual tracing — OpenTelemetry for custom setups.

Example: OpenAI (Auto Instrumentor) — Install the package, then instrument and run your client as usual. Traces will appear in the Prototype dashboard.

Python:

pip install traceAI-openai

TypeScript:

npm install @traceai/openai
Python:

import os

from openai import OpenAI
from traceai_openai import OpenAIInstrumentor

OpenAIInstrumentor().instrument(tracer_provider=trace_provider)

os.environ["OPENAI_API_KEY"] = "your-openai-api-key"
client = OpenAI()
completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Write a one-sentence bedtime story about a unicorn."}]
)
print(completion.choices[0].message.content)
TypeScript:

import { OpenAIInstrumentation } from "@traceai/openai";
import { registerInstrumentations } from "@opentelemetry/instrumentation";
const openaiInstrumentation = new OpenAIInstrumentation({});
registerInstrumentations({ instrumentations: [openaiInstrumentation], tracerProvider: tracerProvider });

import { OpenAI } from "openai";
const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const completion = await client.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: "Write a one-sentence bedtime story about a unicorn." }]
});
console.log(completion.choices[0].message.content);

For more frameworks and options, see the Auto Instrumentation docs.

Optional next steps

After setting up your prototype, you can:

  • Configure evals — Define which evaluations run on your prototype outputs (EvalTag, mapping, model). Configure evals for prototype
  • Compare and choose winner — Rank versions by evals, cost, and latency, then promote the best. Choose winner
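Once eval scores, cost, and latency are collected per version, choosing a winner reduces to a weighted comparison. An illustrative sketch with made-up numbers and weights (not the Future AGI API; the dashboard does this ranking for you):

```python
# Hypothetical per-version metrics gathered from prototype runs.
runs = [
    {"version": "openai-exp-a", "eval_score": 0.91, "cost_usd": 0.020, "latency_s": 1.8},
    {"version": "openai-exp-b", "eval_score": 0.88, "cost_usd": 0.008, "latency_s": 0.9},
]

def rank(runs, w_cost=10.0, w_latency=0.1):
    """Higher eval score is better; cost and latency count as penalties."""
    return sorted(
        runs,
        key=lambda r: r["eval_score"] - w_cost * r["cost_usd"] - w_latency * r["latency_s"],
        reverse=True,
    )

winner = rank(runs)[0]["version"]
```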

