Set up observability

Instrument your application and send traces to an Observe project so you can monitor LLM calls, latency, and cost in one place.

What it is

Observability setup connects your application to Observe. You configure API credentials, register an Observe project (name and type), and add instrumentation, either via the auto-instrumentor for supported SDKs (e.g. OpenAI) or via OpenTelemetry for custom tracing. Once configured, your LLM requests are sent as traces to Future AGI and appear in the Observe dashboard, where you can inspect runs, attach evals, and set alerts.

Use cases

  • Production monitoring — See latency, cost, and token usage for LLM calls in one place instead of scraping logs.
  • Debugging — Tie a user report or failure to a specific trace and span so you can reproduce and fix issues.
  • Quality and evals — Once traces are flowing, attach evaluations (e.g. hallucination, bias) and run them on historic or continuous data.
  • Sessions and alerts — Use the same Observe project for session grouping and threshold-based alerts.
  • First step — Setting up observability is the baseline before using other Observe features (sessions, evals, user dashboard, voice).

How to

Configure your environment

Set environment variables so the SDK can connect to Future AGI. Get your API keys from the dashboard.

Python:

```python
import os

os.environ["FI_API_KEY"] = "YOUR_API_KEY"
os.environ["FI_SECRET_KEY"] = "YOUR_SECRET_KEY"
```

TypeScript:

```typescript
process.env.FI_API_KEY = "YOUR_API_KEY";
process.env.FI_SECRET_KEY = "YOUR_SECRET_KEY";
```
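Since the SDK reads these variables at runtime, it can help to fail fast when they are missing. A minimal sketch; the `require_env` helper is illustrative, not part of the SDK:

```python
import os

def require_env(*names):
    """Raise early if any required credential is missing from the environment."""
    missing = [n for n in names if not os.environ.get(n)]
    if missing:
        raise RuntimeError("Missing environment variables: " + ", ".join(missing))
    return [os.environ[n] for n in names]

# Illustrative placeholders; real keys would come from your deployment environment.
os.environ.setdefault("FI_API_KEY", "YOUR_API_KEY")
os.environ.setdefault("FI_SECRET_KEY", "YOUR_SECRET_KEY")
api_key, secret_key = require_env("FI_API_KEY", "FI_SECRET_KEY")
```

Checking credentials once at startup gives a clear error instead of a failed export deep inside the tracing pipeline.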

Register your Observe project

Call register with project_type set to Observe and a project_name. Optionally set transport (e.g. GRPC or HTTP).

Python:

```python
from fi_instrumentation import register, Transport
from fi_instrumentation.fi_types import ProjectType

trace_provider = register(
    project_type=ProjectType.OBSERVE,
    project_name="FUTURE_AGI",
    transport=Transport.GRPC,
)
```

TypeScript:

```typescript
import { register, ProjectType } from "@traceai/fi-core";

const traceProvider = register({
    project_type: ProjectType.OBSERVE,
    project_name: "FUTURE_AGI",
});
```

Add instrumentation

Use one of two options: auto-instrumentation for supported SDKs (such as OpenAI), or manual tracing with OpenTelemetry for custom code paths.

Example with the OpenAI instrumentor: install the package, instrument with your trace provider, then use the OpenAI client as usual. Traces appear in your Observe dashboard.

Install the package:

Python:

```shell
pip install traceAI-openai
```

TypeScript:

```shell
npm install @traceai/openai
```

Instrument with your trace provider:

Python:

```python
from traceai_openai import OpenAIInstrumentor

OpenAIInstrumentor().instrument(tracer_provider=trace_provider)
```

TypeScript:

```typescript
import { OpenAIInstrumentation } from "@traceai/openai";

const openaiInstrumentation = new OpenAIInstrumentation({});
```

Then use the OpenAI client as usual:

Python:

```python
import os
from openai import OpenAI

os.environ["OPENAI_API_KEY"] = "your-openai-api-key"
client = OpenAI()

completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Write a one-sentence bedtime story about a unicorn."}],
)
print(completion.choices[0].message.content)
```

TypeScript:

```typescript
import { OpenAI } from "openai";

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const completion = await client.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: "Write a one-sentence bedtime story about a unicorn." }],
});
console.log(completion.choices[0].message.content);
```
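Under the hood, the instrumentor records one span per LLM call, carrying a name, latency, and model metadata. The stdlib-only sketch below mimics that shape locally; `LocalSpan` and `traced_call` are illustrative names, not part of the traceAI packages:

```python
import time
from dataclasses import dataclass, field

@dataclass
class LocalSpan:
    # Roughly what a trace span carries for one LLM call.
    name: str
    attributes: dict = field(default_factory=dict)
    duration_ms: float = 0.0

def traced_call(name, fn, **attributes):
    """Run fn(), timing it and recording attributes, like an instrumented call."""
    span = LocalSpan(name=name, attributes=dict(attributes))
    start = time.perf_counter()
    result = fn()
    span.duration_ms = (time.perf_counter() - start) * 1000.0
    return span, result

# Example: wrap a stand-in for the chat completion call above.
span, story = traced_call(
    "openai.chat.completions.create",
    lambda: "The unicorn drifted off to sleep under a blanket of stars.",
    model="gpt-4o",
)
```

In production the auto-instrumentor does this for you and exports the spans to Observe; manual OpenTelemetry tracing follows the same span-per-operation idea.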

For supported frameworks and more options, see the Auto Instrumentation page.


What you can do next

With traces flowing, you can group them into sessions, attach evals such as hallucination or bias checks, and configure threshold-based alerts in the same Observe project.
