Set up observability
Instrument your application and send traces to an Observe project so you can monitor LLM calls, latency, and cost in one place.
About
This guide shows how to connect your application to Future AGI so LLM calls are captured in the Observe dashboard. Register a project, instrument your app, and every request appears automatically with its inputs, outputs, cost, latency, and token usage.
When to use
- First-time setup: Get traces flowing into the Observe dashboard so you can start monitoring production LLM calls.
- Production monitoring: See latency, cost, and token usage for every LLM call in one place instead of scraping logs.
- Debugging: Tie a user report or failure to a specific trace and span so you can reproduce and fix issues.
- Baseline for other Observe features: Sessions, evals, user tracking, and alerts all require traces to be set up first.
How to
Install the packages
Install the core instrumentation package and the framework instrumentor for your LLM provider.
```shell
# Python
pip install fi-instrumentation-otel traceAI-openai
```

```shell
# TypeScript
npm install @traceai/fi-core @traceai/openai
```

Configure your environment
Set environment variables so the SDK can connect to Future AGI. Get your API keys from the dashboard.
```python
import os

os.environ["FI_API_KEY"] = "YOUR_API_KEY"
os.environ["FI_SECRET_KEY"] = "YOUR_SECRET_KEY"
```

```typescript
process.env.FI_API_KEY = "YOUR_API_KEY";
process.env.FI_SECRET_KEY = "YOUR_SECRET_KEY";
```

Register your Observe project
Call register with project_type set to Observe and a project_name. Optionally set transport (e.g. GRPC or HTTP).
```python
from fi_instrumentation import register, Transport
from fi_instrumentation.fi_types import ProjectType

trace_provider = register(
    project_type=ProjectType.OBSERVE,
    project_name="FUTURE_AGI",
    transport=Transport.GRPC,
)
```

```typescript
import { register, ProjectType } from "@traceai/fi-core";

const traceProvider = register({
  project_type: ProjectType.OBSERVE,
  project_name: "FUTURE_AGI",
});
```

Add instrumentation
Use one of two options:
- Auto Instrumentor: For supported frameworks (e.g. OpenAI). Use Future AGI’s Auto Instrumentation; recommended for most apps.
- Manual tracing: For custom spans, use OpenTelemetry. Learn more →
Example with the OpenAI instrumentor: install the package, instrument with your trace provider, then use the OpenAI client as usual. Traces appear in your Observe dashboard.
```shell
# Python
pip install traceAI-openai
```

```shell
# TypeScript
npm install @traceai/openai
```

```python
from traceai_openai import OpenAIInstrumentor

OpenAIInstrumentor().instrument(tracer_provider=trace_provider)
```

```typescript
import { OpenAIInstrumentation } from "@traceai/openai";

const openaiInstrumentation = new OpenAIInstrumentation({});
```

```python
import os
from openai import OpenAI

os.environ["OPENAI_API_KEY"] = "your-openai-api-key"

client = OpenAI()
completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Write a one-sentence bedtime story about a unicorn."}],
)
print(completion.choices[0].message.content)
```

```typescript
import { OpenAI } from "openai";

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const completion = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Write a one-sentence bedtime story about a unicorn." }],
});
console.log(completion.choices[0].message.content);
```

For supported frameworks and more options, see the Auto Instrumentation page.