Setup Observability

Set up Future AGI Observe for production monitoring. Configure auto-instrumented tracing for OpenAI, Anthropic, LangChain, and other LLM frameworks.

About

Observe is Future AGI’s observability product. It gives you full visibility into how your AI application behaves in production by capturing every LLM call, tool use, and agent decision as a trace. You can monitor performance, detect anomalies, track costs, and debug issues without changing your application logic.

Observe supports auto-instrumentation for OpenAI, Anthropic, LangChain, LlamaIndex, CrewAI and 30+ other frameworks. By the end of this guide, you’ll have traces flowing into your Future AGI dashboard.


Install the SDK

Install the Future AGI instrumentation package and the OpenAI integration (used in this example).

Python:

pip install fi-instrumentation traceAI-openai openai

TypeScript:

npm install @traceai/fi-core @traceai/openai openai

Configure Your Environment

Set up your environment variables to connect to Future AGI. Get your API keys here.

Python:

import os
os.environ["FI_API_KEY"] = "YOUR_API_KEY"
os.environ["FI_SECRET_KEY"] = "YOUR_SECRET_KEY"
os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"
TypeScript:

process.env.FI_API_KEY = "YOUR_API_KEY";
process.env.FI_SECRET_KEY = "YOUR_SECRET_KEY";
process.env.OPENAI_API_KEY = "YOUR_OPENAI_API_KEY";
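Hard-coding keys is fine for a quick test, but in practice it helps to fail fast when a variable is unset. A minimal stdlib-only sketch; the `missing_env` helper below is our own convenience function, not part of the SDK:

```python
import os

REQUIRED_VARS = ["FI_API_KEY", "FI_SECRET_KEY", "OPENAI_API_KEY"]

def missing_env(required=REQUIRED_VARS):
    """Return the names of required variables that are unset or empty."""
    return [name for name in required if not os.environ.get(name)]

# Example: set placeholders, then verify nothing is missing
for name in REQUIRED_VARS:
    os.environ.setdefault(name, "YOUR_" + name)
```

Calling `missing_env()` at startup and raising on a non-empty result gives a clearer error than a failed request deep inside the SDK.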

Register Your Observe Project

Call register to create a tracer provider that routes traces to your Observe project.

Python:

from fi_instrumentation import register, Transport
from fi_instrumentation.fi_types import ProjectType

trace_provider = register(
    project_type=ProjectType.OBSERVE,
    project_name="my-llm-app",
    transport=Transport.GRPC,
)
TypeScript:

import { register, ProjectType } from "@traceai/fi-core";

const traceProvider = register({
    project_type: ProjectType.OBSERVE,
    project_name: "my-llm-app",
});

Configuration Parameters:

  • project_type: Set to ProjectType.OBSERVE so traces are sent to an Observe project
  • project_name: A descriptive name for your project
  • transport (optional): The transport used to export traces. The available options are Transport.GRPC and Transport.HTTP.
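If you want to switch transports between environments without code changes, you can resolve the choice from an environment variable. A small sketch; the FI_TRANSPORT variable name is our own convention, not something the SDK reads:

```python
import os

def pick_transport(default="GRPC"):
    """Resolve a transport name from FI_TRANSPORT (a hypothetical env var).

    Returns "GRPC" or "HTTP"; raises on anything else.
    """
    value = os.environ.get("FI_TRANSPORT", default).upper()
    if value not in {"GRPC", "HTTP"}:
        raise ValueError(f"unsupported transport: {value!r}")
    return value
```

Map the returned name onto Transport.GRPC or Transport.HTTP yourself when calling register.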

Instrument and Run

There are two ways to implement tracing in your project:

  1. Auto Instrumentor: Automatically captures all LLM calls. Recommended for most use cases.
  2. Manual Tracing: Gives you full control over what gets traced using OpenTelemetry. Learn more
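Conceptually, an auto-instrumentor wraps each client method so a span is recorded around every call, which is why no application code changes are needed. A stdlib-only sketch of that idea (not the actual TraceAI implementation, which emits real OpenTelemetry spans):

```python
import functools
import time

def instrument(fn, spans):
    """Wrap fn so every call appends a (name, duration_seconds) record."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            spans.append((fn.__name__, time.perf_counter() - start))
    return wrapper

# Stand-in for an LLM client method
def fake_completion(prompt):
    return f"echo: {prompt}"

spans = []
traced_completion = instrument(fake_completion, spans)
result = traced_completion("hello")
```

The real instrumentor does this patching for you across the whole client, and also captures inputs, outputs, token counts, and errors on each span.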

Here’s a complete example using auto-instrumentation with OpenAI:

Python:

from traceai_openai import OpenAIInstrumentor
from openai import OpenAI

# Enable auto-instrumentation
OpenAIInstrumentor().instrument(tracer_provider=trace_provider)

# Use OpenAI as normal
client = OpenAI()

completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": "Write a one-sentence bedtime story about a unicorn."
        }
    ]
)

print(completion.choices[0].message.content)
TypeScript:

import { OpenAIInstrumentation } from "@traceai/openai";
import { OpenAI } from "openai";

// Enable auto-instrumentation
const openaiInstrumentation = new OpenAIInstrumentation({
    tracerProvider: traceProvider,
});

// Use OpenAI as normal
const client = new OpenAI();

const completion = await client.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: "Write a one-sentence bedtime story about a unicorn." }],
});

console.log(completion.choices[0].message.content);

View Your Traces

Open your Future AGI dashboard and navigate to the Observe tab. You should see your project listed with the trace from the OpenAI call above.

Each trace shows the full request and response, latency, token usage, and cost. From here you can set up alerts, track sessions, and add inline evaluations.
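The dashboard computes cost for you, but you can also estimate it yourself from the token counts on the response object. A rough sketch; the per-token prices below are placeholders for illustration, not current OpenAI pricing:

```python
def estimate_cost(prompt_tokens, completion_tokens,
                  input_price_per_token=2.5e-6,
                  output_price_per_token=1.0e-5):
    """Estimate request cost in dollars from token counts.

    The default prices are placeholder values; check your provider's
    current pricing before relying on the result.
    """
    return (prompt_tokens * input_price_per_token
            + completion_tokens * output_price_per_token)

# The token counts come from completion.usage on the OpenAI response,
# e.g. completion.usage.prompt_tokens and completion.usage.completion_tokens.
cost = estimate_cost(prompt_tokens=1000, completion_tokens=200)
```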
