Setup Observability

Set up Future AGI Observe for production monitoring. Configure auto-instrumented tracing for OpenAI, Anthropic, LangChain, and other LLM frameworks.

What is it?

Observe is Future AGI’s observability product. It gives you full visibility into how your AI application behaves in production — capturing every LLM call, tool use, and agent decision as a trace. You can monitor performance, detect anomalies, track costs, and debug issues without changing your application logic. Observe integrates with OpenAI, Anthropic, LangChain, and other frameworks via auto-instrumentation.


Configure Your Environment

Set up your environment variables to connect to Future AGI. You can generate your API keys from the Future AGI dashboard.

import os
os.environ["FI_API_KEY"] = "YOUR_API_KEY"
os.environ["FI_SECRET_KEY"] = "YOUR_SECRET_KEY"
process.env.FI_API_KEY = "YOUR_API_KEY";
process.env.FI_SECRET_KEY = "YOUR_SECRET_KEY";
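Before registering your project, it can help to fail fast if credentials are missing. The check below is a hypothetical startup helper, not part of the Future AGI SDK:

```python
import os

# Hypothetical startup check (not part of the Future AGI SDK):
# fail fast if either required credential is unset.
REQUIRED = ("FI_API_KEY", "FI_SECRET_KEY")

def check_credentials() -> None:
    missing = [name for name in REQUIRED if not os.environ.get(name)]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")

os.environ["FI_API_KEY"] = "YOUR_API_KEY"
os.environ["FI_SECRET_KEY"] = "YOUR_SECRET_KEY"
check_credentials()  # raises RuntimeError if either key is unset
```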

Register Your Observe Project

Register your project with the necessary configuration.

from fi_instrumentation import register, Transport
from fi_instrumentation.fi_types import ProjectType

# Setup OTel via our register function
trace_provider = register(
    project_type=ProjectType.OBSERVE,  
    project_name="FUTURE_AGI",            # Your project name
    transport=Transport.GRPC,             # Transport mechanism for your traces
)
import { register, ProjectType } from "@traceai/fi-core";

const traceProvider = register({
    project_type: ProjectType.OBSERVE,
    project_name: "FUTURE_AGI"
});

Configuration Parameters:

  • project_type: Set to ProjectType.OBSERVE for an Observe project
  • project_name: A descriptive name for your project
  • transport (optional): The transport mechanism for exporting your traces. The available options are Transport.GRPC and Transport.HTTP.
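If you want to switch transports per deployment, one pattern is to read the choice from an environment variable. The `FI_TRANSPORT` variable and the helper below are hypothetical (not an SDK feature); the sketch assumes the value names match the SDK's Transport enum members:

```python
import os

# Hypothetical helper: pick the trace transport from an env var,
# defaulting to gRPC. FI_TRANSPORT is not a real SDK variable;
# it is an illustration of a deployment-level switch.
def choose_transport() -> str:
    value = os.environ.get("FI_TRANSPORT", "GRPC").upper()
    if value not in {"GRPC", "HTTP"}:
        raise ValueError(f"Unsupported transport: {value}")
    return value
```

You would then map the returned string onto `Transport.GRPC` or `Transport.HTTP` when calling `register`.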

Instrument Your Project

There are two ways to implement tracing in your project:

  1. Auto Instrumentor: Instrument your project with Future AGI's Auto Instrumentor. Recommended for most use cases.
  2. Manual Tracing: Manually trace your project with OpenTelemetry. Useful for more customized tracing. Learn more →
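To give a feel for what manual tracing involves, here is a toy, stdlib-only sketch of a span context manager. In real code you would use OpenTelemetry's tracer (e.g. `tracer.start_as_current_span`); everything below is illustrative:

```python
import time
from contextlib import contextmanager

@contextmanager
def span(name, spans):
    # Toy stand-in for an OpenTelemetry span: records the name,
    # user-set attributes, and wall-clock duration, then appends
    # the finished record to the given list.
    record = {"name": name, "attributes": {}}
    start = time.perf_counter()
    try:
        yield record
    finally:
        record["duration_s"] = time.perf_counter() - start
        spans.append(record)

spans = []
with span("llm.call", spans) as s:
    s["attributes"]["model"] = "gpt-4o"  # annotate the span like OTel attributes

print(spans[0]["name"])  # llm.call
```

Manual tracing gives you this level of control over span names, attributes, and nesting, at the cost of writing the instrumentation yourself.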

Example: Instrumenting with OpenAI

First, install the traceAI openai package:

pip install traceAI-openai
npm install @traceai/openai

Then instrument your project:

from traceai_openai import OpenAIInstrumentor

OpenAIInstrumentor().instrument(tracer_provider=trace_provider)
import { registerInstrumentations } from "@opentelemetry/instrumentation";
import { OpenAIInstrumentation } from "@traceai/openai";

// Create the instrumentation and register it against your trace provider
const openaiInstrumentation = new OpenAIInstrumentation();
registerInstrumentations({
    instrumentations: [openaiInstrumentation],
    tracerProvider: traceProvider,
});

Now use OpenAI as normal and your requests will be automatically traced:

import os

from openai import OpenAI

os.environ["OPENAI_API_KEY"] = "your-openai-api-key"

client = OpenAI()

completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": "Write a one-sentence bedtime story about a unicorn."
        }
    ]
)

print(completion.choices[0].message.content)
import { OpenAI } from "openai";

const client = new OpenAI({
    apiKey: process.env.OPENAI_API_KEY,
});

const completion = await client.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: "Write a one-sentence bedtime story about a unicorn." }],
});

console.log(completion.choices[0].message.content);

To learn more about supported frameworks and instrumentation options, visit our Auto Instrumentation documentation.
