Instrument with traceAI Helpers

Future AGI's traceAI library offers convenient abstractions to streamline your manual instrumentation process.

What it is

The traceAI helpers are a set of Python and JS/TS utilities that sit on top of OpenTelemetry and make manual instrumentation faster and more expressive. Instead of writing raw OTel boilerplate, you use FITracer, a Future AGI wrapper around the standard tracer, to decorate functions or wrap code blocks as typed spans (chain, agent, tool, LLM, retriever). Spans automatically capture inputs, outputs, and status; span kinds control how they render in the Future AGI UI.

Use cases

  • Function-level tracing — Decorate a function with @tracer.chain, @tracer.agent, or @tracer.tool and the entire call is captured as a span with automatic input/output.
  • Code block tracing — Wrap any code segment with tracer.start_as_current_span for precise control over what gets captured without instrumenting a whole function.
  • Typed spans — Use FI Span Kinds (chain, agent, tool, llm, retriever) so spans are rendered with the right icon and label in the dashboard.
  • Tool metadata — Attach tool name, description, and parameters to tool spans so the dashboard shows full tool call context.
  • Mixed workflows — Combine decorators (for complete functions) and context managers (for sub-operations) in the same codebase.
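The mixed-workflow pattern above can be sketched in runnable form. Note that StubTracer and StubSpan below are toy stand-ins that simply record spans to a list so the sketch runs anywhere; in real code you would use FITracer from fi-instrumentation-otel, which exposes the same set_input / set_output surface shown throughout this guide.

```python
# Conceptual sketch: a decorator traces a whole function while a
# context manager traces one sub-operation inside it. StubTracer is
# a toy stand-in for FITracer, not the library's implementation.
from contextlib import contextmanager
from functools import wraps

class StubSpan:
    def __init__(self, name, kind):
        self.name, self.kind = name, kind
        self.input = self.output = self.status = None
    def set_input(self, value):  self.input = value
    def set_output(self, value): self.output = value
    def set_status(self, value): self.status = value

class StubTracer:
    def __init__(self):
        self.finished = []  # spans, in the order they complete

    @contextmanager
    def start_as_current_span(self, name, fi_span_kind="chain"):
        span = StubSpan(name, fi_span_kind)
        try:
            yield span
        finally:
            self.finished.append(span)

    def agent(self, func):
        # Decorator style: input/output are captured automatically
        # from the function's arguments and return value.
        @wraps(func)
        def wrapper(*args, **kwargs):
            with self.start_as_current_span(func.__name__, fi_span_kind="agent") as span:
                span.set_input(args)
                result = func(*args, **kwargs)
                span.set_output(result)
                span.set_status("OK")
                return result
        return wrapper

tracer = StubTracer()

@tracer.agent
def run_agent(query: str) -> str:
    # Context-manager style: a fine-grained span around one sub-step.
    with tracer.start_as_current_span("normalize-query", fi_span_kind="chain") as span:
        span.set_input(query)
        normalized = query.strip().lower()
        span.set_output(normalized)
        span.set_status("OK")
    return f"answer for {normalized}"

print(run_agent("  What Is TraceAI?  "))
# The inner chain span finishes before the enclosing agent span:
print([(s.name, s.kind) for s in tracer.finished])
```

The ordering in `finished` mirrors what you see in a trace tree: child spans close before their parent.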

How to

Install the instrumentation package

pip install fi-instrumentation-otel
npm install @traceai/fi-core

Set up your tracer

Register your project and initialize a FITracer from the returned provider.

Python:

from fi_instrumentation import register, FITracer
from fi_instrumentation.fi_types import ProjectType

# Setup OTel via our register function
trace_provider = register(
    project_type=ProjectType.EXPERIMENT,
    project_name="FUTURE_AGI",
    project_version_name="openai-exp",
)

tracer = FITracer(trace_provider.get_tracer(__name__))

JS/TS:

const { context, SpanStatusCode } = require("@opentelemetry/api");
const { AsyncLocalStorageContextManager } = require("@opentelemetry/context-async-hooks");
const { register, ProjectType } = require("@traceai/fi-core");

context.setGlobalContextManager(new AsyncLocalStorageContextManager());

const tracerProvider = register({
    projectName: "manual-instrumentation-example",
    projectType: ProjectType.OBSERVE,
    sessionName: "manual-instrumentation-example-session"
});

const tracer = tracerProvider.getTracer("manual-instrumentation-example");

Instrument with spans

Choose the span kind that matches your operation, then pick your instrumentation style.

Use chain spans for general logic, processing pipelines, and code blocks.

Python:

from opentelemetry.trace.status import Status, StatusCode

with tracer.start_as_current_span(
    "my-span-name",
    fi_span_kind="chain",
) as span:
    span.set_input("input")
    span.set_output("output")
    span.set_status(Status(StatusCode.OK))

JS/TS:

tracer.startActiveSpan("my-span-name", { attributes: { "fi.span.kind": "chain" } }, (span) => {
    span.setAttribute("input", "input");
    span.setAttribute("output", "output");
    span.setStatus({ code: SpanStatusCode.OK });
    span.end();
});

Plain text output:

@tracer.chain
def decorated_chain_with_plain_text_output(input: str) -> str:
    return "output"

decorated_chain_with_plain_text_output("input")

JSON output:

from typing import Any, Dict

@tracer.chain
def decorated_chain_with_json_output(input: str) -> Dict[str, Any]:
    return {"output": "output"}

decorated_chain_with_json_output("input")

Override span name:

@tracer.chain(name="decorated-chain-with-overridden-name")
def this_name_should_be_overridden(input: str) -> Dict[str, Any]:
    return {"output": "output"}

this_name_should_be_overridden("input")

Use agent spans for orchestrator functions — typically a top-level or near top-level span.

Python:

with tracer.start_as_current_span(
    "agent-span-with-plain-text-io",
    fi_span_kind="agent",
) as span:
    span.set_input("input")
    span.set_output("output")
    span.set_status(Status(StatusCode.OK))

JS/TS:

tracer.startActiveSpan("agent-span-with-plain-text-io", { attributes: { "fi.span.kind": "agent" } }, (span) => {
    span.setAttribute("input", "input");
    span.setAttribute("output", "output");
    span.setStatus({ code: SpanStatusCode.OK });
    span.end();
});

Decorator:

@tracer.agent
def decorated_agent(input: str) -> str:
    return "output"

decorated_agent("input")

Use tool spans for tool calls. Attach name, description, and parameters for full call context in the dashboard.

Python:

with tracer.start_as_current_span(
    "tool-span",
    fi_span_kind="tool",
) as span:
    span.set_input("input")
    span.set_output("output")
    span.set_tool(
        name="tool-name",
        description="tool-description",
        parameters={"input": "input"},
    )
    span.set_status(Status(StatusCode.OK))

JS/TS:

tracer.startActiveSpan("tool-span", { attributes: { "fi.span.kind": "tool" } }, (span) => {
    span.setAttribute("input", "input");
    span.setAttribute("output", "output");
    span.setAttribute("tool.name", "tool-name");
    span.setAttribute("tool.description", "tool-description");
    span.setAttribute("tool.parameters", JSON.stringify({"input": "input"}));
    span.setStatus({ code: SpanStatusCode.OK });
    span.end();
});

Decorator:

@tracer.tool(
    name="tool-name",
    description="tool-description",
    parameters={"input": "input"},
)
def decorated_tool(input: str) -> str:
    return "output"

decorated_tool("input")

Use LLM spans for direct LLM calls.

Python:

with tracer.start_as_current_span(
    "llm-span",
    fi_span_kind="llm",
) as span:
    span.set_input("input")
    span.set_output("output")
    span.set_status(Status(StatusCode.OK))

JS/TS:

tracer.startActiveSpan("llm-span", { attributes: { "fi.span.kind": "llm" } }, (span) => {
    span.setAttribute("input", "input");
    span.setAttribute("output", "output");
    span.setStatus({ code: SpanStatusCode.OK });
    span.end();
});

Decorator:

@tracer.llm
def decorated_llm(input: str) -> str:
    return "output"

decorated_llm("input")

Use retriever spans for document retrieval operations.

Python:

with tracer.start_as_current_span(
    "retriever-span",
    fi_span_kind="retriever",
) as span:
    span.set_input("input")
    span.set_output("output")
    span.set_status(Status(StatusCode.OK))

JS/TS:

tracer.startActiveSpan("retriever-span", { attributes: { "fi.span.kind": "retriever" } }, (span) => {
    span.setAttribute("input", "input");
    span.setAttribute("output", "output");
    span.setStatus({ code: SpanStatusCode.OK });
    span.end();
});

Decorator:

@tracer.retriever
def decorated_retriever(input: str) -> str:
    return "output"

decorated_retriever("input")

Key concepts

  • FITracer — Future AGI wrapper around the standard OTel tracer. Adds set_input() / set_output() / set_tool() on spans, automatic context injection, and typed decorators (@tracer.chain, @tracer.agent, @tracer.tool, @tracer.llm, @tracer.retriever).
  • FI Span Kinds — Typed labels that control how spans are rendered in the Future AGI UI. Set via fi_span_kind in Python or fi.span.kind attribute in JS/TS.
  • Decorators — Wrap entire functions; input/output/status are captured automatically from function args and return values.
  • Context managers — Wrap specific code blocks; you must call set_input(), set_output(), and set_status() manually.
  • set_tool() — Sets tool.name, tool.description, and tool.parameters on a tool span for full call context in the dashboard.
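The attributes that set_tool() produces can be illustrated with a small runnable sketch. It mirrors the tool.* attributes set explicitly in the JS tool-span example above (tool.name, tool.description, and tool.parameters as a JSON string); the helper function and the plain attribute dict here are toy illustrations, not the library's implementation.

```python
# Conceptual sketch of the attributes a tool span carries. A plain
# dict stands in for the real span's attribute map.
import json

def set_tool(span_attributes: dict, *, name: str, description: str, parameters: dict) -> None:
    # Mirrors the tool.* attributes set in the JS example above.
    span_attributes["tool.name"] = name
    span_attributes["tool.description"] = description
    # Parameters are JSON-serialized so the dashboard can show the
    # full call context.
    span_attributes["tool.parameters"] = json.dumps(parameters)

attrs = {"fi.span.kind": "tool"}
set_tool(
    attrs,
    name="search",
    description="Web search tool",
    parameters={"query": "string"},
)
print(attrs["tool.parameters"])
```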

FI Span Kinds reference:

Span Kind   Use
chain       General logic operations, functions, or code blocks
llm         Making LLM calls
tool        Completing tool calls
retriever   Retrieving documents
embedding   Generating embeddings
agent       Agent invocations, typically a top-level or near top-level span
reranker    Reranking retrieved context
guardrail   Guardrail checks
evaluator   Evaluators
unknown     Unknown
