Tracing

Set up OpenTelemetry tracing across Python, TypeScript, Java, and C#. Auto-instrument 45+ frameworks or create custom spans with FITracer.

📝 TL;DR
  • register() sets up the tracer provider in two lines, in every language
  • Auto-instrument with traceai-* packages (45+ frameworks) or create custom spans with FITracer
  • Context helpers attach session, user, metadata, and tags to every span in a block
  • TraceConfig controls privacy masking; PII redaction automatically covers six data types

The pattern is the same across all four languages: call register() once to set up the provider, then either auto-instrument your frameworks or use FITracer for custom spans. LLM calls, retrieval steps, and agent actions get captured as OpenTelemetry spans and sent to your dashboard.

Note

Requires FI_API_KEY and FI_SECRET_KEY in your environment. For conceptual background on traces, spans, and attributes, see the Tracing guide.
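A missing key surfaces as a confusing export failure at startup, so it can help to check credentials before calling register(). The helper below is illustrative only, not part of the SDK:

```python
import os

def missing_fi_credentials(env=os.environ):
    """Return the names of required credential variables that are unset."""
    return [k for k in ("FI_API_KEY", "FI_SECRET_KEY") if not env.get(k)]

# Call before register() to fail fast with a clear message:
# if missing_fi_credentials():
#     raise RuntimeError(f"Set {missing_fi_credentials()} before tracing")
```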

Quick Example

pip install fi-instrumentation-otel traceai-openai
from fi_instrumentation import register
from fi_instrumentation.fi_types import ProjectType
from traceai_openai import OpenAIInstrumentor

# 1. Register the tracer provider
trace_provider = register(
    project_name="my-project",
    project_type=ProjectType.OBSERVE,
)

# 2. Instrument your framework
OpenAIInstrumentor().instrument(tracer_provider=trace_provider)

# 3. Use OpenAI as normal - all calls are now traced
import openai
client = openai.OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What is Python?"}],
)
npm install @traceai/openai @traceai/fi-core @opentelemetry/instrumentation
import { register, ProjectType } from "@traceai/fi-core";
import { OpenAIInstrumentation } from "@traceai/openai";
import { registerInstrumentations } from "@opentelemetry/instrumentation";
import OpenAI from "openai";

const tracerProvider = register({
  projectName: "my-project",
  projectType: ProjectType.OBSERVE,
});

registerInstrumentations({
  tracerProvider,
  instrumentations: [new OpenAIInstrumentation()],
});

const openai = new OpenAI();
const response = await openai.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Hello!" }],
});
<!-- For Spring Boot apps -->
<dependency>
    <groupId>com.github.future-agi.traceAI</groupId>
    <artifactId>traceai-spring-boot-starter</artifactId>
    <version>v1.0.0</version>
</dependency>
<dependency>
    <groupId>com.github.future-agi.traceAI</groupId>
    <artifactId>traceai-java-openai</artifactId>
    <version>v1.0.0</version>
</dependency>
import ai.traceai.TraceAI;
import ai.traceai.TraceConfig;
import ai.traceai.openai.TracedOpenAIClient;

// Initialize from environment variables
TraceAI.initFromEnvironment();

// Wrap your client
TracedOpenAIClient tracedClient = new TracedOpenAIClient(openAIClient);
var response = tracedClient.createChatCompletion(params);

Set FI_API_KEY, FI_SECRET_KEY, FI_BASE_URL, and FI_PROJECT_NAME as environment variables.

dotnet add package fi-instrumentation-otel
using FIInstrumentation;
using FIInstrumentation.Types;

var tracer = TraceAI.Register(opts =>
{
    opts.ProjectName = "my-project";
    opts.ProjectType = ProjectType.Observe;
});

// Create traced LLM calls with convenience methods
var result = tracer.Llm("openai-call", span =>
{
    span.SetInput("What is C#?");
    var response = CallOpenAI("What is C#?");
    span.SetOutput(response);
    return response;
});

TraceAI.Shutdown();

register()

Creates an OpenTelemetry tracer provider configured to export spans to your Future AGI dashboard.

from fi_instrumentation import register
from fi_instrumentation.fi_types import ProjectType, Transport

trace_provider = register(
    project_name="my-project",
    project_type=ProjectType.OBSERVE,
    transport=Transport.HTTP,
    batch=True,
    verbose=True,
)
| Parameter | Type | Default | Description |
|---|---|---|---|
| `project_name` | str / None | FI_PROJECT_NAME env var | Project identifier in the dashboard |
| `project_type` | ProjectType | EXPERIMENT | EXPERIMENT (dev, supports eval tags) or OBSERVE (production) |
| `project_version_name` | str / None | None | Version label (EXPERIMENT only) |
| `eval_tags` | list / None | None | Evaluation configs for automated span scoring (EXPERIMENT only) |
| `metadata` | dict / None | None | Custom metadata attached to all spans |
| `batch` | bool | True | True = BatchSpanProcessor, False = SimpleSpanProcessor |
| `set_global_tracer_provider` | bool | False | Register as the global OpenTelemetry default |
| `headers` | dict / None | None | Custom HTTP headers (auto-populated from API keys if not set) |
| `verbose` | bool | True | Print configuration details on startup |
| `transport` | Transport | HTTP | HTTP or GRPC |
| `semantic_convention` | SemanticConvention | FI | Attribute naming convention |

Returns: TracerProvider - pass this to .instrument(tracer_provider=...) on any instrumentor.

import { register, ProjectType, Transport } from "@traceai/fi-core";

const tracerProvider = register({
  projectName: "my-project",
  projectType: ProjectType.OBSERVE,
  transport: Transport.HTTP,
  batch: true,
  verbose: true,
});
| Parameter | Type | Default | Description |
|---|---|---|---|
| `projectName` | string | FI_PROJECT_NAME env var | Project identifier |
| `projectType` | ProjectType | EXPERIMENT | EXPERIMENT or OBSERVE |
| `projectVersionName` | string | undefined | Version label (EXPERIMENT only) |
| `evalTags` | EvalTag[] | undefined | Evaluation configs (EXPERIMENT only) |
| `sessionName` | string | undefined | Session name (OBSERVE only) |
| `metadata` | Record | undefined | Custom metadata |
| `batch` | boolean | false | Use batch span processor |
| `setGlobalTracerProvider` | boolean | true | Register as global provider |
| `headers` | FIHeaders | undefined | Custom HTTP headers |
| `verbose` | boolean | false | Verbose logging |
| `endpoint` | string | FI_BASE_URL | Custom endpoint |
| `transport` | Transport | HTTP | HTTP or GRPC |

Returns: FITracerProvider

import ai.traceai.TraceAI;
import ai.traceai.TraceConfig;

// Option 1: From environment variables
TraceAI.initFromEnvironment();

// Option 2: Programmatic configuration
TraceAI.init(TraceConfig.builder()
    .baseUrl("https://api.futureagi.com")
    .apiKey("your-api-key")
    .secretKey("your-secret-key")
    .projectName("my-project")
    .batchSize(512)
    .exportIntervalMs(5000)
    .build()
);

FITracer tracer = TraceAI.getTracer();
| Builder method | Default | Description |
|---|---|---|
| `baseUrl(String)` | FI_BASE_URL env var | Backend endpoint |
| `apiKey(String)` | FI_API_KEY env var | API authentication |
| `secretKey(String)` | FI_SECRET_KEY env var | Secondary authentication |
| `projectName(String)` | FI_PROJECT_NAME env var | Project identifier |
| `serviceName(String)` | project name | OpenTelemetry service name |
| `hideInputs(boolean)` | false | Suppress input values |
| `hideOutputs(boolean)` | false | Suppress output values |
| `hideInputMessages(boolean)` | false | Suppress input messages |
| `hideOutputMessages(boolean)` | false | Suppress output messages |
| `enableConsoleExporter(boolean)` | false | Log spans to console |
| `batchSize(int)` | 512 | Span batch size |
| `exportIntervalMs(long)` | 5000 | Export interval in ms |

For Spring Boot, add the starter dependency and configure via application.yml:

traceai:
  enabled: true
  base-url: https://api.futureagi.com
  api-key: ${FI_API_KEY}
  secret-key: ${FI_SECRET_KEY}
  project-name: my-app
  batch-size: 512
  export-interval-ms: 5000

The FITracer bean is auto-created and available for injection.

using FIInstrumentation;
using FIInstrumentation.Types;

var tracer = TraceAI.Register(opts =>
{
    opts.ProjectName = "my-project";
    opts.ProjectType = ProjectType.Observe;
    opts.Transport = Transport.Http;
    opts.Batch = true;
    opts.Verbose = true;
    opts.TraceConfig = TraceConfig.Builder()
        .HideInputs(false)
        .HideOutputs(false)
        .Build();
});
| Property | Type | Default | Description |
|---|---|---|---|
| `ProjectName` | string | FI_PROJECT_NAME env var | Project identifier |
| `ProjectType` | ProjectType | Experiment | Experiment or Observe |
| `ProjectVersionName` | string | null | Version label (Experiment only) |
| `EvalTags` | `List<EvalTag>` | null | Evaluation configs (Experiment only) |
| `Metadata` | Dictionary | null | Custom metadata |
| `Batch` | bool | true | Use batch span processor |
| `SetGlobalTracerProvider` | bool | true | Register as global provider |
| `Transport` | Transport | Http | Http or Grpc |
| `ApiKey` | string | FI_API_KEY env var | API key |
| `SecretKey` | string | FI_SECRET_KEY env var | Secret key |
| `TraceConfig` | TraceConfig | null | Privacy/masking configuration |
| `EnableConsoleExporter` | bool | false | Log spans to console |
| `Verbose` | bool | true | Print config on startup |

Returns: FITracer - use for creating custom spans.

ProjectType

| Value | Use for |
|---|---|
| EXPERIMENT | Development and testing. Supports eval tags and version names. |
| OBSERVE | Production monitoring. No eval tags, no version names. |

SemanticConvention (Python/TypeScript)

Controls how span attributes are named. We recommend OTEL_GENAI for standard OpenTelemetry GenAI conventions.

| Value | Attribute prefix | Use for |
|---|---|---|
| OTEL_GENAI | `gen_ai.*` | Recommended - OpenTelemetry GenAI standard |
| FI | `fi.*` | Legacy Future AGI format (default) |
| OPENINFERENCE | `openinference.*` | Arize Phoenix compatibility |
| OPENLLMETRY | `traceloop.*` | Traceloop / OpenLLMetry compatibility |

Tip

Pass semantic_convention=SemanticConvention.OTEL_GENAI for the best interoperability with other OpenTelemetry tools.
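The prefix column in the table above amounts to a simple lookup. The mapping below is illustrative, not an SDK API:

```python
# Illustrative prefix mapping taken from the table above (not an SDK API)
CONVENTION_PREFIX = {
    "OTEL_GENAI": "gen_ai.",
    "FI": "fi.",
    "OPENINFERENCE": "openinference.",
    "OPENLLMETRY": "traceloop.",
}

def attribute_name(convention: str, suffix: str) -> str:
    """Build the fully qualified span attribute name for a convention."""
    return CONVENTION_PREFIX[convention] + suffix

# attribute_name("OTEL_GENAI", "request.model") -> "gen_ai.request.model"
```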

FITracer - Custom Spans

Beyond auto-instrumentation, FITracer lets you create custom spans for your own logic - agent steps, chain stages, tool calls, or any operation you want to trace.

Span Kinds

All languages share the same span kinds:

| Kind | Use for |
|---|---|
| LLM | Language model inference calls |
| CHAIN | Sequential pipeline steps |
| AGENT | Autonomous agent actions |
| TOOL | Tool/function calls |
| EMBEDDING | Vector generation |
| RETRIEVER | Document retrieval (RAG) |
| RERANKER | Re-ranking operations |
| GUARDRAIL | Safety/validation checks |
| EVALUATOR | Quality scoring |
| UNKNOWN | Unspecified or unexpected span type |
| WORKFLOW | Custom pipeline steps (Java only) |
| CONVERSATION | Voice/conversational AI (Java/C#) |
| VECTOR_DB | Vector database operations (Java/C#) |

Decorators and Convenience Methods

Python’s FITracer provides decorators for clean span creation:

from fi_instrumentation import register
from fi_instrumentation.fi_types import ProjectType

trace_provider = register(
    project_name="my-project",
    project_type=ProjectType.OBSERVE,
)
tracer = trace_provider.get_tracer(__name__)

# Use the FITracer wrapper for decorators
from fi_instrumentation import FITracer
fi_tracer = FITracer(tracer)

@fi_tracer.agent(name="research-agent")
def research_agent(query):
    # This entire function becomes an AGENT span
    results = search(query)
    return summarize(results)

@fi_tracer.chain(name="rag-pipeline")
def rag_pipeline(question):
    docs = retrieve(question)
    return generate(question, docs)

@fi_tracer.tool(
    name="web-search",
    description="Searches the web",
    parameters={"query": {"type": "string"}}
)
def web_search(query):
    return requests.get(f"https://api.search.com?q={query}").json()

You can also use context managers for manual span creation:

from fi_instrumentation.fi_types import FiSpanKindValues

with fi_tracer.start_as_current_span(
    "llm-call",
    fi_span_kind=FiSpanKindValues.LLM,
) as span:
    span.set_input(value="What is Python?")
    response = call_llm("What is Python?")
    span.set_output(value=response)
    span.set_attributes({
        "gen_ai.request.model": "gpt-4o",
        "gen_ai.usage.input_tokens": 10,
        "gen_ai.usage.output_tokens": 150,
    })

TypeScript uses OpenTelemetry’s standard startActiveSpan pattern:

import { trace } from "@opentelemetry/api";

const tracer = trace.getTracer("my-app");

// Manual span creation
tracer.startActiveSpan("rag-pipeline", (span) => {
  span.setAttribute("gen_ai.span.kind", "CHAIN");
  span.setAttribute("input.value", question);

  const docs = retrieve(question);
  const result = generate(question, docs);

  span.setAttribute("output.value", result);
  span.end();
  return result;
});

Context management functions let you set session, user, and metadata:

import {
  setSession, setUser, setMetadata, setTags,
  getAttributesFromContext
} from "@traceai/fi-core";
import { context } from "@opentelemetry/api";

const ctx = setSession(context.active(), { sessionId: "sess-123" });
const ctx2 = setUser(ctx, { userId: "user-456" });

context.with(ctx2, () => {
  // All spans created here inherit session and user
  tracer.startActiveSpan("operation", (span) => {
    // span automatically gets session.id and user.id
    span.end();
  });
});

Java offers both lambda-based and manual span creation:

import ai.traceai.FITracer;
import ai.traceai.FISpanKind;

FITracer tracer = TraceAI.getTracer();

// Lambda-based - auto-manages span lifecycle
String result = tracer.trace("rag-pipeline", FISpanKind.CHAIN, (span) -> {
    tracer.setInputValue(span, question);

    String docs = tracer.trace("retrieve", FISpanKind.RETRIEVER, (rSpan) -> {
        tracer.setInputValue(rSpan, question);
        var retrieved = vectorDb.search(question);
        tracer.setOutputValue(rSpan, tracer.toJson(retrieved));
        return retrieved;
    });

    String answer = tracer.trace("generate", FISpanKind.LLM, (lSpan) -> {
        tracer.setInputMessages(lSpan, List.of(
            tracer.message("system", "Answer using the context."),
            tracer.message("user", question)
        ));
        var resp = llm.generate(question, docs);
        tracer.setOutputMessages(lSpan, List.of(
            tracer.message("assistant", resp)
        ));
        tracer.setTokenCounts(lSpan, 50, 200, 250);
        return resp;
    });

    tracer.setOutputValue(span, answer);
    return answer;
});

Manual span creation for more control:

import io.opentelemetry.api.trace.Span;
import io.opentelemetry.context.Context;

Span span = tracer.startSpan("tool-call", FISpanKind.TOOL);
try {
    tracer.setInputValue(span, inputJson);
    String result = executeTool(inputJson);
    tracer.setOutputValue(span, result);
    span.setStatus(StatusCode.OK);
} catch (Exception e) {
    tracer.setError(span, e);
} finally {
    span.end();
}

C# provides typed convenience methods for each span kind:

var tracer = TraceAI.Register(opts =>
{
    opts.ProjectName = "my-project";
    opts.ProjectType = ProjectType.Observe;
});

// Convenience methods for each span kind
var result = tracer.Chain("rag-pipeline", span =>
{
    span.SetInput("What is quantum computing?");

    var docs = tracer.Tool("vector-search", toolSpan =>
    {
        toolSpan.SetTool("search", "Searches vector DB");
        toolSpan.SetInput("quantum computing");
        var results = vectorDb.Search("quantum computing");
        toolSpan.SetOutput(results);
        return results;
    });

    var answer = tracer.Llm("generate", llmSpan =>
    {
        llmSpan.SetAttribute(SemanticConventions.GenAiRequestModel, "gpt-4o");
        llmSpan.SetInputMessages(new List<Dictionary<string, string>>
        {
            FITracer.Message("user", "What is quantum computing?")
        });
        var resp = llm.Generate("What is quantum computing?", docs);
        llmSpan.SetOutputMessages(new List<Dictionary<string, string>>
        {
            FITracer.Message("assistant", resp)
        });
        llmSpan.SetTokenCounts(50, 200, 250);
        return resp;
    });

    span.SetOutput(answer);
    return answer;
});

// Async variants
await tracer.AgentAsync("research-agent", async span =>
{
    span.SetInput("Research topic X");
    var result = await RunResearchAsync("topic X");
    span.SetOutput(result);
});

Manual span creation:

using var span = tracer.StartSpan("custom-op", FISpanKind.Chain);
span.SetInput("input data");
span.SetOutput("output data");
// span.Dispose() ends the span automatically

FISpan Methods

All languages provide methods on the span object for setting structured data:

| Method | Description | Available in |
|---|---|---|
| `set_input(value, mime_type=)` / `SetInput(value, mimeType)` | Set span input value (text or JSON); mime_type accepts "text/plain" or "application/json" | Python, C# |
| `set_output(value, mime_type=)` / `SetOutput(value, mimeType)` | Set span output value | Python, C# |
| `set_tool(name, description, parameters)` / `SetTool(...)` | Attach tool metadata | Python, C# |
| `set_attributes(dict)` / `SetAttribute(key, value)` | Set custom attributes | All |
| `setInputValue(span, value)` | Set input on span | Java |
| `setOutputValue(span, value)` | Set output on span | Java |
| `setInputMessages(span, messages)` / `SetInputMessages(messages)` | Set chat message history | Java, C# |
| `setOutputMessages(span, messages)` / `SetOutputMessages(messages)` | Set response messages | Java, C# |
| `setTokenCounts(span, in, out, total)` / `SetTokenCounts(in, out, total)` | Set token usage | Java, C# |
| `setError(span, exception)` / `SetError(exception)` | Record an exception | Java, C# |

Note

In Java, these methods live on FITracer and take the span as the first argument (e.g. tracer.setInputValue(span, value)). In Python and C#, they’re called directly on the span object.

Context Helpers

Attach metadata, tags, session IDs, and user IDs to spans. These apply to all spans created within the scope.

from fi_instrumentation import (
    using_session, using_user, using_metadata,
    using_tags, using_prompt_template, using_attributes,
    suppress_tracing
)

# Individual context managers
with using_session("session-abc-123"):
    with using_user("user-456"):
        response = client.chat.completions.create(...)

with using_metadata({"environment": "production", "version": "2.1"}):
    response = client.chat.completions.create(...)

with using_tags(["rag-pipeline", "v2"]):
    response = client.chat.completions.create(...)

# Prompt template tracking
with using_prompt_template(
    template="Answer {question} using {context}",
    label="production",
    version="v1.2",
    variables={"question": "...", "context": "..."}
):
    response = client.chat.completions.create(...)

# Combined - set everything at once
with using_attributes(
    session_id="session-abc",
    user_id="user-456",
    metadata={"env": "prod"},
    tags=["rag", "v2"],
    prompt_template="Answer {question}",
    prompt_template_version="v1.2",
):
    response = client.chat.completions.create(...)

# Suppress tracing for a block
with suppress_tracing():
    # These calls won't be traced
    result = client.chat.completions.create(...)
import {
  setSession, getSession, clearSession,
  setUser, getUser, clearUser,
  setMetadata, setTags,
  setPromptTemplate,
  getAttributesFromContext
} from "@traceai/fi-core";
import { context } from "@opentelemetry/api";

// Build up context with multiple attributes
let ctx = context.active();
ctx = setSession(ctx, { sessionId: "session-abc-123" });
ctx = setUser(ctx, { userId: "user-456" });
ctx = setMetadata(ctx, { environment: "production" });
ctx = setTags(ctx, ["rag-pipeline", "v2"]);
ctx = setPromptTemplate(ctx, {
  template: "Answer {{question}} using {{context}}",
  variables: { question: "...", context: "..." },
  version: "v1.2",
});

// All spans created in this context inherit these attributes
context.with(ctx, async () => {
  const response = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: "Hello" }],
  });
});

// Read attributes back from context
const attrs = getAttributesFromContext(ctx);

Java uses AutoCloseable scopes with try-with-resources:

import ai.traceai.ContextAttributes;

// Session tracking
try (var ignored = ContextAttributes.usingSession("session-abc-123")) {
    // All spans here get session.id and gen_ai.conversation.id
    var response = tracedClient.createChatCompletion(params);
}

// User tracking
try (var ignored = ContextAttributes.usingUser("user-456")) {
    var response = tracedClient.createChatCompletion(params);
}

// Metadata
try (var ignored = ContextAttributes.usingMetadata(Map.of(
    "environment", "production",
    "version", "2.1"
))) {
    var response = tracedClient.createChatCompletion(params);
}

// Tags
try (var ignored = ContextAttributes.usingTags(List.of("rag-pipeline", "v2"))) {
    var response = tracedClient.createChatCompletion(params);
}

// Nest them for combined context
try (var s = ContextAttributes.usingSession("session-abc");
     var u = ContextAttributes.usingUser("user-456");
     var m = ContextAttributes.usingMetadata(Map.of("env", "prod"))) {
    var response = tracedClient.createChatCompletion(params);
}

// Read current attributes
Map<String, Object> attrs = ContextAttributes.getAttributesFromContext();

C# uses IDisposable scopes with using statements:

using FIInstrumentation.Context;

// Session and user tracking
using (ContextAttributes.UsingSession("session-abc-123"))
using (ContextAttributes.UsingUser("user-456"))
{
    tracer.Llm("llm-call", span =>
    {
        // span automatically gets session.id and user.id
        span.SetInput("Hello!");
    });
}

// Metadata and tags
using (ContextAttributes.UsingMetadata(new Dictionary<string, object>
{
    ["environment"] = "production",
    ["version"] = "2.1"
}))
using (ContextAttributes.UsingTags(new List<string> { "rag-pipeline", "v2" }))
{
    tracer.Chain("pipeline", span => { /* ... */ });
}

// Prompt template tracking
using (ContextAttributes.UsingPromptTemplate(
    template: "Answer {question} using {context}",
    label: "production",
    version: "v1.2",
    variables: new Dictionary<string, object>
    {
        ["question"] = "...",
        ["context"] = "..."
    }
))
{
    tracer.Llm("templated-call", span => { /* ... */ });
}

// Combined - set everything at once
using (ContextAttributes.UsingAttributes(
    sessionId: "session-abc",
    userId: "user-456",
    metadata: new Dictionary<string, object> { ["env"] = "prod" },
    tags: new List<string> { "rag", "v2" }
))
{
    tracer.Chain("full-context", span => { /* ... */ });
}

Suppress Tracing

Temporarily disable tracing for a block of code. Useful for health checks, internal calls, or operations you don’t want in your traces. Available in Python and C# only - Java and TypeScript don’t have this API.

from fi_instrumentation import suppress_tracing

with suppress_tracing():
    # Nothing in this block is traced
    result = client.chat.completions.create(...)
using FIInstrumentation.Context;

using (new SuppressTracing())
{
    // Nothing in this block is traced
}

TraceConfig

Control what data gets captured. Useful for privacy compliance, reducing payload size, or masking sensitive data.

from fi_instrumentation import TraceConfig

config = TraceConfig(
    hide_inputs=True,
    hide_outputs=True,
    pii_redaction=True,
)

# Pass to instrumentors
OpenAIInstrumentor().instrument(
    tracer_provider=trace_provider,
    config=config,
)
TraceAI.init(TraceConfig.builder()
    .baseUrl("https://api.futureagi.com")
    .apiKey("your-key")
    .projectName("my-project")
    .hideInputs(true)
    .hideOutputs(true)
    .hideInputMessages(true)
    .hideOutputMessages(true)
    .build()
);

In TypeScript, TraceConfig is passed per-instrumentor, not to register():

import { OpenAIInstrumentation } from "@traceai/openai";
import { registerInstrumentations } from "@opentelemetry/instrumentation";

registerInstrumentations({
  tracerProvider,
  instrumentations: [
    new OpenAIInstrumentation({
      traceConfig: {
        hideInputs: true,
        hideOutputs: true,
        hideInputImages: true,
        hideEmbeddingVectors: true,
        base64ImageMaxLength: 16000,
        piiRedaction: true,
      },
    }),
  ],
});
var tracer = TraceAI.Register(opts =>
{
    opts.ProjectName = "my-project";
    opts.TraceConfig = TraceConfig.Builder()
        .HideInputs(true)
        .HideOutputs(true)
        .HideInputImages(true)
        .HideEmbeddingVectors(true)
        .Base64ImageMaxLength(16000)
        .Build();
});
| Field | Type | Default | What it hides |
|---|---|---|---|
| `hide_inputs` | bool | False | All input values and messages |
| `hide_outputs` | bool | False | All output values and messages |
| `hide_input_messages` | bool | False | Input messages only |
| `hide_output_messages` | bool | False | Output messages only |
| `hide_input_images` | bool | False | Images in inputs |
| `hide_input_text` | bool | False | Text in input messages |
| `hide_output_text` | bool | False | Text in output messages |
| `hide_embedding_vectors` | bool | False | Embedding vectors |
| `hide_llm_invocation_parameters` | bool | False | Model parameters (temperature, etc.) |
| `base64_image_max_length` | int | 32000 | Truncate base64 images beyond this length |
| `pii_redaction` | bool | False | Automatically mask PII (Python only) |

Each field maps to an environment variable with the FI_ prefix (e.g. hide_inputs -> FI_HIDE_INPUTS).
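The mapping is mechanical, uppercase the field name and add the prefix. A one-line illustrative helper:

```python
def trace_config_env_var(field: str) -> str:
    """Map a TraceConfig field name to its FI_-prefixed environment variable."""
    return "FI_" + field.upper()

# trace_config_env_var("hide_inputs") -> "FI_HIDE_INPUTS"
```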

PII Redaction (Python)

When pii_redaction=True, the SDK automatically detects and masks 6 types of personally identifiable information:

| PII Type | Pattern | Replaced with |
|---|---|---|
| Email addresses | user@example.com | `<EMAIL_ADDRESS>` |
| Social Security Numbers | 123-45-6789 | `<SSN>` |
| Credit card numbers | 4111-1111-1111-1111 | `<CREDIT_CARD>` |
| API keys | sk_live_..., pk_test_... | `<API_KEY>` |
| IP addresses (IPv4) | 192.168.1.1 | `<IP_ADDRESS>` |
| Phone numbers | +1-555-123-4567 | `<PHONE_NUMBER>` |
# Enable via code
config = TraceConfig(pii_redaction=True)

# Or via environment variable
# export FI_PII_REDACTION=true

# Direct usage
from fi_instrumentation.instrumentation.pii_redaction import redact_pii_in_string

redacted = redact_pii_in_string("Email me at test@example.com")
# "Email me at <EMAIL_ADDRESS>"

EvalTags - Attach Evaluations to Traces

EvalTags let you configure automatic evaluations that run server-side on your traced spans. Attach them during register() and the platform scores spans as they arrive.

from fi_instrumentation import register
from fi_instrumentation.fi_types import (
    ProjectType, EvalTag, EvalTagType,
    EvalSpanKind, EvalName, ModelChoices
)

trace_provider = register(
    project_name="my-project",
    project_type=ProjectType.EXPERIMENT,
    project_version_name="v1.0",
    eval_tags=[
        EvalTag(
            type=EvalTagType.OBSERVATION_SPAN,
            value=EvalSpanKind.LLM,
            eval_name=EvalName.GROUNDEDNESS,
            model=ModelChoices.TURING_FLASH,
        ),
        EvalTag(
            type=EvalTagType.OBSERVATION_SPAN,
            value=EvalSpanKind.LLM,
            eval_name=EvalName.TOXICITY,
            model=ModelChoices.TURING_FLASH,
        ),
    ],
)
import {
  register, ProjectType, EvalTag,
  EvalTagType, EvalSpanKind, EvalName, ModelChoices
} from "@traceai/fi-core";

const tracerProvider = register({
  projectName: "my-project",
  projectType: ProjectType.EXPERIMENT,
  projectVersionName: "v1.0",
  evalTags: [
    await EvalTag.create({
      type: EvalTagType.OBSERVATION_SPAN,
      value: EvalSpanKind.LLM,
      eval_name: EvalName.GROUNDEDNESS,
      model: ModelChoices.TURING_FLASH,
    }),
    await EvalTag.create({
      type: EvalTagType.OBSERVATION_SPAN,
      value: EvalSpanKind.LLM,
      eval_name: EvalName.TOXICITY,
      model: ModelChoices.TURING_FLASH,
    }),
  ],
});

Note

EvalTag.create() is async in TypeScript because it validates the eval configuration with the server.

using FIInstrumentation;
using FIInstrumentation.Types;

var tracer = TraceAI.Register(opts =>
{
    opts.ProjectName = "my-project";
    opts.ProjectType = ProjectType.Experiment;
    opts.ProjectVersionName = "v1.0";
    opts.EvalTags = new List<EvalTag>
    {
        new EvalTag(EvalSpanKind.Llm, EvalName.Groundedness)
        {
            Model = ModelChoices.TuringFlash,
        },
        new EvalTag(EvalSpanKind.Llm, EvalName.Toxicity)
        {
            Model = ModelChoices.TuringFlash,
        },
    };
});

EvalSpanKind

Which span types to evaluate:

| Value | Description |
|---|---|
| LLM | Language model calls |
| RETRIEVER | Document retrieval spans |
| TOOL | Tool/function calls |
| AGENT | Agent spans |
| EMBEDDING | Embedding generation |
| RERANKER | Re-ranking operations |

ModelChoices

Which evaluation model to use:

| Value | Description |
|---|---|
| TURING_FLASH | Fast evaluation model |
| TURING_SMALL | Small evaluation model |
| TURING_LARGE | High-accuracy evaluation model |
| PROTECT | Safety-focused model |
| PROTECT_FLASH | Fast safety model |

Note

EvalTags only work with ProjectType.EXPERIMENT. For production monitoring without evals, use ProjectType.OBSERVE.
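A small guard can catch this misconfiguration before spans start flowing. This check is illustrative only; the SDK may validate this itself:

```python
def check_eval_tags(project_type: str, eval_tags) -> None:
    """Raise early if eval tags are paired with a non-EXPERIMENT project."""
    if eval_tags and project_type != "EXPERIMENT":
        raise ValueError(
            "eval_tags require ProjectType.EXPERIMENT; "
            "use OBSERVE without eval tags for production"
        )
```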

Instrumentors

Each framework has its own instrumentor package. Install the one for your framework and call .instrument().

# Pattern is the same for every framework:
from traceai_<framework> import <Framework>Instrumentor
<Framework>Instrumentor().instrument(tracer_provider=trace_provider)
LLM Providers

| Package | Framework | Instrumentor class |
|---|---|---|
| traceai-openai | OpenAI | OpenAIInstrumentor |
| traceai-anthropic | Anthropic | AnthropicInstrumentor |
| traceai-google-genai | Google GenAI | GoogleGenAIInstrumentor |
| traceai-vertexai | Vertex AI | VertexAIInstrumentor |
| traceai-bedrock | AWS Bedrock | BedrockInstrumentor |
| traceai-mistralai | Mistral AI | MistralAIInstrumentor |
| traceai-groq | Groq | GroqInstrumentor |
| traceai-litellm | LiteLLM | LiteLLMInstrumentor |
| traceai-cohere | Cohere | CohereInstrumentor |
| traceai-ollama | Ollama | OllamaInstrumentor |
| traceai-deepseek | DeepSeek | DeepSeekInstrumentor |
| traceai-together | Together AI | TogetherInstrumentor |
| traceai-fireworks | Fireworks AI | FireworksInstrumentor |
| traceai-cerebras | Cerebras | CerebrasInstrumentor |
| traceai-xai | xAI / Grok | XAIInstrumentor |
| traceai-vllm | vLLM | VLLMInstrumentor |
| traceai-portkey | Portkey | PortkeyInstrumentor |
| traceai-huggingface | HuggingFace | HuggingFaceInstrumentor |

Frameworks and Agents

| Package | Framework | Instrumentor class |
|---|---|---|
| traceai-langchain | LangChain / LangGraph | LangChainInstrumentor |
| traceai-llamaindex | LlamaIndex | LlamaIndexInstrumentor |
| traceai-crewai | CrewAI | CrewAIInstrumentor |
| traceai-openai-agents | OpenAI Agents SDK | OpenAIAgentsInstrumentor |
| traceai-autogen | Microsoft AutoGen | AutoGenInstrumentor |
| traceai-smolagents | HuggingFace SmolAgents | SmolAgentsInstrumentor |
| traceai-google-adk | Google Agent Dev Kit | GoogleADKInstrumentor |
| traceai-claude-agent-sdk | Claude Agent SDK | ClaudeAgentSDKInstrumentor |
| traceai-pydantic-ai | Pydantic AI | PydanticAIInstrumentor |
| traceai-strands | AWS Strands Agents | StrandsInstrumentor |
| traceai-agno | Agno | AgnoInstrumentor |
| traceai-beeai | IBM BeeAI | BeeAIInstrumentor |
| traceai-haystack | Haystack | HaystackInstrumentor |
| traceai-dspy | DSPy | DSPyInstrumentor |
| traceai-guardrails | Guardrails AI | GuardrailsInstrumentor |
| traceai-instructor | Instructor | InstructorInstrumentor |
| traceai-mcp | Model Context Protocol | MCPInstrumentor |

Voice

| Package | Framework | Instrumentor class |
|---|---|---|
| traceai-pipecat | Pipecat | PipecatInstrumentor |
| traceai-livekit | LiveKit | LiveKitInstrumentor |

Vector Databases

| Package | Framework | Instrumentor class |
|---|---|---|
| traceai-pinecone | Pinecone | PineconeInstrumentor |
| traceai-chromadb | ChromaDB | ChromaDBInstrumentor |
| traceai-qdrant | Qdrant | QdrantInstrumentor |
| traceai-weaviate | Weaviate | WeaviateInstrumentor |
| traceai-milvus | Milvus | MilvusInstrumentor |
| traceai-lancedb | LanceDB | LanceDBInstrumentor |
| traceai-mongodb | MongoDB | MongoDBInstrumentor |
| traceai-pgvector | pgvector | PgVectorInstrumentor |
| traceai-redis | Redis | RedisInstrumentor |

Cleanup

To remove instrumentation (useful in tests or serverless cleanup):

OpenAIInstrumentor().uninstrument()
TraceAI.shutdown();  // Flushes remaining spans and shuts down
TraceAI.Shutdown();  // Flushes remaining spans and shuts down

For per-framework setup guides with full examples, see the Auto-Instrumentation docs.

Other Languages

The tables above show Python packages. TypeScript, Java, and C# have their own instrumentation libraries:

TypeScript packages follow the @traceai/<framework> pattern. All use OpenTelemetry’s registerInstrumentations().

import { registerInstrumentations } from "@opentelemetry/instrumentation";
import { OpenAIInstrumentation } from "@traceai/openai";
import { AnthropicInstrumentation } from "@traceai/anthropic";
import { LangChainInstrumentation } from "@traceai/langchain";
import { PineconeInstrumentation } from "@traceai/pinecone";

registerInstrumentations({
  tracerProvider,
  instrumentations: [
    new OpenAIInstrumentation(),
    new AnthropicInstrumentation(),
    new LangChainInstrumentation(),
    new PineconeInstrumentation(),
  ],
});

More than 40 packages are available, covering all LLM providers, frameworks, and vector DBs from the Python list, plus @traceai/vercel for Vercel/Next.js and @traceai/mastra.

Java uses the Traced* wrapper pattern. Each integration wraps the native client:

// LLM Providers
TracedOpenAIClient traced = new TracedOpenAIClient(openAIClient);
TracedAnthropicClient traced = new TracedAnthropicClient(anthropicClient);
TracedBedrockRuntimeClient traced = new TracedBedrockRuntimeClient(bedrockClient);
TracedGenerativeModel traced = new TracedGenerativeModel(model);  // Google GenAI
TracedOllamaAPI traced = new TracedOllamaAPI(ollamaAPI);
TracedCohereClient traced = new TracedCohereClient(cohereClient);
TracedWatsonxAI traced = new TracedWatsonxAI(watsonxClient);

// Vector Databases
TracedPineconeIndex traced = new TracedPineconeIndex(index, "my-index");
TracedQdrantClient traced = new TracedQdrantClient(qdrantClient);
TracedMilvusClient traced = new TracedMilvusClient(milvusClient);
TracedChromaCollection traced = new TracedChromaCollection(collection);
TracedMongoVectorSearch traced = new TracedMongoVectorSearch(collection);
TracedRedisVectorSearch traced = new TracedRedisVectorSearch(jedis);
TracedSearchClient traced = new TracedSearchClient(searchClient);    // Azure Search
TracedPgVectorStore traced = new TracedPgVectorStore(connection);
TracedElasticsearchClient traced = new TracedElasticsearchClient(esClient);

// Framework integrations
TracedChatLanguageModel traced = new TracedChatLanguageModel(model, tracer, "openai");  // LangChain4j
TracedChatModel traced = new TracedChatModel(chatModel, tracer, "openai");              // Spring AI
TracedKernel traced = new TracedKernel(kernel, tracer);                                 // Semantic Kernel

Maven coordinates: com.github.future-agi.traceAI:traceai-java-<provider>:v1.0.0

C# uses manual tracing via FITracer. No auto-instrumentation wrappers yet - use the convenience methods (Llm(), Chain(), Agent(), Tool()) to create spans around your calls.

// Wrap any LLM call
var response = tracer.Llm("openai-call", span =>
{
    span.SetAttribute(SemanticConventions.GenAiRequestModel, "gpt-4o");
    span.SetInput(prompt);
    var result = CallOpenAI(prompt);
    span.SetOutput(result);
    span.SetTokenCounts(inputTokens, outputTokens, totalTokens);
    return result;
});

Install: dotnet add package fi-instrumentation-otel

Environment Variables

All languages read from the same set of environment variables:

| Variable | Purpose | Default |
|---|---|---|
| `FI_API_KEY` | Authentication | required |
| `FI_SECRET_KEY` | Authentication | required |
| `FI_BASE_URL` | HTTP collector endpoint | https://api.futureagi.com |
| `FI_GRPC_URL` | gRPC collector endpoint | https://grpc.futureagi.com |
| `FI_PROJECT_NAME` | Default project name | None |
| `FI_PROJECT_VERSION_NAME` | Default version | None |
| `FI_HIDE_INPUTS` | Redact inputs | False |
| `FI_HIDE_OUTPUTS` | Redact outputs | False |
| `FI_HIDE_INPUT_MESSAGES` | Redact input messages | False |
| `FI_HIDE_OUTPUT_MESSAGES` | Redact output messages | False |
| `FI_HIDE_INPUT_IMAGES` | Redact input images | False |
| `FI_HIDE_INPUT_TEXT` | Redact input text | False |
| `FI_HIDE_OUTPUT_TEXT` | Redact output text | False |
| `FI_HIDE_EMBEDDING_VECTORS` | Redact embedding vectors | False |
| `FI_HIDE_LLM_INVOCATION_PARAMETERS` | Redact model parameters | False |
| `FI_BASE64_IMAGE_MAX_LENGTH` | Max base64 image chars | 32000 |
| `FI_PII_REDACTION` | Auto-mask PII (Python) | False |
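Endpoint resolution follows the defaults above; the sketch below shows that logic for reference and is illustrative, not the SDK's implementation:

```python
import os

def resolve_endpoint(transport: str = "HTTP", env=os.environ) -> str:
    """Resolve the collector endpoint using the defaults listed above."""
    if transport == "GRPC":
        return env.get("FI_GRPC_URL", "https://grpc.futureagi.com")
    return env.get("FI_BASE_URL", "https://api.futureagi.com")
```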