Tracing
Set up OpenTelemetry tracing across Python, TypeScript, Java, and C#. Auto-instrument 45+ frameworks or create custom spans with FITracer.
- register() sets up the tracer provider in two lines, all languages
- Auto-instrument with traceai-* packages (45+ frameworks) or create custom spans with FITracer
- Context helpers attach session, user, metadata, and tags to all spans in a block
- TraceConfig controls privacy masking; PII redaction covers 6 data types automatically
The pattern is the same across all four languages: call register() once to set up the provider, then either auto-instrument your frameworks or use FITracer for custom spans. LLM calls, retrieval steps, and agent actions get captured as OpenTelemetry spans and sent to your dashboard.
Note
Requires FI_API_KEY and FI_SECRET_KEY in your environment. For conceptual background on traces, spans, and attributes, see the Tracing guide.
Quick Example
pip install fi-instrumentation-otel traceai-openai
from fi_instrumentation import register
from fi_instrumentation.fi_types import ProjectType
from traceai_openai import OpenAIInstrumentor
# 1. Register the tracer provider
trace_provider = register(
project_name="my-project",
project_type=ProjectType.OBSERVE,
)
# 2. Instrument your framework
OpenAIInstrumentor().instrument(tracer_provider=trace_provider)
# 3. Use OpenAI as normal - all calls are now traced
import openai
client = openai.OpenAI()
response = client.chat.completions.create(
model="gpt-4o-mini",
messages=[{"role": "user", "content": "What is Python?"}],
)
npm install @traceai/openai @traceai/fi-core @opentelemetry/instrumentation
import { register, ProjectType } from "@traceai/fi-core";
import { OpenAIInstrumentation } from "@traceai/openai";
import { registerInstrumentations } from "@opentelemetry/instrumentation";
import OpenAI from "openai";
const tracerProvider = register({
projectName: "my-project",
projectType: ProjectType.OBSERVE,
});
registerInstrumentations({
tracerProvider,
instrumentations: [new OpenAIInstrumentation()],
});
const openai = new OpenAI();
const response = await openai.chat.completions.create({
model: "gpt-4o-mini",
messages: [{ role: "user", content: "Hello!" }],
});
<!-- For Spring Boot apps -->
<dependency>
<groupId>com.github.future-agi.traceAI</groupId>
<artifactId>traceai-spring-boot-starter</artifactId>
<version>v1.0.0</version>
</dependency>
<dependency>
<groupId>com.github.future-agi.traceAI</groupId>
<artifactId>traceai-java-openai</artifactId>
<version>v1.0.0</version>
</dependency>
import ai.traceai.TraceAI;
import ai.traceai.TraceConfig;
import ai.traceai.openai.TracedOpenAIClient;
// Initialize from environment variables
TraceAI.initFromEnvironment();
// Wrap your client
TracedOpenAIClient tracedClient = new TracedOpenAIClient(openAIClient);
var response = tracedClient.createChatCompletion(params);
Set FI_API_KEY, FI_SECRET_KEY, FI_BASE_URL, and FI_PROJECT_NAME as environment variables.
dotnet add package fi-instrumentation-otel
using FIInstrumentation;
using FIInstrumentation.Types;
var tracer = TraceAI.Register(opts =>
{
opts.ProjectName = "my-project";
opts.ProjectType = ProjectType.Observe;
});
// Create traced LLM calls with convenience methods
var result = tracer.Llm("openai-call", span =>
{
span.SetInput("What is C#?");
var response = CallOpenAI("What is C#?");
span.SetOutput(response);
return response;
});
TraceAI.Shutdown();
register()
Creates an OpenTelemetry tracer provider configured to export spans to your Future AGI dashboard.
from fi_instrumentation import register
from fi_instrumentation.fi_types import ProjectType, Transport
trace_provider = register(
project_name="my-project",
project_type=ProjectType.OBSERVE,
transport=Transport.HTTP,
batch=True,
verbose=True,
)
| Parameter | Type | Default | Description |
|---|---|---|---|
project_name | str / None | FI_PROJECT_NAME env var | Project identifier in the dashboard |
project_type | ProjectType | EXPERIMENT | EXPERIMENT (dev, supports eval tags) or OBSERVE (production) |
project_version_name | str / None | None | Version label (EXPERIMENT only) |
eval_tags | list / None | None | Evaluation configs for automated span scoring (EXPERIMENT only) |
metadata | dict / None | None | Custom metadata attached to all spans |
batch | bool | True | True = BatchSpanProcessor, False = SimpleSpanProcessor |
set_global_tracer_provider | bool | False | Register as the global OpenTelemetry default |
headers | dict / None | None | Custom HTTP headers (auto-populated from API keys if not set) |
verbose | bool | True | Print configuration details on startup |
transport | Transport | HTTP | HTTP or GRPC |
semantic_convention | SemanticConvention | FI | Attribute naming convention |
Returns: TracerProvider - pass this to .instrument(tracer_provider=...) on any instrumentor.
import { register, ProjectType, Transport } from "@traceai/fi-core";
const tracerProvider = register({
projectName: "my-project",
projectType: ProjectType.OBSERVE,
transport: Transport.HTTP,
batch: true,
verbose: true,
});
| Parameter | Type | Default | Description |
|---|---|---|---|
projectName | string | FI_PROJECT_NAME env var | Project identifier |
projectType | ProjectType | EXPERIMENT | EXPERIMENT or OBSERVE |
projectVersionName | string | undefined | Version label (EXPERIMENT only) |
evalTags | EvalTag[] | undefined | Evaluation configs (EXPERIMENT only) |
sessionName | string | undefined | Session name (OBSERVE only) |
metadata | Record | undefined | Custom metadata |
batch | boolean | false | Use batch span processor |
setGlobalTracerProvider | boolean | true | Register as global provider |
headers | FIHeaders | undefined | Custom HTTP headers |
verbose | boolean | false | Verbose logging |
endpoint | string | FI_BASE_URL | Custom endpoint |
transport | Transport | HTTP | HTTP or GRPC |
Returns: FITracerProvider
import ai.traceai.TraceAI;
import ai.traceai.TraceConfig;
// Option 1: From environment variables
TraceAI.initFromEnvironment();
// Option 2: Programmatic configuration
TraceAI.init(TraceConfig.builder()
.baseUrl("https://api.futureagi.com")
.apiKey("your-api-key")
.secretKey("your-secret-key")
.projectName("my-project")
.batchSize(512)
.exportIntervalMs(5000)
.build()
);
FITracer tracer = TraceAI.getTracer();
| Builder method | Default | Description |
|---|---|---|
baseUrl(String) | FI_BASE_URL env var | Backend endpoint |
apiKey(String) | FI_API_KEY env var | API authentication |
secretKey(String) | FI_SECRET_KEY env var | Secondary authentication |
projectName(String) | FI_PROJECT_NAME env var | Project identifier |
serviceName(String) | project name | OpenTelemetry service name |
hideInputs(boolean) | false | Suppress input values |
hideOutputs(boolean) | false | Suppress output values |
hideInputMessages(boolean) | false | Suppress input messages |
hideOutputMessages(boolean) | false | Suppress output messages |
enableConsoleExporter(boolean) | false | Log spans to console |
batchSize(int) | 512 | Span batch size |
exportIntervalMs(long) | 5000 | Export interval in ms |
For Spring Boot, add the starter dependency and configure via application.yml:
traceai:
  enabled: true
  base-url: https://api.futureagi.com
  api-key: ${FI_API_KEY}
  secret-key: ${FI_SECRET_KEY}
  project-name: my-app
  batch-size: 512
  export-interval-ms: 5000
The FITracer bean is auto-created and available for injection.
using FIInstrumentation;
using FIInstrumentation.Types;
var tracer = TraceAI.Register(opts =>
{
opts.ProjectName = "my-project";
opts.ProjectType = ProjectType.Observe;
opts.Transport = Transport.Http;
opts.Batch = true;
opts.Verbose = true;
opts.TraceConfig = TraceConfig.Builder()
.HideInputs(false)
.HideOutputs(false)
.Build();
});
| Property | Type | Default | Description |
|---|---|---|---|
ProjectName | string | FI_PROJECT_NAME env var | Project identifier |
ProjectType | ProjectType | Experiment | Experiment or Observe |
ProjectVersionName | string | null | Version label (Experiment only) |
EvalTags | List<EvalTag> | null | Evaluation configs (Experiment only) |
Metadata | Dictionary | null | Custom metadata |
Batch | bool | true | Use batch span processor |
SetGlobalTracerProvider | bool | true | Register as global provider |
Transport | Transport | Http | Http or Grpc |
ApiKey | string | FI_API_KEY env var | API key |
SecretKey | string | FI_SECRET_KEY env var | Secret key |
TraceConfig | TraceConfig | null | Privacy/masking configuration |
EnableConsoleExporter | bool | false | Log spans to console |
Verbose | bool | true | Print config on startup |
Returns: FITracer - use for creating custom spans.
ProjectType
| Value | Use for |
|---|---|
EXPERIMENT | Development and testing. Supports eval tags and version names. |
OBSERVE | Production monitoring. No eval tags, no version names. |
SemanticConvention (Python/TypeScript)
Controls how span attributes are named. We recommend OTEL_GENAI for standard OpenTelemetry GenAI conventions.
| Value | Attribute prefix | Use for |
|---|---|---|
OTEL_GENAI | gen_ai.* | Recommended - OpenTelemetry GenAI standard |
FI | fi.* | Legacy Future AGI format (default) |
OPENINFERENCE | openinference.* | Arize Phoenix compatibility |
OPENLLMETRY | traceloop.* | Traceloop / OpenLLMetry compatibility |
Tip
Pass semantic_convention=SemanticConvention.OTEL_GENAI for the best interoperability with other OpenTelemetry tools.
FITracer - Custom Spans
Beyond auto-instrumentation, FITracer lets you create custom spans for your own logic - agent steps, chain stages, tool calls, or any operation you want to trace.
Span Kinds
All languages share the same span kinds:
| Kind | Use for |
|---|---|
LLM | Language model inference calls |
CHAIN | Sequential pipeline steps |
AGENT | Autonomous agent actions |
TOOL | Tool/function calls |
EMBEDDING | Vector generation |
RETRIEVER | Document retrieval (RAG) |
RERANKER | Re-ranking operations |
GUARDRAIL | Safety/validation checks |
EVALUATOR | Quality scoring |
UNKNOWN | Unspecified or unexpected span type |
WORKFLOW | Custom pipeline steps (Java only) |
CONVERSATION | Voice/conversational AI (Java/C#) |
VECTOR_DB | Vector database operations (Java/C#) |
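For reference, the shared kinds can be mirrored as a plain enum. This is an illustrative sketch only (the real SDKs ship their own types, e.g. FiSpanKindValues in Python); a helper like this can validate kind names in your own code:

```python
from enum import Enum

class SpanKind(Enum):
    # Core kinds shared across all four SDKs
    LLM = "LLM"
    CHAIN = "CHAIN"
    AGENT = "AGENT"
    TOOL = "TOOL"
    EMBEDDING = "EMBEDDING"
    RETRIEVER = "RETRIEVER"
    RERANKER = "RERANKER"
    GUARDRAIL = "GUARDRAIL"
    EVALUATOR = "EVALUATOR"
    UNKNOWN = "UNKNOWN"

def parse_kind(name: str) -> SpanKind:
    """Fall back to UNKNOWN for unrecognized kind names."""
    try:
        return SpanKind(name.upper())
    except ValueError:
        return SpanKind.UNKNOWN
```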
Decorators and Convenience Methods
Python’s FITracer provides decorators for clean span creation:
from fi_instrumentation import register
from fi_instrumentation.fi_types import ProjectType
trace_provider = register(
project_name="my-project",
project_type=ProjectType.OBSERVE,
)
tracer = trace_provider.get_tracer(__name__)
# Use the FITracer wrapper for decorators
from fi_instrumentation import FITracer
fi_tracer = FITracer(tracer)
@fi_tracer.agent(name="research-agent")
def research_agent(query):
    # This entire function becomes an AGENT span
    results = search(query)
    return summarize(results)
@fi_tracer.chain(name="rag-pipeline")
def rag_pipeline(question):
    docs = retrieve(question)
    return generate(question, docs)
@fi_tracer.tool(
    name="web-search",
    description="Searches the web",
    parameters={"query": {"type": "string"}}
)
def web_search(query):
    return requests.get(f"https://api.search.com?q={query}").json()
You can also use context managers for manual span creation:
from fi_instrumentation.fi_types import FiSpanKindValues
with fi_tracer.start_as_current_span(
    "llm-call",
    fi_span_kind=FiSpanKindValues.LLM,
) as span:
    span.set_input(value="What is Python?")
    response = call_llm("What is Python?")
    span.set_output(value=response)
    span.set_attributes({
        "gen_ai.request.model": "gpt-4o",
        "gen_ai.usage.input_tokens": 10,
        "gen_ai.usage.output_tokens": 150,
    })
TypeScript uses OpenTelemetry’s standard startActiveSpan pattern:
import { trace } from "@opentelemetry/api";
const tracer = trace.getTracer("my-app");
// Manual span creation
tracer.startActiveSpan("rag-pipeline", (span) => {
span.setAttribute("gen_ai.span.kind", "CHAIN");
span.setAttribute("input.value", question);
const docs = retrieve(question);
const result = generate(question, docs);
span.setAttribute("output.value", result);
span.end();
return result;
});
Context management functions let you set session, user, and metadata:
import {
setSession, setUser, setMetadata, setTags,
getAttributesFromContext
} from "@traceai/fi-core";
import { context } from "@opentelemetry/api";
const ctx = setSession(context.active(), { sessionId: "sess-123" });
const ctx2 = setUser(ctx, { userId: "user-456" });
context.with(ctx2, () => {
// All spans created here inherit session and user
tracer.startActiveSpan("operation", (span) => {
// span automatically gets session.id and user.id
span.end();
});
});
Java offers both lambda-based and manual span creation:
import ai.traceai.FITracer;
import ai.traceai.FISpanKind;
FITracer tracer = TraceAI.getTracer();
// Lambda-based - auto-manages span lifecycle
String result = tracer.trace("rag-pipeline", FISpanKind.CHAIN, (span) -> {
tracer.setInputValue(span, question);
String docs = tracer.trace("retrieve", FISpanKind.RETRIEVER, (rSpan) -> {
tracer.setInputValue(rSpan, question);
var retrieved = vectorDb.search(question);
tracer.setOutputValue(rSpan, tracer.toJson(retrieved));
return retrieved;
});
String answer = tracer.trace("generate", FISpanKind.LLM, (lSpan) -> {
tracer.setInputMessages(lSpan, List.of(
tracer.message("system", "Answer using the context."),
tracer.message("user", question)
));
var resp = llm.generate(question, docs);
tracer.setOutputMessages(lSpan, List.of(
tracer.message("assistant", resp)
));
tracer.setTokenCounts(lSpan, 50, 200, 250);
return resp;
});
tracer.setOutputValue(span, answer);
return answer;
});
Manual span creation for more control:
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.context.Context;
Span span = tracer.startSpan("tool-call", FISpanKind.TOOL);
try {
tracer.setInputValue(span, inputJson);
String result = executeTool(inputJson);
tracer.setOutputValue(span, result);
span.setStatus(StatusCode.OK);
} catch (Exception e) {
tracer.setError(span, e);
} finally {
span.end();
}
C# provides typed convenience methods for each span kind:
var tracer = TraceAI.Register(opts =>
{
opts.ProjectName = "my-project";
opts.ProjectType = ProjectType.Observe;
});
// Convenience methods for each span kind
var result = tracer.Chain("rag-pipeline", span =>
{
span.SetInput("What is quantum computing?");
var docs = tracer.Tool("vector-search", toolSpan =>
{
toolSpan.SetTool("search", "Searches vector DB");
toolSpan.SetInput("quantum computing");
var results = vectorDb.Search("quantum computing");
toolSpan.SetOutput(results);
return results;
});
var answer = tracer.Llm("generate", llmSpan =>
{
llmSpan.SetAttribute(SemanticConventions.GenAiRequestModel, "gpt-4o");
llmSpan.SetInputMessages(new List<Dictionary<string, string>>
{
FITracer.Message("user", "What is quantum computing?")
});
var resp = llm.Generate("What is quantum computing?", docs);
llmSpan.SetOutputMessages(new List<Dictionary<string, string>>
{
FITracer.Message("assistant", resp)
});
llmSpan.SetTokenCounts(50, 200, 250);
return resp;
});
span.SetOutput(answer);
return answer;
});
// Async variants
await tracer.AgentAsync("research-agent", async span =>
{
span.SetInput("Research topic X");
var result = await RunResearchAsync("topic X");
span.SetOutput(result);
});
Manual span creation:
using var span = tracer.StartSpan("custom-op", FISpanKind.Chain);
span.SetInput("input data");
span.SetOutput("output data");
// span.Dispose() ends the span automatically
FISpan Methods
All languages provide methods on the span object for setting structured data:
| Method | Description | Available in |
|---|---|---|
set_input(value, mime_type=) / SetInput(value, mimeType) | Set span input value (text or JSON). mime_type accepts "text/plain" or "application/json" | Python, C# |
set_output(value, mime_type=) / SetOutput(value, mimeType) | Set span output value | Python, C# |
set_tool(name, description, parameters) / SetTool(...) | Attach tool metadata | Python, C# |
set_attributes(dict) / SetAttribute(key, value) | Set custom attributes | All |
setInputValue(span, value) | Set input on span | Java |
setOutputValue(span, value) | Set output on span | Java |
setInputMessages(span, messages) / SetInputMessages(messages) | Set chat message history | Java, C# |
setOutputMessages(span, messages) / SetOutputMessages(messages) | Set response messages | Java, C# |
setTokenCounts(span, in, out, total) / SetTokenCounts(in, out, total) | Set token usage | Java, C# |
setError(span, exception) / SetError(exception) | Record an exception | Java, C# |
Note
In Java, these methods live on FITracer and take the span as the first argument (e.g. tracer.setInputValue(span, value)). In Python and C#, they’re called directly on the span object.
Context Helpers
Attach metadata, tags, session IDs, and user IDs to spans. These apply to all spans created within the scope.
from fi_instrumentation import (
using_session, using_user, using_metadata,
using_tags, using_prompt_template, using_attributes,
suppress_tracing
)
# Individual context managers
with using_session("session-abc-123"):
    with using_user("user-456"):
        response = client.chat.completions.create(...)
with using_metadata({"environment": "production", "version": "2.1"}):
    response = client.chat.completions.create(...)
with using_tags(["rag-pipeline", "v2"]):
    response = client.chat.completions.create(...)
# Prompt template tracking
with using_prompt_template(
    template="Answer {question} using {context}",
    label="production",
    version="v1.2",
    variables={"question": "...", "context": "..."}
):
    response = client.chat.completions.create(...)
# Combined - set everything at once
with using_attributes(
    session_id="session-abc",
    user_id="user-456",
    metadata={"env": "prod"},
    tags=["rag", "v2"],
    prompt_template="Answer {question}",
    prompt_template_version="v1.2",
):
    response = client.chat.completions.create(...)
# Suppress tracing for a block
with suppress_tracing():
    # These calls won't be traced
    result = client.chat.completions.create(...)
import {
setSession, getSession, clearSession,
setUser, getUser, clearUser,
setMetadata, setTags,
setPromptTemplate,
getAttributesFromContext
} from "@traceai/fi-core";
import { context } from "@opentelemetry/api";
// Build up context with multiple attributes
let ctx = context.active();
ctx = setSession(ctx, { sessionId: "session-abc-123" });
ctx = setUser(ctx, { userId: "user-456" });
ctx = setMetadata(ctx, { environment: "production" });
ctx = setTags(ctx, ["rag-pipeline", "v2"]);
ctx = setPromptTemplate(ctx, {
template: "Answer {{question}} using {{context}}",
variables: { question: "...", context: "..." },
version: "v1.2",
});
// All spans created in this context inherit these attributes
context.with(ctx, async () => {
const response = await openai.chat.completions.create({
model: "gpt-4o-mini",
messages: [{ role: "user", content: "Hello" }],
});
});
// Read attributes back from context
const attrs = getAttributesFromContext(ctx);
Java uses AutoCloseable scopes with try-with-resources:
import ai.traceai.ContextAttributes;
// Session tracking
try (var ignored = ContextAttributes.usingSession("session-abc-123")) {
// All spans here get session.id and gen_ai.conversation.id
var response = tracedClient.createChatCompletion(params);
}
// User tracking
try (var ignored = ContextAttributes.usingUser("user-456")) {
var response = tracedClient.createChatCompletion(params);
}
// Metadata
try (var ignored = ContextAttributes.usingMetadata(Map.of(
"environment", "production",
"version", "2.1"
))) {
var response = tracedClient.createChatCompletion(params);
}
// Tags
try (var ignored = ContextAttributes.usingTags(List.of("rag-pipeline", "v2"))) {
var response = tracedClient.createChatCompletion(params);
}
// Nest them for combined context
try (var s = ContextAttributes.usingSession("session-abc");
var u = ContextAttributes.usingUser("user-456");
var m = ContextAttributes.usingMetadata(Map.of("env", "prod"))) {
var response = tracedClient.createChatCompletion(params);
}
// Read current attributes
Map<String, Object> attrs = ContextAttributes.getAttributesFromContext();
C# uses IDisposable scopes with using statements:
using FIInstrumentation.Context;
// Session and user tracking
using (ContextAttributes.UsingSession("session-abc-123"))
using (ContextAttributes.UsingUser("user-456"))
{
tracer.Llm("llm-call", span =>
{
// span automatically gets session.id and user.id
span.SetInput("Hello!");
});
}
// Metadata and tags
using (ContextAttributes.UsingMetadata(new Dictionary<string, object>
{
["environment"] = "production",
["version"] = "2.1"
}))
using (ContextAttributes.UsingTags(new List<string> { "rag-pipeline", "v2" }))
{
tracer.Chain("pipeline", span => { /* ... */ });
}
// Prompt template tracking
using (ContextAttributes.UsingPromptTemplate(
template: "Answer {question} using {context}",
label: "production",
version: "v1.2",
variables: new Dictionary<string, object>
{
["question"] = "...",
["context"] = "..."
}
))
{
tracer.Llm("templated-call", span => { /* ... */ });
}
// Combined - set everything at once
using (ContextAttributes.UsingAttributes(
sessionId: "session-abc",
userId: "user-456",
metadata: new Dictionary<string, object> { ["env"] = "prod" },
tags: new List<string> { "rag", "v2" }
))
{
tracer.Chain("full-context", span => { /* ... */ });
}
Suppress Tracing
Temporarily disable tracing for a block of code. Useful for health checks, internal calls, or operations you don’t want in your traces. Available in Python and C# only - Java and TypeScript don’t have this API.
from fi_instrumentation import suppress_tracing
with suppress_tracing():
    # Nothing in this block is traced
    result = client.chat.completions.create(...)
using FIInstrumentation.Context;
using (new SuppressTracing())
{
// Nothing in this block is traced
}
TraceConfig
Control what data gets captured. Useful for privacy compliance, reducing payload size, or masking sensitive data.
from fi_instrumentation import TraceConfig
config = TraceConfig(
hide_inputs=True,
hide_outputs=True,
pii_redaction=True,
)
# Pass to instrumentors
OpenAIInstrumentor().instrument(
tracer_provider=trace_provider,
config=config,
)
TraceAI.init(TraceConfig.builder()
.baseUrl("https://api.futureagi.com")
.apiKey("your-key")
.projectName("my-project")
.hideInputs(true)
.hideOutputs(true)
.hideInputMessages(true)
.hideOutputMessages(true)
.build()
);
In TypeScript, TraceConfig is passed per-instrumentor, not to register():
import { OpenAIInstrumentation } from "@traceai/openai";
import { registerInstrumentations } from "@opentelemetry/instrumentation";
registerInstrumentations({
tracerProvider,
instrumentations: [
new OpenAIInstrumentation({
traceConfig: {
hideInputs: true,
hideOutputs: true,
hideInputImages: true,
hideEmbeddingVectors: true,
base64ImageMaxLength: 16000,
piiRedaction: true,
},
}),
],
});
var tracer = TraceAI.Register(opts =>
{
opts.ProjectName = "my-project";
opts.TraceConfig = TraceConfig.Builder()
.HideInputs(true)
.HideOutputs(true)
.HideInputImages(true)
.HideEmbeddingVectors(true)
.Base64ImageMaxLength(16000)
.Build();
});
| Field | Type | Default | What it hides |
|---|---|---|---|
hide_inputs | bool | False | All input values and messages |
hide_outputs | bool | False | All output values and messages |
hide_input_messages | bool | False | Input messages only |
hide_output_messages | bool | False | Output messages only |
hide_input_images | bool | False | Images in inputs |
hide_input_text | bool | False | Text in input messages |
hide_output_text | bool | False | Text in output messages |
hide_embedding_vectors | bool | False | Embedding vectors |
hide_llm_invocation_parameters | bool | False | Model parameters (temperature, etc.) |
base64_image_max_length | int | 32000 | Truncate base64 images beyond this length |
pii_redaction | bool | False | Automatically mask PII (Python only) |
Each field maps to an environment variable with the FI_ prefix (e.g. hide_inputs -> FI_HIDE_INPUTS).
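The mapping is mechanical: take the field name, uppercase it, and add the FI_ prefix. The helpers below are a hypothetical sketch of that convention, not SDK functions:

```python
import os

def to_env_var(field_name: str) -> str:
    """Map a TraceConfig field name to its FI_-prefixed environment variable."""
    return "FI_" + field_name.upper()

def env_bool(field_name: str, default: bool = False) -> bool:
    """Read a boolean TraceConfig field from the environment, with a default."""
    raw = os.environ.get(to_env_var(field_name))
    if raw is None:
        return default
    return raw.strip().lower() in ("1", "true", "yes")
```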
PII Redaction (Python)
When pii_redaction=True, the SDK automatically detects and masks 6 types of personally identifiable information:
| PII Type | Pattern | Replaced with |
|---|---|---|
| Email addresses | user@example.com | <EMAIL_ADDRESS> |
| Social Security Numbers | 123-45-6789 | <SSN> |
| Credit card numbers | 4111-1111-1111-1111 | <CREDIT_CARD> |
| API keys | sk_live_..., pk_test_... | <API_KEY> |
| IP addresses (IPv4) | 192.168.1.1 | <IP_ADDRESS> |
| Phone numbers | +1-555-123-4567 | <PHONE_NUMBER> |
# Enable via code
config = TraceConfig(pii_redaction=True)
# Or via environment variable
# export FI_PII_REDACTION=true
# Direct usage
from fi_instrumentation.instrumentation.pii_redaction import redact_pii_in_string
redacted = redact_pii_in_string("Email me at test@example.com")
# "Email me at <EMAIL_ADDRESS>"
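Conceptually, this style of redaction is a series of ordered regex substitutions over the span payload. The sketch below is a simplified approximation for illustration; it is not the SDK's actual pattern set:

```python
import re

# Order matters: run SSN and credit-card patterns before phone numbers,
# since all three contain hyphen-separated digit groups.
PII_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL_ADDRESS>"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "<SSN>"),
    (re.compile(r"\b(?:\d{4}[- ]?){3}\d{4}\b"), "<CREDIT_CARD>"),
    (re.compile(r"\b[sp]k_(?:live|test)_\w+\b"), "<API_KEY>"),
    (re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"), "<IP_ADDRESS>"),
    (re.compile(r"\+?\d{1,2}[- ]\d{3}[- ]\d{3}[- ]\d{4}"), "<PHONE_NUMBER>"),
]

def redact(text: str) -> str:
    """Replace each matched PII fragment with its placeholder token."""
    for pattern, placeholder in PII_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text
```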
EvalTags - Attach Evaluations to Traces
EvalTags let you configure automatic evaluations that run server-side on your traced spans. Attach them during register() and the platform scores spans as they arrive.
from fi_instrumentation import register
from fi_instrumentation.fi_types import (
ProjectType, EvalTag, EvalTagType,
EvalSpanKind, EvalName, ModelChoices
)
trace_provider = register(
project_name="my-project",
project_type=ProjectType.EXPERIMENT,
project_version_name="v1.0",
eval_tags=[
EvalTag(
type=EvalTagType.OBSERVATION_SPAN,
value=EvalSpanKind.LLM,
eval_name=EvalName.GROUNDEDNESS,
model=ModelChoices.TURING_FLASH,
),
EvalTag(
type=EvalTagType.OBSERVATION_SPAN,
value=EvalSpanKind.LLM,
eval_name=EvalName.TOXICITY,
model=ModelChoices.TURING_FLASH,
),
],
)
import {
register, ProjectType, EvalTag,
EvalTagType, EvalSpanKind, EvalName, ModelChoices
} from "@traceai/fi-core";
const tracerProvider = register({
projectName: "my-project",
projectType: ProjectType.EXPERIMENT,
projectVersionName: "v1.0",
evalTags: [
await EvalTag.create({
type: EvalTagType.OBSERVATION_SPAN,
value: EvalSpanKind.LLM,
eval_name: EvalName.GROUNDEDNESS,
model: ModelChoices.TURING_FLASH,
}),
await EvalTag.create({
type: EvalTagType.OBSERVATION_SPAN,
value: EvalSpanKind.LLM,
eval_name: EvalName.TOXICITY,
model: ModelChoices.TURING_FLASH,
}),
],
});
Note
EvalTag.create() is async in TypeScript because it validates the eval configuration with the server.
using FIInstrumentation;
using FIInstrumentation.Types;
var tracer = TraceAI.Register(opts =>
{
opts.ProjectName = "my-project";
opts.ProjectType = ProjectType.Experiment;
opts.ProjectVersionName = "v1.0";
opts.EvalTags = new List<EvalTag>
{
new EvalTag(EvalSpanKind.Llm, EvalName.Groundedness)
{
Model = ModelChoices.TuringFlash,
},
new EvalTag(EvalSpanKind.Llm, EvalName.Toxicity)
{
Model = ModelChoices.TuringFlash,
},
};
});
EvalSpanKind
Which span types to evaluate:
| Value | Description |
|---|---|
LLM | Language model calls |
RETRIEVER | Document retrieval spans |
TOOL | Tool/function calls |
AGENT | Agent spans |
EMBEDDING | Embedding generation |
RERANKER | Re-ranking operations |
ModelChoices
Which evaluation model to use:
| Value | Description |
|---|---|
TURING_FLASH | Fast evaluation model |
TURING_SMALL | Small evaluation model |
TURING_LARGE | High-accuracy evaluation model |
PROTECT | Safety-focused model |
PROTECT_FLASH | Fast safety model |
Note
EvalTags only work with ProjectType.EXPERIMENT. For production monitoring without evals, use ProjectType.OBSERVE.
Instrumentors
Each framework has its own instrumentor package. Install the one for your framework and call .instrument().
# Pattern is the same for every framework:
from traceai_<framework> import <Framework>Instrumentor
<Framework>Instrumentor().instrument(tracer_provider=trace_provider)
| Package | Framework | Instrumentor class |
|---|---|---|
traceai-openai | OpenAI | OpenAIInstrumentor |
traceai-anthropic | Anthropic | AnthropicInstrumentor |
traceai-google-genai | Google GenAI | GoogleGenAIInstrumentor |
traceai-vertexai | Vertex AI | VertexAIInstrumentor |
traceai-bedrock | AWS Bedrock | BedrockInstrumentor |
traceai-mistralai | Mistral AI | MistralAIInstrumentor |
traceai-groq | Groq | GroqInstrumentor |
traceai-litellm | LiteLLM | LiteLLMInstrumentor |
traceai-cohere | Cohere | CohereInstrumentor |
traceai-ollama | Ollama | OllamaInstrumentor |
traceai-deepseek | DeepSeek | DeepSeekInstrumentor |
traceai-together | Together AI | TogetherInstrumentor |
traceai-fireworks | Fireworks AI | FireworksInstrumentor |
traceai-cerebras | Cerebras | CerebrasInstrumentor |
traceai-xai | xAI / Grok | XAIInstrumentor |
traceai-vllm | vLLM | VLLMInstrumentor |
traceai-portkey | Portkey | PortkeyInstrumentor |
traceai-huggingface | HuggingFace | HuggingFaceInstrumentor |
| Package | Framework | Instrumentor class |
|---|---|---|
traceai-langchain | LangChain / LangGraph | LangChainInstrumentor |
traceai-llamaindex | LlamaIndex | LlamaIndexInstrumentor |
traceai-crewai | CrewAI | CrewAIInstrumentor |
traceai-openai-agents | OpenAI Agents SDK | OpenAIAgentsInstrumentor |
traceai-autogen | Microsoft AutoGen | AutoGenInstrumentor |
traceai-smolagents | HuggingFace SmolAgents | SmolAgentsInstrumentor |
traceai-google-adk | Google Agent Dev Kit | GoogleADKInstrumentor |
traceai-claude-agent-sdk | Claude Agent SDK | ClaudeAgentSDKInstrumentor |
traceai-pydantic-ai | Pydantic AI | PydanticAIInstrumentor |
traceai-strands | AWS Strands Agents | StrandsInstrumentor |
traceai-agno | Agno | AgnoInstrumentor |
traceai-beeai | IBM BeeAI | BeeAIInstrumentor |
traceai-haystack | Haystack | HaystackInstrumentor |
traceai-dspy | DSPy | DSPyInstrumentor |
traceai-guardrails | Guardrails AI | GuardrailsInstrumentor |
traceai-instructor | Instructor | InstructorInstrumentor |
traceai-mcp | Model Context Protocol | MCPInstrumentor |
| Package | Framework | Instrumentor class |
|---|---|---|
traceai-pipecat | Pipecat | PipecatInstrumentor |
traceai-livekit | LiveKit | LiveKitInstrumentor |
| Package | Framework | Instrumentor class |
|---|---|---|
traceai-pinecone | Pinecone | PineconeInstrumentor |
traceai-chromadb | ChromaDB | ChromaDBInstrumentor |
traceai-qdrant | Qdrant | QdrantInstrumentor |
traceai-weaviate | Weaviate | WeaviateInstrumentor |
traceai-milvus | Milvus | MilvusInstrumentor |
traceai-lancedb | LanceDB | LanceDBInstrumentor |
traceai-mongodb | MongoDB | MongoDBInstrumentor |
traceai-pgvector | pgvector | PgVectorInstrumentor |
traceai-redis | Redis | RedisInstrumentor |
Cleanup
To remove instrumentation (useful in tests or serverless cleanup):
OpenAIInstrumentor().uninstrument()  # Python
TraceAI.shutdown();  // Java - flushes remaining spans and shuts down
TraceAI.Shutdown();  // C# - flushes remaining spans and shuts down
For per-framework setup guides with full examples, see the Auto-Instrumentation docs.
Other Languages
The tables above show Python packages. TypeScript, Java, and C# have their own instrumentation libraries:
TypeScript packages follow the @traceai/<framework> pattern. All use OpenTelemetry’s registerInstrumentations().
import { registerInstrumentations } from "@opentelemetry/instrumentation";
import { OpenAIInstrumentation } from "@traceai/openai";
import { AnthropicInstrumentation } from "@traceai/anthropic";
import { LangChainInstrumentation } from "@traceai/langchain";
import { PineconeInstrumentation } from "@traceai/pinecone";
registerInstrumentations({
tracerProvider,
instrumentations: [
new OpenAIInstrumentation(),
new AnthropicInstrumentation(),
new LangChainInstrumentation(),
new PineconeInstrumentation(),
],
});
40+ packages available including all LLM providers, frameworks, and vector DBs from the Python list, plus @traceai/vercel for Vercel/Next.js and @traceai/mastra.
Java uses the Traced* wrapper pattern. Each integration wraps the native client:
// LLM Providers
TracedOpenAIClient traced = new TracedOpenAIClient(openAIClient);
TracedAnthropicClient traced = new TracedAnthropicClient(anthropicClient);
TracedBedrockRuntimeClient traced = new TracedBedrockRuntimeClient(bedrockClient);
TracedGenerativeModel traced = new TracedGenerativeModel(model); // Google GenAI
TracedOllamaAPI traced = new TracedOllamaAPI(ollamaAPI);
TracedCohereClient traced = new TracedCohereClient(cohereClient);
TracedWatsonxAI traced = new TracedWatsonxAI(watsonxClient);
// Vector Databases
TracedPineconeIndex traced = new TracedPineconeIndex(index, "my-index");
TracedQdrantClient traced = new TracedQdrantClient(qdrantClient);
TracedMilvusClient traced = new TracedMilvusClient(milvusClient);
TracedChromaCollection traced = new TracedChromaCollection(collection);
TracedMongoVectorSearch traced = new TracedMongoVectorSearch(collection);
TracedRedisVectorSearch traced = new TracedRedisVectorSearch(jedis);
TracedSearchClient traced = new TracedSearchClient(searchClient); // Azure Search
TracedPgVectorStore traced = new TracedPgVectorStore(connection);
TracedElasticsearchClient traced = new TracedElasticsearchClient(esClient);
// Framework integrations
TracedChatLanguageModel traced = new TracedChatLanguageModel(model, tracer, "openai"); // LangChain4j
TracedChatModel traced = new TracedChatModel(chatModel, tracer, "openai"); // Spring AI
TracedKernel traced = new TracedKernel(kernel, tracer); // Semantic Kernel
Maven coordinates: com.github.future-agi.traceAI:traceai-java-<provider>:v1.0.0
C# uses manual tracing via FITracer. No auto-instrumentation wrappers yet - use the convenience methods (Llm(), Chain(), Agent(), Tool()) to create spans around your calls.
// Wrap any LLM call
var response = tracer.Llm("openai-call", span =>
{
span.SetAttribute(SemanticConventions.GenAiRequestModel, "gpt-4o");
span.SetInput(prompt);
var result = CallOpenAI(prompt);
span.SetOutput(result);
span.SetTokenCounts(inputTokens, outputTokens, totalTokens);
return result;
});
Install: dotnet add package fi-instrumentation-otel
Environment Variables
All languages read from the same set of environment variables:
| Variable | Purpose | Default |
|---|---|---|
FI_API_KEY | Authentication | required |
FI_SECRET_KEY | Authentication | required |
FI_BASE_URL | HTTP collector endpoint | https://api.futureagi.com |
FI_GRPC_URL | gRPC collector endpoint | https://grpc.futureagi.com |
FI_PROJECT_NAME | Default project name | None |
FI_PROJECT_VERSION_NAME | Default version | None |
FI_HIDE_INPUTS | Redact inputs | False |
FI_HIDE_OUTPUTS | Redact outputs | False |
FI_HIDE_INPUT_MESSAGES | Redact input messages | False |
FI_HIDE_OUTPUT_MESSAGES | Redact output messages | False |
FI_HIDE_INPUT_IMAGES | Redact input images | False |
FI_HIDE_INPUT_TEXT | Redact input text | False |
FI_HIDE_OUTPUT_TEXT | Redact output text | False |
FI_HIDE_EMBEDDING_VECTORS | Redact embedding vectors | False |
FI_HIDE_LLM_INVOCATION_PARAMETERS | Redact model parameters | False |
FI_BASE64_IMAGE_MAX_LENGTH | Max base64 image chars | 32000 |
FI_PII_REDACTION | Auto-mask PII (Python) | False |
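Since FI_API_KEY and FI_SECRET_KEY are required, it can help to fail fast at startup rather than discover missing credentials when the first span exports. A minimal sketch (hypothetical helper, not part of any SDK):

```python
import os

REQUIRED = ("FI_API_KEY", "FI_SECRET_KEY")
DEFAULTS = {"FI_BASE_URL": "https://api.futureagi.com"}

def load_fi_env() -> dict:
    """Collect FI_* settings, raising early if required keys are absent."""
    missing = [name for name in REQUIRED if not os.environ.get(name)]
    if missing:
        raise RuntimeError(
            f"Missing required environment variables: {', '.join(missing)}"
        )
    settings = {name: os.environ[name] for name in REQUIRED}
    for name, default in DEFAULTS.items():
        settings[name] = os.environ.get(name, default)
    return settings
```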
Related
Tracing Guide
Concepts, manual tracing, and per-framework setup guides.
Auto-Instrumentation
Setup guides for all 45+ supported frameworks.
Evaluations
Score traced outputs with 76+ metrics.
Datasets
Store test data and run batch evaluations.
Protect
Guard inputs and outputs with safety rules.
Simulation Testing
Test voice AI agents with simulated personas.