Java SDK
Set up TraceAI for Java applications. Initialize the tracer, configure credentials, and instrument your LLM clients, vector databases, and frameworks.
- `TraceAI.init()` or `TraceAI.initFromEnvironment()` to start
- Every integration is a `Traced<X>` wrapper around your existing client
- Spans export to FutureAGI via OTLP HTTP, batched every 5 seconds
- Thread-local context (session, user, tags) applied to all spans in scope
- Distributed via JitPack (Maven/Gradle)
How it works
The Java SDK wraps your existing clients with Traced* classes. You initialize TraceAI once, then wrap each client you want to trace. The wrappers delegate every call to the original client and create OpenTelemetry spans around it - capturing inputs, outputs, token counts, latency, and errors.
```java
// 1. Initialize once
TraceAI.init(TraceConfig.builder()
    .baseUrl("https://api.futureagi.com")
    .apiKey(System.getenv("FI_API_KEY"))
    .secretKey(System.getenv("FI_SECRET_KEY"))
    .projectName("my-project")
    .build());

// 2. Wrap your client
OpenAIClient client = OpenAIOkHttpClient.builder()
    .apiKey(System.getenv("OPENAI_API_KEY"))
    .build();
TracedOpenAIClient traced = new TracedOpenAIClient(client);

// 3. Use it normally - spans are created automatically
ChatCompletion response = traced.createChatCompletion(params);
```
Installation
All Java SDK packages are distributed via JitPack. Add the JitPack repository to your build:
Maven:

```xml
<repositories>
  <repository>
    <id>jitpack.io</id>
    <url>https://jitpack.io</url>
  </repository>
</repositories>
```

Gradle:

```groovy
repositories {
    maven { url 'https://jitpack.io' }
}
```

Then add the core dependency plus whichever integration you need:
Maven:

```xml
<!-- Core (required) -->
<dependency>
  <groupId>com.github.future-agi.traceAI</groupId>
  <artifactId>traceai-java-core</artifactId>
  <version>main-SNAPSHOT</version>
</dependency>

<!-- Pick your integration, e.g. OpenAI -->
<dependency>
  <groupId>com.github.future-agi.traceAI</groupId>
  <artifactId>traceai-java-openai</artifactId>
  <version>main-SNAPSHOT</version>
</dependency>
```

Gradle:

```groovy
// Core (required)
implementation 'com.github.future-agi.traceAI:traceai-java-core:main-SNAPSHOT'

// Pick your integration, e.g. OpenAI
implementation 'com.github.future-agi.traceAI:traceai-java-openai:main-SNAPSHOT'
```

Requirements: Java 17+
Initialization
From code
```java
import ai.traceai.TraceAI;
import ai.traceai.TraceConfig;

TraceAI.init(TraceConfig.builder()
    .baseUrl("https://api.futureagi.com")
    .apiKey("your-fi-api-key")
    .secretKey("your-fi-secret-key")
    .projectName("my-project")
    .build());
```
From environment variables
```java
// Reads FI_BASE_URL, FI_API_KEY, FI_SECRET_KEY, FI_PROJECT_NAME
TraceAI.initFromEnvironment();
```
The builder falls back to environment variables for any field you don’t set explicitly. So you can mix both:
```java
TraceAI.init(TraceConfig.builder()
    .projectName("my-project")       // explicit
    .enableConsoleExporter(true)     // explicit
    // apiKey, secretKey, baseUrl read from env vars
    .build());
```
Getting the tracer
After initialization, get the FITracer instance to pass to wrappers:
```java
import ai.traceai.FITracer;

FITracer tracer = TraceAI.getTracer();
```
If you call getTracer() before init(), it throws IllegalStateException.
TraceConfig reference
| Builder method | Type | Default | What it does |
|---|---|---|---|
| `baseUrl(String)` | String | `$FI_BASE_URL` | FutureAGI OTLP endpoint |
| `apiKey(String)` | String | `$FI_API_KEY` | API key for authentication |
| `secretKey(String)` | String | `$FI_SECRET_KEY` | Secret key for authentication |
| `projectName(String)` | String | `$FI_PROJECT_NAME` | Project name in FutureAGI dashboard |
| `serviceName(String)` | String | projectName | OpenTelemetry service.name resource attribute |
| `hideInputs(boolean)` | boolean | false | Suppress all input values from spans |
| `hideOutputs(boolean)` | boolean | false | Suppress all output values from spans |
| `hideInputMessages(boolean)` | boolean | false | Suppress structured input messages |
| `hideOutputMessages(boolean)` | boolean | false | Suppress structured output messages |
| `enableConsoleExporter(boolean)` | boolean | false | Print spans to console for debugging |
| `batchSize(int)` | int | 512 | Spans per export batch |
| `exportIntervalMs(long)` | long | 5000 | How often to flush spans (ms) |
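Several of these options compose naturally. As a sketch of a privacy-conscious debug configuration (the specific values are illustrative, not recommendations):

```java
// Sketch: combining privacy and batching options from the table above.
// Credentials and baseUrl still fall back to environment variables.
TraceAI.init(TraceConfig.builder()
    .projectName("my-project")
    .hideInputs(true)            // drop input.value from every span
    .hideOutputMessages(true)    // drop structured output messages
    .enableConsoleExporter(true) // also print spans to the console
    .batchSize(256)              // smaller batches than the 512 default
    .exportIntervalMs(2000)      // flush every 2 s instead of 5 s
    .build());
```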
FITracer methods
FITracer is what the Traced* wrappers use internally. You can also use it for custom spans:
```java
import ai.traceai.FISpanKind;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.context.Scope;

FITracer tracer = TraceAI.getTracer();

// Manual span
Span span = tracer.startSpan("my-operation", FISpanKind.CHAIN);
try (Scope scope = span.makeCurrent()) {
    tracer.setInputValue(span, "input text");
    // ... do work ...
    tracer.setOutputValue(span, "output text");
    span.setStatus(io.opentelemetry.api.trace.StatusCode.OK);
} catch (Exception e) {
    tracer.setError(span, e);
    throw e;
} finally {
    span.end();
}
```
Or use the trace() helper for less boilerplate:
```java
String result = tracer.trace("my-operation", FISpanKind.CHAIN, () -> {
    return doSomething();
});
```
Available methods
| Method | What it does |
|---|---|
| `startSpan(name, kind)` | Creates and starts a new span |
| `startSpan(name, kind, parentContext)` | Creates a child span under a specific parent |
| `setInputValue(span, value)` | Sets input.value attribute (respects hideInputs) |
| `setOutputValue(span, value)` | Sets output.value attribute (respects hideOutputs) |
| `setRawInput(span, object)` | Sets fi.raw_input as serialized JSON |
| `setRawOutput(span, object)` | Sets fi.raw_output as serialized JSON |
| `setInputMessages(span, messages)` | Sets structured input messages (role + content) |
| `setOutputMessages(span, messages)` | Sets structured output messages (role + content) |
| `setTokenCounts(span, prompt, completion, total)` | Sets token count attributes |
| `setError(span, throwable)` | Records exception and sets ERROR status |
| `trace(name, kind, supplier)` | Executes operation in a span, returns result |
| `trace(name, kind, runnable)` | Executes void operation in a span |
| `message(role, content)` | Helper to build message maps |
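As a sketch of the message and token-count helpers on a manual LLM span (this assumes message() is called on the tracer instance, and the token counts are placeholders for values you would read from your provider's response):

```java
import ai.traceai.FISpanKind;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.context.Scope;
import java.util.List;

Span span = tracer.startSpan("chat-turn", FISpanKind.LLM);
try (Scope scope = span.makeCurrent()) {
    // Structured input messages (role + content maps via the message() helper)
    tracer.setInputMessages(span, List.of(
        tracer.message("system", "You are a helpful assistant."),
        tracer.message("user", "Summarize this document.")));
    // ... call your LLM client here ...
    tracer.setOutputMessages(span, List.of(
        tracer.message("assistant", "Here is the summary: ...")));
    tracer.setTokenCounts(span, 42, 128, 170); // prompt, completion, total
} finally {
    span.end();
}
```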
FISpanKind
Every span has a kind that identifies the type of AI operation:
| Kind | Used for |
|---|---|
| LLM | Chat completions, text generation |
| EMBEDDING | Text-to-vector conversions |
| RETRIEVER | Vector search, document retrieval |
| VECTOR_DB | Vector store writes (upsert, delete) |
| RERANKER | Reranking retrieved documents |
| CHAIN | Sequential pipeline steps |
| AGENT | Autonomous agent operations |
| TOOL | LLM tool/function calls |
| GUARDRAIL | Safety and validation checks |
| WORKFLOW | Custom pipeline steps |
| EVALUATOR | Quality scoring |
| CONVERSATION | Voice and conversational AI |
| UNKNOWN | Unspecified |
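For example, a minimal RAG pipeline might tag each stage with the kind that matches it, using the trace() helper; retrieveDocs and generateAnswer are hypothetical helpers standing in for your own code:

```java
import ai.traceai.FISpanKind;
import java.util.List;

// CHAIN wraps the whole pipeline; RETRIEVER and LLM mark the nested stages.
String answer = tracer.trace("rag-pipeline", FISpanKind.CHAIN, () -> {
    List<String> docs = tracer.trace("vector-search", FISpanKind.RETRIEVER,
        () -> retrieveDocs("What is our refund policy?"));
    return tracer.trace("generate-answer", FISpanKind.LLM,
        () -> generateAnswer(docs));
});
```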
Context attributes
Attach session IDs, user IDs, metadata, and tags to all spans created within a scope using thread-local context:
```java
import ai.traceai.ContextAttributes;
import java.util.List;
import java.util.Map;

try (var session = ContextAttributes.usingSession("session-123");
     var user = ContextAttributes.usingUser("user-456");
     var meta = ContextAttributes.usingMetadata(Map.of("env", "prod", "version", "2.1"));
     var tags = ContextAttributes.usingTags(List.of("rag", "production"))) {
    // Every span created here gets session.id, user.id, metadata, and tags
    TracedOpenAIClient traced = new TracedOpenAIClient(client);
    traced.createChatCompletion(params);
} catch (Exception e) {
    throw new RuntimeException(e);
}
// Attributes are cleared when the try block exits
```
// Attributes are cleared when the try block exits
These are thread-local, so they work correctly in multi-threaded applications. Each thread maintains its own context.
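For instance, in a thread pool each task can scope its own session without interfering with its neighbors; handleRequest here is a hypothetical per-request entry point:

```java
import ai.traceai.ContextAttributes;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

ExecutorService pool = Executors.newFixedThreadPool(4);
for (String sessionId : List.of("session-1", "session-2", "session-3")) {
    pool.submit(() -> {
        // Context is thread-local: this session.id applies only to spans
        // created on this worker thread, inside this try block.
        try (var session = ContextAttributes.usingSession(sessionId)) {
            handleRequest(sessionId);
        }
    });
}
pool.shutdown();
```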
Shutdown
TraceAI registers a JVM shutdown hook that flushes pending spans and shuts down the exporter. For most applications, you don’t need to do anything.
If you need to flush spans before the JVM exits (e.g., in a test or short-lived CLI tool):
```java
TraceAI.shutdown();
```
This flushes all pending spans (with a 10-second timeout) and resets the tracer. After calling shutdown(), you can call init() again if needed.
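A short-lived CLI might pair initialization and shutdown explicitly, as a sketch (runJob is a hypothetical unit of work):

```java
public final class ExportJob {
    public static void main(String[] args) {
        // Reads FI_* credentials from the environment
        TraceAI.initFromEnvironment();
        try {
            runJob(args); // hypothetical: the actual work to trace
        } finally {
            // Flush pending spans (up to the 10-second timeout) before exit,
            // rather than relying on the JVM shutdown hook.
            TraceAI.shutdown();
        }
    }
}
```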
Available integrations
Spring Boot
Auto-configuration via application.yml. No manual TraceAI.init() needed.
OpenAI
Chat completions, embeddings, streaming.
Anthropic
Messages API with reflection-based version compatibility.
AWS Bedrock
InvokeModel (raw JSON) and Converse (typed API).
Cohere
Chat, embeddings, and reranking.
Pinecone
Query, upsert, delete, fetch with namespace support.
More LLM Providers
Google GenAI, Vertex AI, Azure OpenAI, Ollama, Watsonx.
Vector Databases
Qdrant, Milvus, ChromaDB, Weaviate, MongoDB, Redis, pgvector, Azure AI Search, Elasticsearch.
Frameworks
LangChain4j and Semantic Kernel.