Spring Boot

Add tracing to Spring Boot apps that use Spring AI. Configure application.yml, wrap your ChatModel and EmbeddingModel, and traces are collected automatically.

TL;DR
  • traceai-spring-boot-starter auto-configures FITracer from application.yml
  • Wrap ChatModel with TracedChatModel, EmbeddingModel with TracedEmbeddingModel
  • Captures messages, token counts, model info, latency, and errors
  • Streaming support built in - works with Flux<ChatResponse>
  • Distributed via JitPack (no Maven Central publish yet)

How it works

traceai-spring-boot-starter is the Spring Boot auto-configuration for TraceAI. When you add it to your project:

  1. TraceAIAutoConfiguration reads your traceai.* properties and creates an FITracer bean
  2. You wrap your Spring AI models with TracedChatModel or TracedEmbeddingModel
  3. Every call and stream through those wrappers creates an OpenTelemetry span with LLM metadata attached

The wrappers delegate to the underlying model and add span instrumentation around each call. You pick which models get traced by wrapping them explicitly - the starter doesn’t auto-wrap beans because that could break apps with multiple providers or custom bean ordering.
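The delegation idea can be shown with a minimal, self-contained sketch. This is not the actual TraceAI implementation; SimpleChatModel, TracedModel, and the recordedSpans list are hypothetical stand-ins for Spring AI's ChatModel, the TracedChatModel wrapper, and OpenTelemetry span export:

```java
import java.util.ArrayList;
import java.util.List;

public class WrapperSketch {

    // Stand-in for Spring AI's ChatModel interface.
    public interface SimpleChatModel {
        String call(String prompt);
    }

    // Stand-in for exported spans; the real wrapper emits OpenTelemetry spans.
    public static final List<String> recordedSpans = new ArrayList<>();

    // Implements the same interface as the model it wraps, delegates the call,
    // and records metadata around it - the same shape as TracedChatModel.
    public static class TracedModel implements SimpleChatModel {
        private final SimpleChatModel delegate;
        private final String provider;

        public TracedModel(SimpleChatModel delegate, String provider) {
            this.delegate = delegate;
            this.provider = provider;
        }

        @Override
        public String call(String prompt) {
            long start = System.nanoTime();
            String output = delegate.call(prompt);     // delegate to the real model
            long elapsedNs = System.nanoTime() - start;
            // The real wrapper attaches llm.provider, input.value, output.value,
            // token counts, and latency to a span instead of a string.
            recordedSpans.add(provider + ": " + prompt + " -> " + output
                    + " (" + elapsedNs + "ns)");
            return output;
        }
    }
}
```

Because the wrapper implements the same interface, callers cannot tell a traced model from an untraced one, which is why wrapping is opt-in and safe with multiple providers.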

1. Add dependencies

Add the JitPack repository and the starter to your pom.xml. This assumes you’re using the Spring Boot parent POM:

<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>3.2.1</version>
</parent>

<properties>
    <java.version>17</java.version>
    <spring-ai.version>1.0.0-M4</spring-ai.version>
</properties>

<repositories>
    <!-- Spring Milestones (for spring-ai milestone releases) -->
    <repository>
        <id>spring-milestones</id>
        <url>https://repo.spring.io/milestone</url>
    </repository>

    <!-- JitPack - pulls TraceAI directly from GitHub -->
    <repository>
        <id>jitpack.io</id>
        <url>https://jitpack.io</url>
    </repository>
</repositories>

<dependencies>
    <!-- Spring Boot Web -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>

    <!-- TraceAI Spring Boot Starter -->
    <dependency>
        <groupId>com.github.future-agi.traceAI</groupId>
        <artifactId>traceai-spring-boot-starter</artifactId>
        <version>main-SNAPSHOT</version>
    </dependency>

    <!-- Spring AI - pick your provider -->
    <dependency>
        <groupId>org.springframework.ai</groupId>
        <artifactId>spring-ai-openai-spring-boot-starter</artifactId>
        <version>${spring-ai.version}</version>
    </dependency>
</dependencies>

For Gradle:

ext {
    springAiVersion = '1.0.0-M4'
}

repositories {
    maven { url 'https://repo.spring.io/milestone' }
    maven { url 'https://jitpack.io' }
}

dependencies {
    implementation 'org.springframework.boot:spring-boot-starter-web'
    implementation 'com.github.future-agi.traceAI:traceai-spring-boot-starter:main-SNAPSHOT'
    implementation "org.springframework.ai:spring-ai-openai-spring-boot-starter:${springAiVersion}"
}

Requirements: Java 17+, Spring Boot 3.2+, Spring AI 1.0.0-M4+


2. Configure application.yml

spring:
  application:
    name: my-spring-ai-app
  ai:
    openai:
      api-key: ${OPENAI_API_KEY}
      chat:
        options:
          model: gpt-4o-mini
          temperature: 0.7

traceai:
  enabled: true
  base-url: https://api.futureagi.com
  api-key: ${FI_API_KEY}
  secret-key: ${FI_SECRET_KEY}
  project-name: my-spring-ai-app

All configuration properties

| Property | Type | Default | What it does |
| --- | --- | --- | --- |
| traceai.enabled | boolean | true | Disables all TraceAI instrumentation when set to false |
| traceai.base-url | string | - | FutureAGI API endpoint |
| traceai.api-key | string | - | Your FI_API_KEY |
| traceai.secret-key | string | - | Your FI_SECRET_KEY |
| traceai.project-name | string | - | Project name in the FutureAGI dashboard |
| traceai.service-name | string | spring.application.name | Service name in traces (falls back to the app name) |
| traceai.hide-inputs | boolean | false | Redact all input values from spans |
| traceai.hide-outputs | boolean | false | Redact all output values from spans |
| traceai.hide-input-messages | boolean | false | Redact input messages specifically |
| traceai.hide-output-messages | boolean | false | Redact output messages specifically |
| traceai.enable-console-exporter | boolean | false | Print spans to the console (useful for debugging) |
| traceai.batch-size | int | 512 | Spans per export batch |
| traceai.export-interval-ms | long | 5000 | How often to flush spans (ms) |
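Conceptually, the hide-inputs and hide-outputs flags keep the span attribute but mask its value. A minimal sketch of that idea, with a hypothetical maskIfHidden helper and "__REDACTED__" placeholder (the actual placeholder string used by TraceAI may differ):

```java
public class RedactionSketch {

    // When the hide flag is set, keep the attribute but replace its value
    // with a fixed placeholder so no prompt/response text leaves the app.
    public static String maskIfHidden(String value, boolean hide) {
        return hide ? "__REDACTED__" : value;
    }
}
```

With traceai.hide-inputs: true, a span would carry input.value as the placeholder instead of the full prompt text, while model names, token counts, and latency are still recorded.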

3. Wrap your models

The starter auto-creates the FITracer bean. You just need to wrap your Spring AI models.

Chat model

import ai.traceai.FITracer;
import ai.traceai.spring.TracedChatModel;
import org.springframework.ai.chat.model.ChatModel;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class TraceAIConfig {

    @Bean
    public TracedChatModel tracedChatModel(ChatModel chatModel, FITracer tracer) {
        // "openai" = provider name, used in span attributes
        return new TracedChatModel(chatModel, tracer, "openai");
    }
}

TracedChatModel implements ChatModel, so you can inject it anywhere you’d use a regular ChatModel.

Embedding model

Add this to the same @Configuration class:

import ai.traceai.spring.TracedEmbeddingModel;
import org.springframework.ai.embedding.EmbeddingModel;

@Bean
public TracedEmbeddingModel tracedEmbeddingModel(EmbeddingModel embeddingModel, FITracer tracer) {
    return new TracedEmbeddingModel(embeddingModel, tracer, "openai");
}

Using the global tracer

Both wrappers have a two-arg constructor that uses the global tracer instead of injecting FITracer. This only works after the auto-configuration has run (i.e., inside Spring-managed beans, not in static initializers or tests):

// Uses TraceAI.getTracer() internally - requires TraceAI.init() to have been called
TracedChatModel traced = new TracedChatModel(chatModel, "openai");
TracedEmbeddingModel tracedEmbed = new TracedEmbeddingModel(embeddingModel, "openai");

4. Use it

Once wrapped, use your models normally. Tracing is automatic.

Basic chat

import ai.traceai.spring.TracedChatModel;
import org.springframework.ai.chat.prompt.Prompt;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.*;

@RestController
@RequestMapping("/chat")
public class ChatController {

    private final TracedChatModel chatModel;

    @Autowired
    public ChatController(TracedChatModel chatModel) {
        this.chatModel = chatModel;
    }

    @GetMapping
    public String chat(@RequestParam String message) {
        var response = chatModel.call(new Prompt(message));
        return response.getResult().getOutput().getContent();
    }

    @PostMapping
    public String chatPost(@RequestBody ChatRequest request) {
        var response = chatModel.call(new Prompt(request.message()));
        return response.getResult().getOutput().getContent();
    }

    record ChatRequest(String message) {}
}

Streaming

Streaming requires spring-boot-starter-webflux on the classpath alongside spring-boot-starter-web.

import org.springframework.ai.chat.prompt.Prompt;
import reactor.core.publisher.Flux;

@GetMapping(value = "/stream", produces = "text/event-stream")
public Flux<String> stream(@RequestParam String message) {
    return chatModel.stream(new Prompt(message))
        .map(response -> response.getResult().getOutput().getContent());
}

The streaming wrapper accumulates chunks and records the full output in the span when the stream completes.
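The accumulate-then-record behavior can be sketched without Reactor. This is an illustrative stand-in, not the TraceAI internals: consumeStream plays the role of the wrapped Flux pipeline, forwarding each chunk to the caller while buffering a copy, then recording the concatenated output once the stream completes:

```java
import java.util.List;
import java.util.function.Consumer;

public class StreamAccumulatorSketch {

    // Stand-in for the full output the wrapper writes into the span.
    public static String lastRecordedOutput = null;

    // Forwards each chunk as it arrives, buffers a copy, and "closes the span"
    // with the concatenated output when the stream completes.
    public static void consumeStream(List<String> chunks, Consumer<String> onChunk) {
        StringBuilder buffer = new StringBuilder();
        for (String chunk : chunks) {
            onChunk.accept(chunk);   // caller sees chunks immediately
            buffer.append(chunk);    // wrapper keeps a copy
        }
        lastRecordedOutput = buffer.toString(); // recorded on completion
    }
}
```

The key point carries over to the real Flux-based wrapper: streaming latency for the caller is unchanged, and the span gets the complete response text only after the final chunk.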


What gets captured

Every TracedChatModel.call() creates a span with:

| Attribute | Example value |
| --- | --- |
| llm.system | spring-ai |
| llm.provider | openai |
| llm.request.model | gpt-4o-mini |
| llm.response.model | gpt-4o-mini-2024-07-18 |
| llm.request.temperature | 0.7 |
| llm.request.top_p | 1.0 |
| llm.token_count.prompt | 15 |
| llm.token_count.completion | 42 |
| llm.token_count.total | 57 |
| input.value | Full prompt text |
| output.value | Full response text |
| Input/output messages | Structured role + content pairs |

TracedEmbeddingModel.call() spans capture the same llm.system, llm.provider, and model attributes, plus embedding-specific ones: embedding.vector_count, embedding.dimensions, embedding.model_name, and token counts (llm.token_count.prompt, llm.token_count.total).

Errors on both wrappers are captured with full stack traces and set the span status to ERROR.
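The error path follows the standard catch-record-rethrow pattern. A self-contained sketch with a hypothetical Span type (the real wrappers use OpenTelemetry spans and record the full stack trace, not just the message):

```java
import java.util.function.Supplier;

public class ErrorSpanSketch {

    public enum Status { OK, ERROR }

    // Minimal stand-in for an OpenTelemetry span.
    public static class Span {
        public Status status = Status.OK;
        public String recordedException = null;
    }

    // Catch the exception, record it on the span, mark the span ERROR,
    // and rethrow so the application still sees the failure.
    public static String tracedCall(Span span, Supplier<String> call) {
        try {
            String result = call.get();
            span.status = Status.OK;
            return result;
        } catch (RuntimeException e) {
            span.recordedException = e.toString(); // real wrapper records the stack trace
            span.status = Status.ERROR;
            throw e;                               // propagate to the caller
        }
    }
}
```

Rethrowing matters: tracing observes failures without swallowing them, so your existing error handling (and Spring's exception handlers) behave exactly as before.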


Disabling tracing

Set traceai.enabled: false in your application.yml. The auto-configuration won’t create any beans, and your app runs without any TraceAI overhead.

For per-environment control:

# application-prod.yml
traceai:
  enabled: true
  hide-inputs: true
  hide-outputs: true

# application-dev.yml
traceai:
  enabled: true
  enable-console-exporter: true

# application-test.yml
traceai:
  enabled: false

Debugging

Enable console export and DEBUG logging to see spans printed to stdout:

traceai:
  enable-console-exporter: true

logging:
  level:
    ai.traceai: DEBUG

Check that TraceAI initialized:

if (ai.traceai.TraceAI.isInitialized()) {
    System.out.println("TraceAI version: " + ai.traceai.TraceAI.getVersion());
}

Supported providers

The provider string you pass to TracedChatModel / TracedEmbeddingModel is just a label in span attributes. You can use any Spring AI provider:

| Spring AI starter | Provider string |
| --- | --- |
| spring-ai-openai-spring-boot-starter | "openai" |
| spring-ai-anthropic-spring-boot-starter | "anthropic" |
| spring-ai-azure-openai-spring-boot-starter | "azure-openai" |
| spring-ai-vertex-ai-gemini-spring-boot-starter | "vertex-ai" |
| spring-ai-bedrock-ai-spring-boot-starter | "bedrock" |
| spring-ai-ollama-spring-boot-starter | "ollama" |
| spring-ai-mistral-ai-spring-boot-starter | "mistral" |

Just swap the Spring AI dependency and change the provider string. The tracing wrapper doesn’t care which provider is underneath.
