Frameworks (Java)
Trace LangChain4j and Semantic Kernel operations in Java. Framework-level wrappers that instrument chains, agents, and prompt invocations.
- LangChain4j: TracedChatLanguageModel implements ChatLanguageModel as a drop-in replacement
- Semantic Kernel: TracedKernel wraps Kernel and traces function invocations and prompt calls
- Both support any underlying LLM provider
- For Spring AI, see the Spring Boot page
Prerequisites
Complete the Java SDK setup first.
LangChain4j
TracedChatLanguageModel implements the ChatLanguageModel interface directly, so it works as a drop-in replacement anywhere LangChain4j expects a chat model.
Maven:
<dependency>
    <groupId>com.github.future-agi.traceAI</groupId>
    <artifactId>traceai-langchain4j</artifactId>
    <version>main-SNAPSHOT</version>
</dependency>

Gradle:
implementation 'com.github.future-agi.traceAI:traceai-langchain4j:main-SNAPSHOT'

Basic usage
import ai.traceai.TraceAI;
import ai.traceai.langchain4j.TracedChatLanguageModel;
import dev.langchain4j.model.chat.ChatLanguageModel;
import dev.langchain4j.model.openai.OpenAiChatModel;
TraceAI.initFromEnvironment();
// Create your LangChain4j model
ChatLanguageModel model = OpenAiChatModel.builder()
.apiKey(System.getenv("OPENAI_API_KEY"))
.modelName("gpt-4o-mini")
.build();
// Wrap it - "openai" is the provider label for span attributes
TracedChatLanguageModel traced = new TracedChatLanguageModel(model, "openai");
// Use it like any ChatLanguageModel
String response = traced.generate("What is the capital of France?");
System.out.println(response);
With message lists
import dev.langchain4j.data.message.*;
import java.util.List;
var messages = List.of(
SystemMessage.from("You are a helpful assistant."),
UserMessage.from("What is the capital of France?")
);
var response = traced.generate(messages);
System.out.println(response.content().text());
With AI Services
Since TracedChatLanguageModel implements ChatLanguageModel, it plugs into LangChain4j’s AI Services:
import dev.langchain4j.service.AiServices;
interface Assistant {
String chat(String message);
}
Assistant assistant = AiServices.builder(Assistant.class)
.chatLanguageModel(traced) // pass the traced model
.build();
String answer = assistant.chat("What is 2 + 2?");
Span created: “LangChain4j Chat” with kind LLM
What gets captured
| Attribute | Example |
|---|---|
| llm.system | langchain4j |
| llm.provider | openai (your provider string) |
| llm.token_count.prompt | 15 |
| llm.token_count.completion | 25 |
| llm.token_count.total | 40 |
| Input/output messages | Role + content pairs |
Tool execution requests are captured when the model returns tool calls.
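For example, tool calls flow through the traced model when it is used behind LangChain4j's @Tool-based AI Services. A minimal sketch, reusing the traced model from the earlier example; the Calculator class and MathAssistant interface are illustrative, not part of the SDK:

```java
import dev.langchain4j.agent.tool.Tool;
import dev.langchain4j.service.AiServices;

// Illustrative tool - the model can request this during a chat
class Calculator {
    @Tool("Adds two numbers")
    int add(int a, int b) {
        return a + b;
    }
}

interface MathAssistant {
    String chat(String message);
}

MathAssistant assistant = AiServices.builder(MathAssistant.class)
        .chatLanguageModel(traced)   // the traced model from above
        .tools(new Calculator())     // tool-call round trips are traced too
        .build();

String answer = assistant.chat("What is 12 + 30?");
```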
Semantic Kernel
TracedKernel wraps Microsoft’s Semantic Kernel for Java. It traces function invocations and prompt calls. All operations are reactive (return Mono<T>).
Maven:
<dependency>
    <groupId>com.github.future-agi.traceAI</groupId>
    <artifactId>traceai-java-semantic-kernel</artifactId>
    <version>main-SNAPSHOT</version>
</dependency>

Gradle:
implementation 'com.github.future-agi.traceAI:traceai-java-semantic-kernel:main-SNAPSHOT'

Basic usage
import ai.traceai.TraceAI;
import ai.traceai.semantickernel.TracedKernel;
import com.microsoft.semantickernel.Kernel;
import com.microsoft.semantickernel.services.chatcompletion.ChatCompletionService;
TraceAI.initFromEnvironment();
// Build your Semantic Kernel
Kernel kernel = Kernel.builder()
.withAIService(ChatCompletionService.class, chatService)
.build();
// Wrap it
TracedKernel traced = new TracedKernel(kernel);
Invoke a prompt
var result = traced.invokePromptAsync("What is the capital of France?")
.block(); // reactive - call block() for sync
System.out.println(result.getResult());
Span created: “Semantic Kernel Prompt” with kind AGENT
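Because the API is reactive, you can also stay non-blocking and subscribe to the returned Mono instead of calling block() - a sketch:

```java
// Non-blocking alternative: handle the result in a callback
traced.invokePromptAsync("What is the capital of France?")
        .subscribe(result -> System.out.println(result.getResult()));
```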
Invoke a function
import com.microsoft.semantickernel.orchestration.KernelFunctionArguments;
var result = traced.invokeAsync(myFunction, KernelFunctionArguments.builder()
.withVariable("input", "Hello world")
.build())
.block();
Span created: “Semantic Kernel: PluginName.FunctionName” with kind AGENT. The span name is built dynamically from the plugin and function names.
What gets captured
| Attribute | Example |
|---|---|
| semantic_kernel.function_name | chat |
| semantic_kernel.plugin_name | ConversationSummary |
| llm.token_count.prompt | 20 |
| llm.token_count.completion | 30 |
| llm.token_count.total | 50 |
| input.value | The prompt text or function arguments |
| output.value | The function result |
Token usage is extracted via reflection from FunctionResult.getMetadata().getUsage() when available.
Service-level wrappers
For finer-grained tracing, traceai-java-semantic-kernel also provides:
- TracedChatCompletionService - wraps ChatCompletionService to trace individual LLM calls within a kernel invocation
- TracedTextEmbeddingGenerationService - wraps embedding generation
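A sketch of wiring the chat-service wrapper into a kernel, assuming the wrapper takes the underlying service as a constructor argument (check the traceai-java-semantic-kernel API for the exact signature):

```java
// Assumed constructor shape - wraps the real service so each
// LLM call inside a kernel invocation gets its own span
ChatCompletionService tracedChat = new TracedChatCompletionService(chatService);

Kernel kernel = Kernel.builder()
        .withAIService(ChatCompletionService.class, tracedChat)
        .build();
```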