Frameworks (Java)

Trace LangChain4j and Semantic Kernel operations in Java. Framework-level wrappers that instrument chains, agents, and prompt invocations.

📝 TL;DR
  • LangChain4j: TracedChatLanguageModel implements ChatLanguageModel as a drop-in replacement
  • Semantic Kernel: TracedKernel wraps Kernel and traces function invocations and prompt calls
  • Both support any underlying LLM provider
  • For Spring AI, see the Spring Boot page

Prerequisites

Complete the Java SDK setup first.


LangChain4j

TracedChatLanguageModel implements the ChatLanguageModel interface directly, so it works as a drop-in replacement anywhere LangChain4j expects a chat model.

Maven:

<dependency>
    <groupId>com.github.future-agi.traceAI</groupId>
    <artifactId>traceai-langchain4j</artifactId>
    <version>main-SNAPSHOT</version>
</dependency>

Gradle:

implementation 'com.github.future-agi.traceAI:traceai-langchain4j:main-SNAPSHOT'

Basic usage

import ai.traceai.TraceAI;
import ai.traceai.langchain4j.TracedChatLanguageModel;
import dev.langchain4j.model.chat.ChatLanguageModel;
import dev.langchain4j.model.openai.OpenAiChatModel;

TraceAI.initFromEnvironment();

// Create your LangChain4j model
ChatLanguageModel model = OpenAiChatModel.builder()
    .apiKey(System.getenv("OPENAI_API_KEY"))
    .modelName("gpt-4o-mini")
    .build();

// Wrap it - "openai" is the provider label for span attributes
TracedChatLanguageModel traced = new TracedChatLanguageModel(model, "openai");

// Use it like any ChatLanguageModel
String response = traced.generate("What is the capital of France?");
System.out.println(response);

With message lists

import dev.langchain4j.data.message.*;
import java.util.List;

var messages = List.of(
    SystemMessage.from("You are a helpful assistant."),
    UserMessage.from("What is the capital of France?")
);

var response = traced.generate(messages);
System.out.println(response.content().text());

With AI Services

Since TracedChatLanguageModel implements ChatLanguageModel, it plugs into LangChain4j’s AI Services:

import dev.langchain4j.service.AiServices;

interface Assistant {
    String chat(String message);
}

Assistant assistant = AiServices.builder(Assistant.class)
    .chatLanguageModel(traced)  // pass the traced model
    .build();

String answer = assistant.chat("What is 2 + 2?");

Span created: “LangChain4j Chat” with kind LLM

What gets captured

Attribute                     Example
llm.system                    langchain4j
llm.provider                  openai (your provider string)
llm.token_count.prompt        15
llm.token_count.completion    25
llm.token_count.total         40
Input/output messages         Role + content pairs

Tool execution requests are captured when the model returns tool calls.
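The capture step can be pictured with a small sketch. This is illustrative only: the `ToolExecutionRequest` record and the `llm.tools.*` attribute keys below are hypothetical stand-ins, not the library's actual internals. The idea is that when the model's response carries tool calls, the wrapper flattens each tool name and its arguments into span attributes.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ToolCaptureSketch {
    // Hypothetical stand-in for a tool call returned by the model
    public record ToolExecutionRequest(String name, String arguments) {}

    // Flatten tool calls into span-attribute key/value pairs (keys are illustrative)
    public static Map<String, String> toSpanAttributes(List<ToolExecutionRequest> requests) {
        Map<String, String> attrs = new LinkedHashMap<>();
        for (int i = 0; i < requests.size(); i++) {
            attrs.put("llm.tools." + i + ".name", requests.get(i).name());
            attrs.put("llm.tools." + i + ".arguments", requests.get(i).arguments());
        }
        return attrs;
    }

    public static void main(String[] args) {
        var attrs = toSpanAttributes(List.of(
            new ToolExecutionRequest("getWeather", "{\"city\":\"Paris\"}")));
        System.out.println(attrs.get("llm.tools.0.name"));  // getWeather
    }
}
```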


Semantic Kernel

TracedKernel wraps Microsoft’s Semantic Kernel for Java. It traces function invocations and prompt calls. All operations are reactive (return Mono<T>).

Maven:

<dependency>
    <groupId>com.github.future-agi.traceAI</groupId>
    <artifactId>traceai-java-semantic-kernel</artifactId>
    <version>main-SNAPSHOT</version>
</dependency>

Gradle:

implementation 'com.github.future-agi.traceAI:traceai-java-semantic-kernel:main-SNAPSHOT'

Basic usage

import ai.traceai.TraceAI;
import ai.traceai.semantickernel.TracedKernel;
import com.microsoft.semantickernel.Kernel;
import com.microsoft.semantickernel.services.chatcompletion.ChatCompletionService;

TraceAI.initFromEnvironment();

// Build your Semantic Kernel
Kernel kernel = Kernel.builder()
    .withAIService(ChatCompletionService.class, chatService)
    .build();

// Wrap it
TracedKernel traced = new TracedKernel(kernel);

Invoke a prompt

var result = traced.invokePromptAsync("What is the capital of France?")
    .block();  // reactive - call block() for sync

System.out.println(result.getResult());

Span created: “Semantic Kernel Prompt” with kind AGENT

Invoke a function

import com.microsoft.semantickernel.orchestration.KernelFunctionArguments;

var result = traced.invokeAsync(myFunction, KernelFunctionArguments.builder()
    .withVariable("input", "Hello world")
    .build())
    .block();

Span created: “Semantic Kernel: PluginName.FunctionName” with kind AGENT. The span name is built dynamically from the plugin and function names.

What gets captured

Attribute                         Example
semantic_kernel.function_name     chat
semantic_kernel.plugin_name       ConversationSummary
llm.token_count.prompt            20
llm.token_count.completion        30
llm.token_count.total             50
input.value                       The prompt text or function arguments
output.value                      The function result

Token usage is extracted via reflection from FunctionResult.getMetadata().getUsage() when available.
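That reflective lookup can be sketched roughly as follows. This is a minimal illustration, not the library's actual code: the `Metadata`/`Usage` classes and the `getPromptTokens` accessor name are assumptions standing in for whatever the underlying service returns.

```java
import java.lang.reflect.Method;

public class UsageReflectionSketch {
    // Hypothetical stand-ins for FunctionResult.getMetadata() and its usage object
    public static class Metadata {
        public Usage getUsage() { return new Usage(); }
    }
    public static class Usage {
        public Integer getPromptTokens() { return 20; }
    }

    // Call getUsage().getPromptTokens() reflectively; return null if either step is missing
    public static Integer promptTokens(Object metadata) {
        try {
            Method getUsage = metadata.getClass().getMethod("getUsage");
            Object usage = getUsage.invoke(metadata);
            if (usage == null) return null;
            Method getPrompt = usage.getClass().getMethod("getPromptTokens");
            return (Integer) getPrompt.invoke(usage);
        } catch (ReflectiveOperationException e) {
            return null;  // this result type does not expose usage
        }
    }

    public static void main(String[] args) {
        System.out.println(promptTokens(new Metadata()));  // 20
        System.out.println(promptTokens(new Object()));    // null
    }
}
```

Failing soft (returning null) rather than throwing is what lets the wrapper work across providers whose results do not expose usage metadata.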

Service-level wrappers

For finer-grained tracing, traceai-java-semantic-kernel also provides:

  • TracedChatCompletionService - wraps ChatCompletionService to trace individual LLM calls within a kernel invocation
  • TracedTextEmbeddingGenerationService - wraps embedding generation
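Both follow the same decorator pattern as the kernel wrapper: implement the service interface, delegate to the wrapped instance, and open a span around each call. A plain-Java sketch of the idea, with a hypothetical `ChatService` interface standing in for `ChatCompletionService`:

```java
public class TracedServiceSketch {
    // Hypothetical stand-in for a service interface like ChatCompletionService
    public interface ChatService {
        String complete(String prompt);
    }

    // Decorator: delegates to the wrapped service, recording timing around the call
    public static class TracedChatService implements ChatService {
        private final ChatService delegate;

        public TracedChatService(ChatService delegate) { this.delegate = delegate; }

        @Override
        public String complete(String prompt) {
            long start = System.nanoTime();
            try {
                return delegate.complete(prompt);  // real work happens in the delegate
            } finally {
                long elapsedMs = (System.nanoTime() - start) / 1_000_000;
                // In the real wrapper this would end an OpenTelemetry span; here we just log
                System.out.println("span: chat completion took " + elapsedMs + " ms");
            }
        }
    }

    public static void main(String[] args) {
        ChatService traced = new TracedChatService(prompt -> "echo: " + prompt);
        System.out.println(traced.complete("hi"));  // echo: hi
    }
}
```

Because the decorator implements the same interface, it can be registered with the kernel anywhere the undecorated service would be.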