FI Semantic Conventions
Use standardized attribute keys for spans to ensure consistent, queryable trace data across LLM models, frameworks, and vendors.
What it is
FI Semantic Conventions are a set of predefined attribute keys that hold special significance in the Future AGI platform. When you attach these keys to your spans, they are highlighted prominently in the UI and enable filtering, search, and analytics in the dashboard.
Semantic conventions standardize how LLM data is recorded across different models, frameworks, and vendors. They cover span-level data (inputs, outputs, model name, token counts), message structures, documents, embeddings, tool calls, reranker results, and more.
Use cases
- Consistent tracing — Use standardized keys across different LLM providers and frameworks so trace data is uniform and comparable.
- LLM data capture — Record model name, token counts, input/output messages, and prompt templates in a structured, queryable schema.
- Filtering and search — Filter and search traces in the Future AGI dashboard using well-known attribute keys.
- Retrieval and reranker tracing — Attach document scores, query strings, and model names to retrieval and reranker spans for RAG pipeline visibility.
- Session and user analytics — Use `session.id` and `user.id` to group traces and run per-user analytics.
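For instance, once spans carry the standard `session.id` and `user.id` keys, per-user grouping becomes a simple bucket-by-key over exported span attributes. A minimal sketch, using plain dicts as hypothetical stand-ins for real exported spans:

```python
from collections import defaultdict

# Sketch: group exported spans by the standard "user.id" attribute.
# The span dicts below are hypothetical stand-ins for real span data.
def group_by_user(spans: list[dict]) -> dict[str, list[dict]]:
    buckets: dict[str, list[dict]] = defaultdict(list)
    for span in spans:
        buckets[span.get("user.id", "unknown")].append(span)
    return dict(buckets)

spans = [
    {"session.id": "s1", "user.id": "u1", "output.value": "hi"},
    {"session.id": "s2", "user.id": "u2", "output.value": "hello"},
    {"session.id": "s3", "user.id": "u1", "output.value": "bye"},
]
by_user = group_by_user(spans)
# "u1" maps to two spans, "u2" to one
```

Because the keys are standardized, the same grouping works regardless of which LLM provider or framework produced the spans.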
How to
Install the package
Install the traceAI instrumentation package to access semantic convention constants.
```bash
pip install fi-instrumentation-otel
```

```bash
npm install @traceai/fi-core @opentelemetry/api
```

Browse available attributes
Choose your language to view the available semantic convention classes and constants.
```python
class SpanAttributes:
    # Output related attributes
    OUTPUT_VALUE = "output.value"
    OUTPUT_MIME_TYPE = "output.mime_type"
    # The type of output.value. If unspecified, the type is plain text by default.
    # If type is JSON, the value is a string representing a JSON object.
    INPUT_VALUE = "input.value"
    INPUT_MIME_TYPE = "input.mime_type"
    # The type of input.value. If unspecified, the type is plain text by default.
    # If type is JSON, the value is a string representing a JSON object.

    # Embedding related attributes
    EMBEDDING_EMBEDDINGS = "embedding.embeddings"
    # A list of objects containing embedding data, including the vector and represented piece of text.
    EMBEDDING_MODEL_NAME = "embedding.model_name"
    # The name of the embedding model.

    # LLM related attributes
    LLM_FUNCTION_CALL = "llm.function_call"
    # For models and APIs that support function calling. Records attributes such as the function
    # name and arguments to the called function.
    LLM_INVOCATION_PARAMETERS = "llm.invocation_parameters"
    # Invocation parameters passed to the LLM or API, such as the model name, temperature, etc.
    LLM_INPUT_MESSAGES = "llm.input_messages"
    # Messages provided to a chat API.
    LLM_OUTPUT_MESSAGES = "llm.output_messages"
    # Messages received from a chat API.
    LLM_MODEL_NAME = "llm.model_name"
    # The name of the model being used.
    LLM_PROVIDER = "llm.provider"
    # The provider of the model, such as OpenAI, Azure, Google, etc.
    LLM_SYSTEM = "llm.system"
    # The AI product as identified by the client or server.
    LLM_PROMPTS = "llm.prompts"
    # Prompts provided to a completions API.
    LLM_PROMPT_TEMPLATE = "llm.prompt_template.template"
    # The prompt template as a Python f-string.
    LLM_PROMPT_TEMPLATE_VARIABLES = "llm.prompt_template.variables"
    # A list of input variables to the prompt template.
    LLM_PROMPT_TEMPLATE_VERSION = "llm.prompt_template.version"
    # The version of the prompt template being used.
    LLM_TOKEN_COUNT_PROMPT = "llm.token_count.prompt"
    # Number of tokens in the prompt.
    LLM_TOKEN_COUNT_COMPLETION = "llm.token_count.completion"
    # Number of tokens in the completion.
    LLM_TOKEN_COUNT_TOTAL = "llm.token_count.total"
    # Total number of tokens, including both prompt and completion.
    LLM_TOOLS = "llm.tools"
    # List of tools that are advertised to the LLM to be able to call.

    # Tool related attributes
    TOOL_NAME = "tool.name"
    # Name of the tool being used.
    TOOL_DESCRIPTION = "tool.description"
    # Description of the tool's purpose, typically used to select the tool.
    TOOL_PARAMETERS = "tool.parameters"
    # Parameters of the tool represented as a dictionary JSON string.
    RETRIEVAL_DOCUMENTS = "retrieval.documents"
    METADATA = "metadata"
    # Metadata attributes are used to store user-defined key-value pairs.
    TAG_TAGS = "tag.tags"
    # Custom categorical tags for the span.
    FI_SPAN_KIND = "fi.span.kind"
    SESSION_ID = "session.id"
    # The id of the session.
    USER_ID = "user.id"
    # The id of the user.
    INPUT_IMAGES = "llm.input.images"
    # A list of input images provided to the model.
    EVAL_INPUT = "eval.input"
    # Input being sent to the eval.
    RAW_INPUT = "raw.input"
    # Raw input being sent to otel.
    RAW_OUTPUT = "raw.output"
    # Raw output being sent from otel.
    QUERY = "query"
    # The query being sent to the model.
    RESPONSE = "response"
    # The response being sent from the model.


class MessageAttributes:
    # Attributes for a message sent to or from an LLM.
    MESSAGE_ROLE = "message.role"
    # The role of the message, such as "user", "agent", "function".
    MESSAGE_CONTENT = "message.content"
    # The content of the message to or from the llm; must be a string.
    MESSAGE_CONTENTS = "message.contents"
    # The message contents to the llm; an array of message_content-prefixed attributes.
    MESSAGE_NAME = "message.name"
    # The name of the message, often used to identify the function that was used to generate the message.
    MESSAGE_TOOL_CALLS = "message.tool_calls"
    # The tool calls generated by the model, such as function calls.
    MESSAGE_FUNCTION_CALL_NAME = "message.function_call_name"
    # The function name that is a part of the message list.
    # This is populated for role 'function' or 'agent' as a mechanism to identify
    # the function that was called during the execution of a tool.
    MESSAGE_FUNCTION_CALL_ARGUMENTS_JSON = "message.function_call_arguments_json"
    # The JSON string representing the arguments passed to the function during a function call.
    MESSAGE_TOOL_CALL_ID = "message.tool_call_id"
    # The id of the tool call.


class DocumentAttributes:
    # Attributes for a document.
    DOCUMENT_ID = "document.id"
    # The id of the document.
    DOCUMENT_SCORE = "document.score"
    # The score of the document.
    DOCUMENT_CONTENT = "document.content"
    # The content of the document.
    DOCUMENT_METADATA = "document.metadata"
    # The metadata of the document represented as a dictionary JSON string.


class RerankerAttributes:
    # Attributes for a reranker.
    RERANKER_INPUT_DOCUMENTS = "reranker.input_documents"
    # List of documents as input to the reranker.
    RERANKER_OUTPUT_DOCUMENTS = "reranker.output_documents"
    # List of documents as output from the reranker.
    RERANKER_QUERY = "reranker.query"
    # Query string for the reranker.
    RERANKER_MODEL_NAME = "reranker.model_name"
    # Model name of the reranker.
    RERANKER_TOP_K = "reranker.top_k"
    # Top K parameter of the reranker.


class EmbeddingAttributes:
    # Attributes for an embedding.
    EMBEDDING_TEXT = "embedding.text"
    # The text represented by the embedding.
    EMBEDDING_VECTOR = "embedding.vector"
    # The embedding vector.


class ToolCallAttributes:
    # Attributes for a tool call.
    TOOL_CALL_ID = "tool_call.id"
    # The id of the tool call.
    TOOL_CALL_FUNCTION_NAME = "tool_call.function.name"
    # The name of the function that is being called during a tool call.
    TOOL_CALL_FUNCTION_ARGUMENTS_JSON = "tool_call.function.arguments"
    # The JSON string representing the arguments passed to the function during a tool call.


class ImageAttributes:
    IMAGE_URL = "image.url"
    # An http or base64 image url.


class AudioAttributes:
    AUDIO_URL = "audio.url"
    # The url to an audio file.
    AUDIO_MIME_TYPE = "audio.mime_type"
    # The mime type of the audio file.
    AUDIO_TRANSCRIPT = "audio.transcript"
    # The transcript of the audio file.
```
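As a quick sanity check on the listing above, the constants are just the raw strings shown, so a hand-built LLM span attribute dict uses the same names. A sketch with literal keys (with the package installed these would come from `SpanAttributes`); the model name is a hypothetical placeholder:

```python
# Sketch: attributes an LLM span might carry, written with the raw
# convention strings. "gpt-4o-mini" is a hypothetical model name;
# the token counts match the examples in the attribute table below.
llm_attrs = {
    "fi.span.kind": "LLM",
    "llm.model_name": "gpt-4o-mini",
    "llm.provider": "openai",
    "llm.token_count.prompt": 5,
    "llm.token_count.completion": 15,
    "llm.token_count.total": 20,
}

# The total token count is the sum of prompt and completion counts.
assert llm_attrs["llm.token_count.total"] == (
    llm_attrs["llm.token_count.prompt"] + llm_attrs["llm.token_count.completion"]
)
```

The TypeScript constants that follow mirror the same underlying strings.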
```typescript
// Semantic Conventions for Span Attributes
export const SemanticConventions = {
  // Input/Output related attributes
  INPUT_VALUE: "input.value",
  INPUT_MIME_TYPE: "input.mime_type",
  OUTPUT_VALUE: "output.value",
  OUTPUT_MIME_TYPE: "output.mime_type",
  // LLM related attributes
  LLM_INPUT_MESSAGES: "llm.input_messages",
  LLM_OUTPUT_MESSAGES: "llm.output_messages",
  LLM_MODEL_NAME: "llm.model_name",
  LLM_PROVIDER: "llm.provider",
  LLM_SYSTEM: "llm.system",
  LLM_PROMPTS: "llm.prompts",
  LLM_INVOCATION_PARAMETERS: "llm.invocation_parameters",
  LLM_FUNCTION_CALL: "llm.function_call",
  LLM_TOOLS: "llm.tools",
  // Token count attributes
  LLM_TOKEN_COUNT_PROMPT: "llm.token_count.prompt",
  LLM_TOKEN_COUNT_COMPLETION: "llm.token_count.completion",
  LLM_TOKEN_COUNT_TOTAL: "llm.token_count.total",
  LLM_TOKEN_COUNT_COMPLETION_DETAILS_REASONING: "llm.token_count.completion_details.reasoning",
  LLM_TOKEN_COUNT_COMPLETION_DETAILS_AUDIO: "llm.token_count.completion_details.audio",
  LLM_TOKEN_COUNT_PROMPT_DETAILS_CACHE_WRITE: "llm.token_count.prompt_details.cache_write",
  LLM_TOKEN_COUNT_PROMPT_DETAILS_CACHE_READ: "llm.token_count.prompt_details.cache_read",
  LLM_TOKEN_COUNT_PROMPT_DETAILS_AUDIO: "llm.token_count.prompt_details.audio",
  // Prompt template attributes
  PROMPT_TEMPLATE_TEMPLATE: "llm.prompt_template.template",
  PROMPT_TEMPLATE_VARIABLES: "llm.prompt_template.variables",
  PROMPT_TEMPLATE_VERSION: "llm.prompt_template.version",
  // Tool related attributes
  TOOL_NAME: "tool.name",
  TOOL_DESCRIPTION: "tool.description",
  TOOL_PARAMETERS: "tool.parameters",
  TOOL_JSON_SCHEMA: "tool.json_schema",
  // Embedding attributes
  EMBEDDING_EMBEDDINGS: "embedding.embeddings",
  EMBEDDING_MODEL_NAME: "embedding.model_name",
  EMBEDDING_TEXT: "embedding.text",
  EMBEDDING_VECTOR: "embedding.vector",
  // Retrieval attributes
  RETRIEVAL_DOCUMENTS: "retrieval.documents",
  // Session and user tracking
  SESSION_ID: "session.id",
  USER_ID: "user.id",
  // Metadata and tagging
  METADATA: "metadata",
  TAG_TAGS: "tag.tags",
  FI_SPAN_KIND: "fi.span.kind",
  // Raw input/output
  RAW_INPUT: "raw.input",
  RAW_OUTPUT: "raw.output",
} as const;

// Span kind enumeration
export enum FISpanKind {
  LLM = "LLM",
  CHAIN = "CHAIN",
  TOOL = "TOOL",
  RETRIEVER = "RETRIEVER",
  RERANKER = "RERANKER",
  EMBEDDING = "EMBEDDING",
  AGENT = "AGENT",
  GUARDRAIL = "GUARDRAIL",
  EVALUATOR = "EVALUATOR",
  UNKNOWN = "UNKNOWN",
}

// Message related semantic conventions
export const MessageConventions = {
  MESSAGE_ROLE: "message.role",
  MESSAGE_CONTENT: "message.content",
  MESSAGE_CONTENTS: "message.contents",
  MESSAGE_NAME: "message.name",
  MESSAGE_TOOL_CALLS: "message.tool_calls",
  MESSAGE_TOOL_CALL_ID: "message.tool_call_id",
  MESSAGE_FUNCTION_CALL_NAME: "message.function_call_name",
  MESSAGE_FUNCTION_CALL_ARGUMENTS_JSON: "message.function_call_arguments_json",
  // Message content attributes
  MESSAGE_CONTENT_TYPE: "message_content.type",
  MESSAGE_CONTENT_TEXT: "message_content.text",
  MESSAGE_CONTENT_IMAGE: "message_content.image",
} as const;

// Message content types
export const MessageContentTypes = {
  TEXT: "text",
  IMAGE: "image",
} as const;

// Document related semantic conventions
export const DocumentConventions = {
  DOCUMENT_ID: "document.id",
  DOCUMENT_CONTENT: "document.content",
  DOCUMENT_SCORE: "document.score",
  DOCUMENT_METADATA: "document.metadata",
} as const;

// Reranker related semantic conventions
export const RerankerConventions = {
  RERANKER_INPUT_DOCUMENTS: "reranker.input_documents",
  RERANKER_OUTPUT_DOCUMENTS: "reranker.output_documents",
  RERANKER_QUERY: "reranker.query",
  RERANKER_MODEL_NAME: "reranker.model_name",
  RERANKER_TOP_K: "reranker.top_k",
} as const;

// Embedding related semantic conventions
export const EmbeddingConventions = {
  EMBEDDING_TEXT: "embedding.text",
  EMBEDDING_VECTOR: "embedding.vector",
  EMBEDDING_MODEL_NAME: "embedding.model_name",
  EMBEDDING_EMBEDDINGS: "embedding.embeddings",
} as const;

// Tool call related semantic conventions
export const ToolCallConventions = {
  TOOL_CALL_ID: "tool_call.id",
  TOOL_CALL_FUNCTION_NAME: "tool_call.function.name",
  TOOL_CALL_FUNCTION_ARGUMENTS_JSON: "tool_call.function.arguments",
} as const;

// Image related semantic conventions
export const ImageConventions = {
  IMAGE_URL: "image.url",
} as const;

// Audio related semantic conventions
export const AudioConventions = {
  AUDIO_URL: "audio.url",
  AUDIO_MIME_TYPE: "audio.mime_type",
  AUDIO_TRANSCRIPT: "audio.transcript",
} as const;

// Prompt related semantic conventions
export const PromptConventions = {
  PROMPT_VENDOR: "prompt.vendor",
  PROMPT_ID: "prompt.id",
  PROMPT_URL: "prompt.url",
} as const;

// Common enums
export enum MimeType {
  TEXT = "text/plain",
  JSON = "application/json",
  AUDIO_WAV = "audio/wav",
}

export enum LLMSystem {
  OPENAI = "openai",
  ANTHROPIC = "anthropic",
  MISTRALAI = "mistralai",
  COHERE = "cohere",
  VERTEXAI = "vertexai",
}

export enum LLMProvider {
  OPENAI = "openai",
  ANTHROPIC = "anthropic",
  MISTRALAI = "mistralai",
  COHERE = "cohere",
  // Cloud providers of LLM systems
  GOOGLE = "google",
  AWS = "aws",
  AZURE = "azure",
}
```

Use semantic conventions in your code
Import the constants and set them as span attributes in your instrumented functions.
```python
# pip install fi-instrumentation-otel
from fi_instrumentation.fi_types import SpanAttributes, FiSpanKindValues


def chat(message: str):
    with tracer.start_as_current_span("an_llm_span") as span:
        span.set_attribute(
            SpanAttributes.FI_SPAN_KIND,
            FiSpanKindValues.LLM.value,
        )
        # Equivalent to:
        # span.set_attribute(
        #     "fi.span.kind",
        #     "LLM",
        # )
        span.set_attribute(
            SpanAttributes.INPUT_VALUE,
            message,
        )
```

```typescript
import { SemanticConventions, FISpanKind } from "@traceai/fi-semantic-conventions";

function chat(message: string) {
  const span = tracer.startSpan("an_llm_span");
  span.setAttributes({
    [SemanticConventions.FI_SPAN_KIND]: FISpanKind.LLM,
    [SemanticConventions.INPUT_VALUE]: message,
    [SemanticConventions.LLM_MODEL_NAME]: "gpt-4",
  });
  // Your LLM logic here...
  span.setAttributes({
    [SemanticConventions.OUTPUT_VALUE]: response,
    [SemanticConventions.LLM_TOKEN_COUNT_TOTAL]: tokenCount,
  });
  span.end();
}
```

Convert messages to span attributes
OpenTelemetry span attributes must be simple types (bool, str, bytes, int, float, or flat lists of these). To export a list of message objects, flatten each object using an index prefix.
```python
# List of messages from OpenAI or another LLM provider
messages = [
    {"message.role": "user", "message.content": "hello"},
    {"message.role": "assistant", "message.content": "hi"},
]

# Assuming you have a span object already created
for i, obj in enumerate(messages):
    for key, value in obj.items():
        span.set_attribute(f"input.messages.{i}.{key}", value)
```

```typescript
import { MessageConventions } from "@traceai/fi-semantic-conventions";

// List of messages from OpenAI or another LLM provider
const messages = [
  { "message.role": "user", "message.content": "hello" },
  { "message.role": "assistant", "message.content": "hi" },
];

// Assuming you have a span object already created
messages.forEach((obj, i) => {
  Object.entries(obj).forEach(([key, value]) => {
    span.setAttribute(`input.messages.${i}.${key}`, value);
  });
});

// Or using semantic conventions constants:
messages.forEach((message, i) => {
  span.setAttributes({
    [`input.messages.${i}.${MessageConventions.MESSAGE_ROLE}`]: message["message.role"],
    [`input.messages.${i}.${MessageConventions.MESSAGE_CONTENT}`]: message["message.content"],
  });
});
```

Attribute overview
| Attribute | Type | Example | Description |
|---|---|---|---|
| `document.content` | String | `"This is a sample document content."` | The content of a retrieved document |
| `document.id` | String/Integer | `"1234"` or `1` | Unique identifier for a document |
| `document.metadata` | JSON String | `"{'author': 'John Doe', 'date': '2023-09-09'}"` | Metadata associated with a document |
| `document.score` | Float | `0.98` | Score representing the relevance of a document |
| `embedding.embeddings` | List of objects | `[{"embedding.vector": [...], "embedding.text": "hello"}]` | List of embedding objects including text and vector data |
| `embedding.model_name` | String | `"BERT-base"` | Name of the embedding model used |
| `embedding.text` | String | `"hello world"` | The text represented in the embedding |
| `embedding.vector` | List of floats | `[0.123, 0.456, ...]` | The embedding vector consisting of a list of floats |
| `exception.escaped` | Boolean | `true` | Indicator if the exception has escaped the span's scope |
| `exception.message` | String | `"Null value encountered"` | Detailed message describing the exception |
| `exception.stacktrace` | String | `"at app.main(app.java:16)"` | The stack trace of the exception |
| `exception.type` | String | `"NullPointerException"` | The type of exception that was thrown |
| `input.mime_type` | String | `"text/plain"` or `"application/json"` | MIME type representing the format of `input.value` |
| `input.value` | String | `"{'query': 'What is the weather today?'}"` | The input value to an operation |
| `llm.function_call` | JSON String | `"{function_name: 'add', args: [1, 2]}"` | Object recording details of a function call in models or APIs |
| `llm.input_messages` | List of objects | `[{"message.role": "user", "message.content": "hello"}]` | List of messages sent to the LLM in a chat API request |
| `llm.invocation_parameters` | JSON String | `"{'model_name': 'gpt-3', 'temperature': 0.7}"` | Parameters used during the invocation of an LLM or API |
| `llm.model_name` | String | `"gpt-3.5-turbo"` | The name of the language model being utilized |
| `llm.output_messages` | List of objects | `[{"message.role": "assistant", "message.content": "hello"}]` | List of messages received from the LLM in a chat API response |
| `llm.prompt_template.template` | String | `"Weather forecast for {city} on {date}"` | Template used to generate prompts, as a Python f-string |
| `llm.prompt_template.variables` | JSON String | `"{'context': '<context from retrieval>', 'subject': 'math'}"` | JSON of key-value pairs applied to the prompt template |
| `llm.prompt_template.version` | String | `"v1.0"` | The version of the prompt template |
| `llm.token_count.completion` | Integer | `15` | The number of tokens in the completion |
| `llm.token_count.prompt` | Integer | `5` | The number of tokens in the prompt |
| `llm.token_count.total` | Integer | `20` | Total number of tokens, including prompt and completion |
| `message.content` | String | `"What's the weather today?"` | The content of a message in a chat |
| `message.function_call_arguments_json` | JSON String | `"{'x': 2}"` | The arguments to the function call in JSON |
| `message.function_call_name` | String | `"multiply"` or `"subtract"` | The name of the function being called |
| `message.role` | String | `"user"` or `"system"` | Role of the entity in a message (e.g., user, system) |
| `message.tool_calls` | List of objects | `[{"tool_call.function.name": "get_current_weather"}]` | List of tool calls (e.g., function calls) generated by the LLM |
| `metadata` | JSON String | `"{'author': 'John Doe', 'date': '2023-09-09'}"` | Metadata associated with a span |
| `fi.span.kind` | String | `"CHAIN"` | The kind of span (e.g., CHAIN, LLM, RETRIEVER, RERANKER) |
| `output.mime_type` | String | `"text/plain"` or `"application/json"` | MIME type representing the format of `output.value` |
| `output.value` | String | `"Hello, World!"` | The output value of an operation |
| `reranker.input_documents` | List of objects | `[{"document.id": "1", "document.score": 0.9, "document.content": "..."}]` | List of documents as input to the reranker |
| `reranker.model_name` | String | `"cross-encoder/ms-marco-MiniLM-L-12-v2"` | Model name of the reranker |
| `reranker.output_documents` | List of objects | `[{"document.id": "1", "document.score": 0.9, "document.content": "..."}]` | List of documents output by the reranker |
| `reranker.query` | String | `"How to format timestamp?"` | Query string for the reranker |
| `reranker.top_k` | Integer | `3` | Top K parameter of the reranker |
| `retrieval.documents` | List of objects | `[{"document.id": "1", "document.score": 0.9, "document.content": "..."}]` | List of retrieved documents |
| `session.id` | String | `"26bcd3d2-cad2-443d-a23c-625e47f3324a"` | Unique identifier for a session |
| `tag.tags` | List of strings | `["shopping", "travel"]` | List of tags to give the span a category |
| `tool.description` | String | `"An API to get weather data."` | Description of the tool's purpose and functionality |
| `tool.name` | String | `"WeatherAPI"` | The name of the tool being utilized |
| `tool.parameters` | JSON String | `"{'a': 'int'}"` | The parameter definitions for invoking the tool |
| `tool_call.function.arguments` | JSON String | `"{'city': 'London'}"` | The arguments for the function being invoked by a tool call |
| `tool_call.function.name` | String | `"get_current_weather"` | The name of the function being invoked by a tool call |
| `user.id` | String | `"9328ae73-7141-4f45-a044-8e06192aa465"` | Unique identifier for a user |
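The `input.mime_type` and `output.mime_type` entries matter whenever a value is JSON: serialize the payload and label it explicitly, since plain text is assumed when the mime type is unspecified. A minimal sketch using the documented keys:

```python
import json

# Sketch: a JSON input value paired with its mime type. When
# input.mime_type is omitted, input.value is treated as plain text.
payload = {"query": "What is the weather today?"}
input_attrs = {
    "input.value": json.dumps(payload),
    "input.mime_type": "application/json",
}

# The serialized value round-trips back to the original object.
assert json.loads(input_attrs["input.value"]) == payload
```

The same pairing applies to `output.value` and `output.mime_type`.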
Key concepts
- `SpanAttributes` — Python class containing attribute key constants for span-level data (inputs, outputs, model name, token counts, prompt templates, and more). Import from `fi_instrumentation.fi_types`.
- `MessageAttributes` — Attribute keys for structuring LLM input/output messages (role, content, tool calls, function call details).
- `DocumentAttributes` — Attribute keys for retrieved documents, including ID, content, score, and metadata.
- `RerankerAttributes` — Attribute keys for reranker spans (input/output documents, query, model name, top-k).
- `EmbeddingAttributes` — Attribute keys for embedding spans (text and vector).
- `ToolCallAttributes` — Attribute keys for tool call objects generated by an LLM (ID, function name, arguments).
- `FiSpanKindValues` — Enumeration of valid values for `fi.span.kind`: `LLM`, `CHAIN`, `RETRIEVER`, `RERANKER`, `EMBEDDING`, `AGENT`, `TOOL`, `GUARDRAIL`, `EVALUATOR`, `UNKNOWN`.
- Flattening — OpenTelemetry span attributes must be simple scalar types or flat lists. Nested objects (such as lists of messages) must be flattened with index prefixes like `llm.input_messages.0.message.role`.
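The flattening rule generalizes beyond messages: retrieved documents attach to a retriever span the same way, one scalar attribute per field under the `retrieval.documents` prefix. A sketch with illustrative documents and a plain dict standing in for `span.set_attribute` calls:

```python
# Sketch: flatten retrieved documents onto retriever-span attributes
# using index prefixes, mirroring the message-flattening pattern.
documents = [
    {"document.id": "1", "document.score": 0.98, "document.content": "Paris is the capital of France."},
    {"document.id": "2", "document.score": 0.87, "document.content": "France is in Europe."},
]

flat_attrs = {}  # stand-in for span.set_attribute calls
for i, doc in enumerate(documents):
    for key, value in doc.items():
        flat_attrs[f"retrieval.documents.{i}.{key}"] = value

# e.g. flat_attrs["retrieval.documents.0.document.score"] == 0.98
```

With real spans, replace the dict assignment with `span.set_attribute(key, value)`; the resulting keys are exactly what the dashboard uses to reconstruct the document list.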
What you can do next
Add Attributes & Metadata
Attach custom data, tags, session IDs, and prompt templates to spans.
Instrument with traceAI Helpers
Use FITracer decorators and context managers for typed spans.
Set Up Tracing
Register a tracer provider and add instrumentation.
Auto Instrumentation
Browse all supported framework instrumentors.