When building applications, you’ll often need to capture additional context beyond what standard frameworks or LLM clients provide. Here’s how to enrich your traces with custom information.

Add attributes to a span

Attributes are key/value pairs that provide more information about the operation being traced. They help paint a complete picture of what’s happening in your application.

To avoid naming conflicts with semantic conventions, it’s recommended to prefix your custom attributes with your company name (e.g., mycompany.).

from opentelemetry import trace

current_span = trace.get_current_span()

current_span.set_attribute("operation.value", 1)
current_span.set_attribute("operation.name", "Saying hello!")
current_span.set_attribute("operation.other-stuff", [1, 2])

Leveraging Semantic Convention Attributes

Semantic Conventions provide a structured schema for representing common LLM application attributes: well-known names for items like messages, prompt templates, metadata, and more. We’ve built a set of semantic conventions as part of the traceAI package.

Defining attributes is vital for understanding the data and message flow within your LLM application, and it helps with debugging and analysis. By setting attributes such as OUTPUT_VALUE and LLM_OUTPUT_MESSAGES, you can capture the response and the messages exchanged by components within a span’s context, making them easy to log, categorize, and store systematically.

To use traceAI Semantic Attributes in Python, ensure you have the FI Instrumentation Package installed:

pip install fi-instrumentation-otel

Then run the following to set semantic attributes:

from fi_instrumentation.fi_types import MessageAttributes, SpanAttributes

span.set_attribute(SpanAttributes.OUTPUT_VALUE, response)

# This shows up under the `output_messages` tab on the span page
span.set_attribute(
    f"{SpanAttributes.LLM_OUTPUT_MESSAGES}.0.{MessageAttributes.MESSAGE_ROLE}",
    "assistant",
)
span.set_attribute(
    f"{SpanAttributes.LLM_OUTPUT_MESSAGES}.0.{MessageAttributes.MESSAGE_CONTENT}",
    response,
)
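
Putting this together, here is a minimal manual-instrumentation sketch that wraps an LLM call in a span and records its output with these conventions. The tracer name, the call_llm helper, and the example strings are illustrative placeholders, not part of the traceAI API:

from fi_instrumentation.fi_types import MessageAttributes, SpanAttributes
from opentelemetry import trace

tracer = trace.get_tracer(__name__)

def call_llm(prompt: str) -> str:
    # Placeholder for your actual LLM client call
    return "It will be sunny in Johannesburg on July 11."

user_query = "What's the weather forecast for Johannesburg on July 11?"

with tracer.start_as_current_span("generate-forecast") as span:
    response = call_llm(user_query)
    # Record the raw response and the structured output message
    span.set_attribute(SpanAttributes.OUTPUT_VALUE, response)
    span.set_attribute(
        f"{SpanAttributes.LLM_OUTPUT_MESSAGES}.0.{MessageAttributes.MESSAGE_ROLE}",
        "assistant",
    )
    span.set_attribute(
        f"{SpanAttributes.LLM_OUTPUT_MESSAGES}.0.{MessageAttributes.MESSAGE_CONTENT}",
        response,
    )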

Adding attributes to multiple spans at once

Our tracing system allows you to set attributes at the OpenTelemetry Context level, which automatically propagates to child spans within a parent trace.

Key Context Attributes include:

  • Metadata: Metadata associated with a span.
  • Tags: List of tags to give the span a category.
  • Session ID: Unique identifier for a session.
  • User ID: Unique identifier for a user.
  • Prompt Template:
    • Template: The template text used to generate prompts, with Python f-string-style placeholders (e.g., {city}).
    • Version: The version of the prompt template.
    • Variables: key-value pairs applied to the prompt template.

Below are the available context management functions for attribute handling.

using_metadata

This context manager enriches the current OpenTelemetry Context with metadata. Our auto-instrumentators will apply this metadata as span attributes following traceAI semantic conventions. The metadata must be provided as a string-keyed dictionary, which will be JSON-serialized in the context.

from fi_instrumentation import using_metadata
metadata = {
    "key-1": value_1,
    "key-2": value_2,
    ...
}
with using_metadata(metadata):
    # Calls within this block will generate spans with the attributes:
    # "metadata" = "{\"key-1\": value_1, \"key-2\": value_2, ... }" # JSON serialized
    ...

It can also be used as a decorator:

@using_metadata(metadata)
def call_fn(*args, **kwargs):
    # Calls within this function will generate spans with the attributes:
    # "metadata" = "{\"key-1\": value_1, \"key-2\": value_2, ... }" # JSON serialized
    ...

using_tags

Enhance spans with categorical information using this context manager. It adds tags to the OpenTelemetry Context, which our auto-instrumentators will apply following traceAI conventions. Tags must be provided as a list of strings.

from fi_instrumentation import using_tags
tags = ["tag_1", "tag_2", ...]
with using_tags(tags):
    # Calls within this block will generate spans with the attributes:
    # "tag.tags" = "["tag_1","tag_2",...]"
    ...

It can also be used as a decorator:

@using_tags(tags)
def call_fn(*args, **kwargs):
    # Calls within this function will generate spans with the attributes:
    # "tag.tags" = "["tag_1","tag_2",...]"
    ...

using_prompt_template

This context manager adds a prompt template (including its version and variables) to the current OpenTelemetry Context. traceAI auto-instrumentators will read this Context and pass the prompt template fields as span attributes, following the traceAI semantic conventions. Its inputs must be of the following types:

  • Template: non-empty string.
  • Version: non-empty string.
  • Variables: a dictionary with string keys. This dictionary will be serialized to JSON when saved to the OTEL Context and remain a JSON string when sent as a span attribute.

from fi_instrumentation import using_prompt_template
prompt_template = "Please describe the weather forecast for {city} on {date}"
prompt_template_variables = {"city": "Johannesburg", "date":"July 11"}
with using_prompt_template(
    template=prompt_template,
    version="v1.0",
    variables=prompt_template_variables,
):
    # Calls within this block will generate spans with the attributes:
    # "llm.prompt_template.template" = "Please describe the weather forecast for {city} on {date}"
    # "llm.prompt_template.version" = "v1.0"
    # "llm.prompt_template.variables" = "{\"city\": \"Johannesburg\", \"date\": \"July 11\"}" # JSON serialized
    ...

It can also be used as a decorator:

@using_prompt_template(
    template=prompt_template,
    version="v1.0",
    variables=prompt_template_variables,
)
def call_fn(*args, **kwargs):
    # Calls within this function will generate spans with the attributes:
    # "llm.prompt_template.template" = "Please describe the weather forecast for {city} on {date}"
    # "llm.prompt_template.version" = "v1.0"
    # "llm.prompt_template.variables" = "{\"city\": \"Johannesburg\", \"date\": \"July 11\"}" # JSON serialized
    ...

using_attributes

This versatile context manager combines the functionality of multiple attribute setters. It’s particularly useful when you need to set various types of attributes simultaneously.

from fi_instrumentation import using_attributes
tags = ["tag_1", "tag_2", ...]
metadata = {
    "key-1": value_1,
    "key-2": value_2,
    ...
}
prompt_template = "Please describe the weather forecast for {city} on {date}"
prompt_template_variables = {"city": "Johannesburg", "date":"July 11"}
prompt_template_version = "v1.0"
with using_attributes(
    session_id="my-session-id",
    user_id="my-user-id",
    metadata=metadata,
    tags=tags,
    prompt_template=prompt_template,
    prompt_template_version=prompt_template_version,
    prompt_template_variables=prompt_template_variables,
):
    # Calls within this block will generate spans with the attributes:
    # "session.id" = "my-session-id"
    # "user.id" = "my-user-id"
    # "metadata" = "{\"key-1\": value_1, \"key-2\": value_2, ... }" # JSON serialized
    # "tag.tags" = "["tag_1","tag_2",...]"
    # "llm.prompt_template.template" = "Please describe the weather forecast for {city} on {date}"
    # "llm.prompt_template.variables" = "{\"city\": \"Johannesburg\", \"date\": \"July 11\"}" # JSON serialized
    # "llm.prompt_template.version " = "v1.0"
    ...

The previous example is equivalent to the following, which makes using_attributes a convenient shorthand for these more complex setups.

with (
    using_session("my-session-id"),
    using_user("my-user-id"),
    using_metadata(metadata),
    using_tags(tags),
    using_prompt_template(
        template=prompt_template,
        version=prompt_template_version,
        variables=prompt_template_variables,
    ),
):
    # Calls within this block will generate spans with the attributes:
    # "session.id" = "my-session-id"
    # "user.id" = "my-user-id"
    # "metadata" = "{\"key-1\": value_1, \"key-2\": value_2, ... }" # JSON serialized
    # "tag.tags" = "["tag_1","tag_2",...]"
    # "llm.prompt_template.template" = "Please describe the weather forecast for {city} on {date}"
    # "llm.prompt_template.variables" = "{\"city\": \"Johannesburg\", \"date\": \"July 11\"}" # JSON serialized
    # "llm.prompt_template.version " = "v1.0"
    ...

It can also be used as a decorator:

@using_attributes(
    session_id="my-session-id",
    user_id="my-user-id",
    metadata=metadata,
    tags=tags,
    prompt_template=prompt_template,
    prompt_template_version=prompt_template_version,
    prompt_template_variables=prompt_template_variables,
)
def call_fn(*args, **kwargs):
    # Calls within this function will generate spans with the attributes:
    # "session.id" = "my-session-id"
    # "user.id" = "my-user-id"
    # "metadata" = "{\"key-1\": value_1, \"key-2\": value_2, ... }" # JSON serialized
    # "tag.tags" = "["tag_1","tag_2",...]"
    # "llm.prompt_template.template" = "Please describe the weather forecast for {city} on {date}"
    # "llm.prompt_template.variables" = "{\"city\": \"Johannesburg\", \"date\": \"July 11\"}" # JSON serialized
    # "llm.prompt_template.version " = "v1.0"
    ...

get_attributes_from_context

For convenience, our core instrumentation package provides get_attributes_from_context to retrieve context attributes. This is particularly useful for manual instrumentation scenarios.

In the example below, we assume the following values, along with a session ID of "my-session-id" and a user ID of "my-user-id", have been set in the OTEL context:

tags = ["tag_1", "tag_2"]
metadata = {
    "key-1": 1,
    "key-2": "2",
}
prompt_template = "Please describe the weather forecast for {city} on {date}"
prompt_template_variables = {"city": "Johannesburg", "date":"July 11"}
prompt_template_version = "v1.0"

We then use get_attributes_from_context to extract them from the OTEL context. You can use it in your manual instrumentation to attach these attributes to your spans.

from fi_instrumentation import get_attributes_from_context

span.set_attributes(dict(get_attributes_from_context()))
# The span will then have the following attributes attached:
# {
#    'session.id': 'my-session-id',
#    'user.id': 'my-user-id',
#    'metadata': '{"key-1": 1, "key-2": "2"}',
#    'tag.tags': ['tag_1', 'tag_2'],
#    'llm.prompt_template.template': 'Please describe the weather forecast for {city} on {date}',
#    'llm.prompt_template.version': 'v1.0',
#    'llm.prompt_template.variables': '{"city": "Johannesburg", "date": "July 11"}'
# }
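
As a rough end-to-end sketch of how these pieces fit together, you can set context attributes with using_attributes and then attach them to a manually created span via get_attributes_from_context. The handle_request function and the span name below are illustrative, not part of the API:

from fi_instrumentation import get_attributes_from_context, using_attributes
from opentelemetry import trace

tracer = trace.get_tracer(__name__)

def handle_request(question: str) -> str:
    # Any context attributes set by using_attributes are picked up here
    with tracer.start_as_current_span("handle-request") as span:
        span.set_attributes(dict(get_attributes_from_context()))
        # ... call your LLM or downstream components ...
        return "ok"

with using_attributes(
    session_id="my-session-id",
    user_id="my-user-id",
    tags=["tag_1", "tag_2"],
    metadata={"key-1": 1, "key-2": "2"},
):
    handle_request("Please describe the weather forecast for Johannesburg on July 11")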