Spans
A Span represents a single operation within an execution flow, recording input/output data, execution time, and errors. Each span provides insight into a specific step, such as:
- LLM Calls – Capturing model invocation, prompt processing, and response generation.
- Retrieval Operations – Logging queries made to external databases or indexes.
- Tool Executions – Tracking API calls and function invocations.
- Error Handling – Recording failures, timeouts, and system issues.
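The span described above can be sketched as a simple data structure. This is a minimal, illustrative model using only the Python standard library; the field and method names are assumptions, not the API of any particular observability SDK.

```python
import time
from dataclasses import dataclass, field
from typing import Any, Optional

@dataclass
class Span:
    """A single operation in an execution flow (illustrative sketch)."""
    name: str                     # e.g. "llm_call", "retrieval", "tool_execution"
    input: Any = None             # data the operation received
    output: Any = None            # data the operation produced
    error: Optional[str] = None   # failure message, timeout, etc., if any
    start_time: float = field(default_factory=time.time)
    end_time: Optional[float] = None

    def finish(self, output: Any = None, error: Optional[str] = None) -> None:
        """Record the result (or error) and stop the clock."""
        self.output = output
        self.error = error
        self.end_time = time.time()

    @property
    def duration(self) -> float:
        """Execution time in seconds (0.0 while still running)."""
        return (self.end_time or self.start_time) - self.start_time

# Example: recording a hypothetical LLM call
span = Span(name="llm_call", input={"prompt": "Summarize this document"})
span.finish(output={"response": "A short summary..."})
```

Whatever the concrete implementation, the essential pieces are the same: a name identifying the operation, its input and output, timing, and an error slot for failures.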
Traces
A Trace connects multiple spans to represent the full execution flow of a request. It provides a structured view of how different operations interact within an LLM-powered system. Traces help teams:
- Analyze dependencies between retrieval, inference, and tool execution.
- Identify performance bottlenecks by measuring latency across spans.
- Debug unexpected behaviors by tracing execution paths from input to output.
For example, a single trace might contain:
- A retrieval span fetching relevant documents.
- An LLM span generating a response.
- A tool execution span calling an external API.
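The example trace above can be modeled as an ordered collection of spans, which makes latency analysis straightforward. This is a self-contained sketch with assumed names (`Trace`, `total_latency_ms`, `slowest`); the span durations are made-up illustrative values.

```python
from dataclasses import dataclass, field

@dataclass
class Span:
    """Minimal span: just a name and a measured duration."""
    name: str
    duration_ms: float

@dataclass
class Trace:
    """Full execution flow of one request, as an ordered list of spans."""
    request_id: str
    spans: list[Span] = field(default_factory=list)

    def total_latency_ms(self) -> float:
        """End-to-end latency, summed across all spans."""
        return sum(s.duration_ms for s in self.spans)

    def slowest(self) -> Span:
        """The bottleneck span: the one with the highest latency."""
        return max(self.spans, key=lambda s: s.duration_ms)

# A trace for one RAG-style request (durations are illustrative)
trace = Trace(request_id="req-123")
trace.spans += [
    Span("retrieval", 40.0),       # fetch relevant documents
    Span("llm_call", 850.0),       # generate a response
    Span("tool_execution", 120.0), # call an external API
]
```

Comparing per-span durations like this is how a trace exposes bottlenecks: here the LLM call dominates the end-to-end latency.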
Projects
A Project provides a structured way to manage multiple traces, keeping observability organized across different applications, use cases, or deployments. Projects allow teams to:
- Segment and categorize observability data for different LLM-powered applications.
- Compare model versions to track improvements in accuracy and performance.
- Filter and analyze execution trends across multiple traces.
For example, a team might maintain separate projects such as:
- Customer Support AI – Handling traces related to automated support queries.
- Content Generation AI – Managing traces for LLM-powered writing assistants.
- Legal AI Assistant – Tracking execution flows for contract analysis tasks.