Python SDK
Annotate traces and manage annotation queues programmatically using the FutureAGI Python SDK.
The FutureAGI Python SDK provides a simple, DataFrame-based interface for logging annotations against your traces. Install the package, authenticate, and start annotating in minutes.
Installation
```bash
pip install futureagi
# or
pip3 install futureagi
```

Authentication
```python
from fi.annotations import Annotation

client = Annotation(
    fi_api_key="YOUR_API_KEY",
    fi_secret_key="YOUR_SECRET_KEY",
)
```
Tip
You can also set FI_API_KEY and FI_SECRET_KEY as environment variables. The client picks them up automatically when no arguments are passed.
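The fallback order can be sketched in plain Python. This is an illustration of the behaviour described above, not the client's actual internals; `resolve_key` is a hypothetical helper.

```python
import os

# Hypothetical helper illustrating the credential fallback: an explicit
# argument wins, otherwise the environment variable is used.
def resolve_key(explicit, env_name):
    """Prefer an explicitly passed key; otherwise fall back to the environment."""
    if explicit is not None:
        return explicit
    return os.environ.get(env_name)

os.environ["FI_API_KEY"] = "key-from-env"
print(resolve_key(None, "FI_API_KEY"))            # key-from-env
print(resolve_key("key-from-arg", "FI_API_KEY"))  # key-from-arg
```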
Log Annotations
The log_annotations() method accepts a pandas DataFrame where each row represents one annotation record. Columns follow the naming convention annotation.<label_name>.<type>.
Column naming convention
| Column Pattern | Label Type | Example Value |
|---|---|---|
| `annotation.<name>.text` | Text | `"good response"` |
| `annotation.<name>.label` | Categorical | `"positive"` |
| `annotation.<name>.score` | Numeric | `8.5` |
| `annotation.<name>.rating` | Star (1–5) | `4` |
| `annotation.<name>.thumbs` | Thumbs Up/Down | `True` |
| `annotation.notes` | Notes (shared) | `"Great response!"` |
| `context.span_id` | Span ID (required) | `"span_abc123"` |
Note
Every row must include a context.span_id column. This links the annotation to a specific span in your Observe project.
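Because a missing span ID fails that row, it can help to pre-check the DataFrame before submitting. A minimal sketch, assuming a pandas DataFrame shaped as above (`missing_span_ids` is a hypothetical helper, not part of the SDK):

```python
import pandas as pd

# Hypothetical pre-flight check: find rows that would fail the span-ID
# requirement before calling log_annotations().
def missing_span_ids(df: pd.DataFrame) -> list:
    """Return the indices of rows whose context.span_id is absent or blank."""
    if "context.span_id" not in df.columns:
        raise ValueError("DataFrame must include a context.span_id column")
    bad = df["context.span_id"].isna() | (
        df["context.span_id"].astype(str).str.strip() == ""
    )
    return df.index[bad].tolist()

df = pd.DataFrame({
    "context.span_id": ["span_abc123", None, ""],
    "annotation.quality.text": ["ok", "missing id", "blank id"],
})
print(missing_span_ids(df))  # [1, 2]
```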
Full example
```python
import pandas as pd
from fi.annotations import Annotation

client = Annotation(
    fi_api_key="YOUR_API_KEY",
    fi_secret_key="YOUR_SECRET_KEY",
)

df = pd.DataFrame({
    "context.span_id": ["span_abc123", "span_def456"],
    "annotation.quality.text": ["Excellent response", "Needs improvement"],
    "annotation.sentiment.label": ["positive", "negative"],
    "annotation.accuracy.score": [9.0, 3.5],
    "annotation.rating.rating": [5, 2],
    "annotation.helpful.thumbs": [True, False],
    "annotation.notes": ["Top quality", "Hallucinated facts"],
})

response = client.log_annotations(df, project_name="My Project")
print(f"Created: {response.annotations_created}, Errors: {response.errors_count}")
```
Response object
| Field | Type | Description |
|---|---|---|
| `message` | `str` | Summary message |
| `annotations_created` | `int` | New annotations created |
| `annotations_updated` | `int` | Existing annotations updated |
| `notes_created` | `int` | Notes created |
| `succeeded_count` | `int` | Successful records |
| `errors_count` | `int` | Failed records |
| `errors` | `list` | Error details per failed record |
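A batch can partially fail, so checking the counts matters. The sketch below uses a stand-in dataclass mirroring the fields above purely for illustration; in practice you would pass the object returned by `client.log_annotations()`.

```python
from dataclasses import dataclass, field

# Stand-in mirroring the documented response fields (illustration only).
@dataclass
class LogResponse:
    message: str = ""
    annotations_created: int = 0
    annotations_updated: int = 0
    notes_created: int = 0
    succeeded_count: int = 0
    errors_count: int = 0
    errors: list = field(default_factory=list)

def summarize(resp: LogResponse) -> str:
    """Flag partial failures instead of assuming the whole batch succeeded."""
    if resp.errors_count:
        return f"{resp.errors_count} record(s) failed: {resp.errors}"
    return f"all {resp.succeeded_count} record(s) succeeded"

print(summarize(LogResponse(succeeded_count=2)))
print(summarize(LogResponse(succeeded_count=1, errors_count=1,
                            errors=["invalid span id"])))
```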
Get Labels
Retrieve all annotation labels configured for a project. Use the returned label names and types when constructing your DataFrame columns.
```python
labels = client.get_labels(project_id="proj_123")
for label in labels:
    print(f"{label.name} ({label.type}): {label.id}")
```
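Label metadata maps directly onto the `annotation.<name>.<type>` column convention. A minimal sketch; the type strings in `TYPE_SUFFIX` are assumptions, so check them against what `get_labels()` actually returns in your project.

```python
# Hypothetical helper: derive DataFrame column names from label metadata.
# The type-name keys below are assumptions, not confirmed SDK values.
TYPE_SUFFIX = {
    "text": "text",
    "categorical": "label",
    "numeric": "score",
    "star": "rating",
    "thumbs": "thumbs",
}

def column_for(label_name: str, label_type: str) -> str:
    """Map a label to its annotation.<name>.<type> column name."""
    return f"annotation.{label_name}.{TYPE_SUFFIX[label_type.lower()]}"

print(column_for("quality", "text"))           # annotation.quality.text
print(column_for("sentiment", "categorical"))  # annotation.sentiment.label
```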
List Projects
List all projects accessible to your API key. Filter by project type to find your Observe projects.
```python
projects = client.list_projects(project_type="observe")
for p in projects:
    print(f"{p.name}: {p.id}")
```
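Since `get_labels()` takes a project ID while you usually know the project by name, a small lookup helper bridges the two. A sketch; `find_project_id` is hypothetical, and `SimpleNamespace` stands in for the SDK's project objects, which expose `.name` and `.id` as shown above.

```python
from types import SimpleNamespace

# Hypothetical glue: resolve a project's id by name so it can be passed
# to get_labels(project_id=...).
def find_project_id(projects, name: str) -> str:
    for p in projects:
        if p.name == name:
            return p.id
    raise LookupError(f"no project named {name!r}")

projects = [
    SimpleNamespace(name="My Project", id="proj_123"),
    SimpleNamespace(name="Other", id="proj_456"),
]
print(find_project_id(projects, "My Project"))  # proj_123
```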
Annotation Queues
Note
For queue management — creating queues, adding items, submitting annotations, and exporting results — use the REST API directly or the JavaScript SDK which provides full queue support. See the Queues API reference for details.
Best Practices
- Batch annotations — Group 100–500 records per DataFrame for optimal throughput.
- Consistent span IDs — Ensure span IDs match traces in your Observe project. Invalid IDs result in per-row errors.
- Idempotent notes — Duplicate notes for the same span are silently skipped.
- Error handling — Always check `response.errors_count` and inspect `response.errors` for partial failures.
- Label IDs — Use `get_labels()` to fetch label names and IDs before constructing your DataFrame.
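The batching advice can be sketched as a simple chunking generator. This is an illustration using pandas slicing, not an SDK feature; the batch size of 500 reflects the guideline above.

```python
import pandas as pd

# Split a large DataFrame into chunks of at most `size` rows, so each
# chunk can be passed to log_annotations() separately.
def iter_batches(df: pd.DataFrame, size: int = 500):
    for start in range(0, len(df), size):
        yield df.iloc[start:start + size]

df = pd.DataFrame({"context.span_id": [f"span_{i}" for i in range(1200)]})
sizes = [len(chunk) for chunk in iter_batches(df, size=500)]
print(sizes)  # [500, 500, 200]
```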
Warning
Annotations are immutable once submitted. Double-check your DataFrame before calling log_annotations().