
Python SDK

The FutureAGI Python SDK provides a simple, DataFrame-based interface for logging annotations against your traces. Install the package, authenticate, and start annotating in minutes.

Installation

pip install futureagi

Authentication

from fi.annotations import Annotation

client = Annotation(
    fi_api_key="YOUR_API_KEY",
    fi_secret_key="YOUR_SECRET_KEY",
)
You can also set FI_API_KEY and FI_SECRET_KEY as environment variables. The client picks them up automatically when no arguments are passed.
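For example, you can export both variables in your shell before launching Python (a minimal sketch; substitute your real keys):

```shell
# Set credentials once per shell session; Annotation() with no
# arguments will read these environment variables.
export FI_API_KEY="YOUR_API_KEY"
export FI_SECRET_KEY="YOUR_SECRET_KEY"
```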

Log Annotations

The log_annotations() method accepts a pandas DataFrame where each row represents one annotation record. Columns follow the naming convention annotation.<label_name>.<type>.

Column naming convention

| Column Pattern | Label Type | Example Value |
| --- | --- | --- |
| annotation.&lt;name&gt;.text | Text | "good response" |
| annotation.&lt;name&gt;.label | Categorical | "positive" |
| annotation.&lt;name&gt;.score | Numeric | 8.5 |
| annotation.&lt;name&gt;.rating | Star (1-5) | 4 |
| annotation.&lt;name&gt;.thumbs | Thumbs Up/Down | True |
| annotation.notes | Notes (shared) | "Great response!" |
| context.span_id | Span ID (required) | "span_abc123" |
Every row must include a context.span_id column. This links the annotation to a specific span in your Observe project.

Full example

import pandas as pd
from fi.annotations import Annotation

client = Annotation(
    fi_api_key="YOUR_API_KEY",
    fi_secret_key="YOUR_SECRET_KEY",
)

df = pd.DataFrame({
    "context.span_id": ["span_abc123", "span_def456"],
    "annotation.quality.text": ["Excellent response", "Needs improvement"],
    "annotation.sentiment.label": ["positive", "negative"],
    "annotation.accuracy.score": [9.0, 3.5],
    "annotation.rating.rating": [5, 2],
    "annotation.helpful.thumbs": [True, False],
    "annotation.notes": ["Top quality", "Hallucinated facts"],
})

response = client.log_annotations(df, project_name="My Project")
print(f"Created: {response.annotations_created}, Errors: {response.errors_count}")

Response object

| Field | Type | Description |
| --- | --- | --- |
| message | str | Summary message |
| annotations_created | int | New annotations created |
| annotations_updated | int | Existing annotations updated |
| notes_created | int | Notes created |
| succeeded_count | int | Successful records |
| errors_count | int | Failed records |
| errors | list | Error details per failed record |
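A small helper can turn these fields into a readable summary. This is a sketch, not part of the SDK; it assumes only the attributes in the table above, and prints entries of errors as-is since their exact shape is not specified here:

```python
def report(response) -> str:
    """Summarize a log_annotations response.

    Works with any object exposing the fields documented above:
    succeeded_count, errors_count, annotations_created,
    annotations_updated, and errors.
    """
    summary = (f"{response.succeeded_count} ok, "
               f"{response.errors_count} failed "
               f"({response.annotations_created} created, "
               f"{response.annotations_updated} updated)")
    if response.errors_count:
        # The shape of each error entry is not documented here,
        # so print whatever the server returned.
        for err in response.errors:
            print(f"  error: {err}")
    return summary
```

Call it as `print(report(response))` after `log_annotations()` to surface partial failures.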

Get Labels

Retrieve all annotation labels configured for a project. Use the returned label names and IDs when constructing your DataFrame columns.
labels = client.get_labels(project_id="proj_123")

for label in labels:
    print(f"{label.name} ({label.type}): {label.id}")
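To keep column names aligned with the annotation.&lt;name&gt;.&lt;type&gt; convention, a small helper (hypothetical, not part of the SDK) can build them from the label metadata that get_labels() returns:

```python
def column_for(label_name: str, label_type: str) -> str:
    """Build a DataFrame column name following the
    annotation.<label_name>.<type> convention.

    Assumes label types map to the documented suffixes:
    text, label, score, rating, thumbs.
    """
    # Lowercasing is an assumption, in case the API returns
    # capitalized type names such as "Text".
    return f"annotation.{label_name}.{label_type.lower()}"
```

For example, `column_for(label.name, label.type)` while iterating the result of `get_labels()` yields ready-to-use column names.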

List Projects

List all projects accessible to your API key. Filter by project type to find your Observe projects.
projects = client.list_projects(project_type="observe")

for p in projects:
    print(f"{p.name}: {p.id}")

Annotation Queues

For queue management — creating queues, adding items, submitting annotations, and exporting results — use the REST API directly or the JavaScript SDK, which provides full queue support. See the Queues API reference for details.

Best Practices

  • Batch annotations — Group 100–500 records per DataFrame for optimal throughput.
  • Consistent span IDs — Ensure span IDs match traces in your Observe project. Invalid IDs result in per-row errors.
  • Idempotent notes — Duplicate notes for the same span are silently skipped.
  • Error handling — Always check response.errors_count and inspect response.errors for partial failures.
  • Label IDs — Use get_labels() to fetch label names and IDs before constructing your DataFrame.
Annotations are immutable once submitted. Double-check your DataFrame before calling log_annotations().
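The batching advice above can be sketched with a plain pandas slicing helper (a sketch; the `client` and `df` names assume the setup from the full example):

```python
import pandas as pd

def iter_batches(df: pd.DataFrame, batch_size: int = 500):
    """Yield consecutive slices of df with at most batch_size rows."""
    for start in range(0, len(df), batch_size):
        yield df.iloc[start:start + batch_size]

# Usage sketch, assuming an authenticated client as shown earlier:
# for batch in iter_batches(df):
#     response = client.log_annotations(batch, project_name="My Project")
#     if response.errors_count:
#         print(response.errors)
```

Each slice stays within the recommended 100–500 record range, and checking errors_count per batch localizes partial failures.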

Next steps

JavaScript SDK

Full queue management, scores, and annotation support in JavaScript/TypeScript.

Scores API

Query and manage annotation scores via the REST API.

Bulk Annotation API

Upload annotations in bulk using the REST API directly.