Ollama
Set up auto-instrumentation for Ollama with Future AGI tracing. Use traceAI-openai to capture spans from Ollama's OpenAI-compatible local LLM API.
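Before wiring up tracing, make sure Ollama is installed, the local server is running, and the model used in the example below has been pulled. A quick sketch with the Ollama CLI (the model name is simply the one used later in this guide):
# Download the model and start the local server, if it is not already running (listens on port 11434 by default)
ollama pull llama3.2:1b
ollama serve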
1. Installation
First, install the traceAI-openai package to access the observability framework.
pip install traceAI-openai
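The example in step 5 also uses the official openai Python SDK. traceAI-openai may already pull it in as a dependency; if not, install it explicitly:
pip install openai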
2. Set Environment Variables
Set up your environment variables to authenticate with FutureAGI.
import os
os.environ["FI_API_KEY"] = "your-futureagi-api-key"
os.environ["FI_SECRET_KEY"] = "your-futureagi-secret-key"
3. Initialize Trace Provider
Set up the trace provider to create a new project in FutureAGI and establish the telemetry data pipelines.
from fi_instrumentation import register
from fi_instrumentation.fi_types import ProjectType
trace_provider = register(
    project_type=ProjectType.OBSERVE,
    project_name="OLLAMA 3.2",
)
4. Instrument your Project
Use the OpenAI Instrumentor to instrument your project, since the OpenAI client is used to interact with Ollama. This step ensures that all interactions are traced and monitored. If you use a different client to interact with Ollama, use that client’s Instrumentor instead.
from traceai_openai import OpenAIInstrumentor
OpenAIInstrumentor().instrument(tracer_provider=trace_provider)
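If you need to turn tracing off later (for example in tests), instrumentors built on the standard OpenTelemetry BaseInstrumentor interface expose an uninstrument() method; the sketch below assumes traceai_openai follows that convention.
# Detach the OpenAI instrumentation when tracing is no longer needed (assumed API)
OpenAIInstrumentor().uninstrument()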
5. Interact with Ollama
Interact with Ollama as you normally would. Our Instrumentor will automatically trace the calls and send the telemetry data to our platform. Make sure that Ollama is running and accessible from your project.
from openai import OpenAI
client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
    api_key="ollama",  # required by the client, but ignored by Ollama
)
response = client.chat.completions.create(
    model="llama3.2:1b",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is OpenAI?"},
    ],
)
print(response.choices[0].message.content)
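Streaming requests go through the same client, so they are captured by the same instrumentation. A minimal sketch using the OpenAI SDK's standard streaming interface against the same local model:
# Stream tokens from the local Ollama model as they are generated
stream = client.chat.completions.create(
    model="llama3.2:1b",
    messages=[{"role": "user", "content": "Explain tracing in one sentence."}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()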