1. Installation
Install the traceAI-litellm and litellm packages.
pip install traceAI-litellm
pip install litellm
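A quick optional check that both packages import from the active environment (the module names match those used later in this guide):
import litellm
import traceai_litellm
print("litellm and traceAI-litellm are installed")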
2. Set Environment Variables
Set up your environment variables to authenticate with both FutureAGI and OpenAI.
import os
os.environ["FI_API_KEY"] = "your-futureagi-api-key"
os.environ["FI_SECRET_KEY"] = "your-futureagi-secret-key"
os.environ["OPENAI_API_KEY"] = "your-openai-api-key"
3. Initialize Trace Provider
Set up the trace provider to create a new project in FutureAGI and establish the telemetry data pipelines.
from fi_instrumentation import register
from fi_instrumentation.fi_types import ProjectType
trace_provider = register(
    project_type=ProjectType.OBSERVE,
    project_name="openai_project",
)
4. Instrument LiteLLM
Initialize the LiteLLM instrumentor to enable automatic tracing.
from traceai_litellm import LiteLLMInstrumentor
LiteLLMInstrumentor().instrument(tracer_provider=trace_provider)
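If the instrumentor follows the usual OpenTelemetry instrumentor pattern, tracing can also be turned off again; the uninstrument() call below is an assumption and may differ in your traceAI version:
LiteLLMInstrumentor().uninstrument()  # assumed counterpart to instrument(); removes the LiteLLM tracing hooks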
5. Run LiteLLM
Run LiteLLM as you normally would. The instrumentor will automatically trace each call and send the telemetry data to the FutureAGI platform.
import litellm
response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "What's the capital of India?"}],
)
print(response.choices[0].message.content)
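Other LiteLLM call styles are typically traced the same way. A minimal async sketch, assuming the same environment variables and project setup as above (litellm.acompletion is LiteLLM's async counterpart to completion; the model and prompt are placeholders):
import asyncio
import litellm

async def main():
    # Async completion; the instrumentor records this call like the synchronous one
    response = await litellm.acompletion(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Name three major rivers in India."}],
    )
    print(response.choices[0].message.content)

asyncio.run(main())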