1. Installation
Install the traceAI and LlamaIndex packages.
pip install traceAI-llamaindex
pip install llama-index
2. Set Environment Variables
Set up your environment variables to authenticate with FutureAGI.
import os
os.environ["FI_API_KEY"] = "your-futureagi-api-key"
os.environ["FI_SECRET_KEY"] = "your-futureagi-secret-key"
os.environ["OPENAI_API_KEY"] = "your-openai-api-key"
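Hardcoding keys works for a quick test, but in practice you may prefer to read them from the existing environment and fail fast if any are missing. A minimal sketch (the variable names match the snippet above; the `check_credentials` helper is illustrative, not part of the SDK):

```python
import os

# The three credentials the snippet above sets by hand.
REQUIRED_KEYS = ["FI_API_KEY", "FI_SECRET_KEY", "OPENAI_API_KEY"]

def check_credentials() -> list[str]:
    """Return the names of any required environment variables that are unset."""
    return [key for key in REQUIRED_KEYS if not os.environ.get(key)]

missing = check_credentials()
if missing:
    print(f"Set these before registering: {', '.join(missing)}")
```

Running this before `register()` surfaces configuration problems immediately instead of at the first traced request.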
3. Initialize Trace Provider
Set up the trace provider to create a new project in FutureAGI and establish the telemetry data pipelines.
from fi_instrumentation import register
from fi_instrumentation.fi_types import ProjectType
trace_provider = register(
project_type=ProjectType.OBSERVE,
project_name="llamaindex_project",
)
4. Instrument your Project
Initialize the LlamaIndex instrumentor to enable automatic tracing. This step ensures that all LlamaIndex interactions are tracked and monitored.
from traceai_llamaindex import LlamaIndexInstrumentor
LlamaIndexInstrumentor().instrument(tracer_provider=trace_provider)
5. Create LlamaIndex Components
Set up your LlamaIndex components as you normally would. Our instrumentor will automatically trace them and send the telemetry data to our platform.
from llama_index.agent.openai import OpenAIAgent
from llama_index.core import Settings
from llama_index.core.tools import FunctionTool
from llama_index.llms.openai import OpenAI
def multiply(a: int, b: int) -> int:
"""Multiply two integers and return the result."""
return a * b
def add(a: int, b: int) -> int:
"""Add two integers and return the result."""
return a + b
multiply_tool = FunctionTool.from_defaults(fn=multiply)
add_tool = FunctionTool.from_defaults(fn=add)
Settings.llm = OpenAI(model="gpt-3.5-turbo")
agent = OpenAIAgent.from_tools([multiply_tool, add_tool])
response = agent.query("What is (121 * 3) + 42?")
print(response)
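The agent is expected to decompose the question into calls to the `multiply` and `add` tools defined above, so you can sanity-check the arithmetic the traced tool calls should produce directly:

```python
def multiply(a: int, b: int) -> int:
    """Multiply two integers and return the result."""
    return a * b

def add(a: int, b: int) -> int:
    """Add two integers and return the result."""
    return a + b

# The agent should resolve "(121 * 3) + 42" via these two tool calls.
result = add(multiply(121, 3), 42)
print(result)  # → 405
```

Comparing this value against the agent's response is a quick way to confirm the tool calls in the trace did what you expect.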