LlamaIndex Workflows is the part of the LlamaIndex package designed specifically for building event-driven agents.

Our LlamaIndexInstrumentor automatically captures traces for LlamaIndex Workflows agents. If you’ve already enabled that instrumentor, you do not need to complete the steps below.

1. Installation

First, install the traceAI and LlamaIndex packages.

pip install traceAI-llamaindex
pip install llama-index

2. Set Environment Variables

Set up your environment variables to authenticate with FutureAGI.

import os

os.environ["FI_API_KEY"] = "your-futureagi-api-key"
os.environ["FI_SECRET_KEY"] = "your-futureagi-secret-key"
os.environ["OPENAI_API_KEY"] = "your-openai-api-key"

3. Initialize Trace Provider

Set up the trace provider to create a new project in FutureAGI and establish the telemetry data pipeline.

from fi_instrumentation import register
from fi_instrumentation.fi_types import ProjectType

trace_provider = register(
    project_type=ProjectType.OBSERVE,
    project_name="llamaindex_workflows_project",
)

4. Instrument Your Project

Instrument your project with the LlamaIndex instrumentor. It traces both LlamaIndex Workflows calls and calls to the general LlamaIndex package.

from traceai_llamaindex import LlamaIndexInstrumentor

LlamaIndexInstrumentor().instrument(tracer_provider=trace_provider)
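
If you later need to switch tracing off (for example, in tests), instrumentors built on the standard OpenTelemetry BaseInstrumentor interface also expose an uninstrument() method. Assuming LlamaIndexInstrumentor follows that convention, a minimal sketch:

# Assumption: LlamaIndexInstrumentor follows the OpenTelemetry
# BaseInstrumentor convention and supports uninstrument().
LlamaIndexInstrumentor().uninstrument()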

5. Run LlamaIndex Workflows

Run your LlamaIndex Workflows code as you normally would. The instrumentor automatically traces it and sends the telemetry data to the platform.

import asyncio
from llama_index.core.workflow import (
    Event,
    StartEvent,
    StopEvent,
    Workflow,
    step,
)
from llama_index.llms.openai import OpenAI

class JokeEvent(Event):
    # Custom event that carries the generated joke between steps
    joke: str

class JokeFlow(Workflow):
    llm = OpenAI()  # shared LLM client used by both steps

    @step
    async def generate_joke(self, ev: StartEvent) -> JokeEvent:
        # StartEvent carries the keyword arguments passed to w.run()
        topic = ev.topic

        prompt = f"Write your best joke about {topic}."
        response = await self.llm.acomplete(prompt)
        return JokeEvent(joke=str(response))

    @step
    async def critique_joke(self, ev: JokeEvent) -> StopEvent:
        joke = ev.joke

        prompt = f"Give a thorough analysis and critique of the following joke: {joke}"
        response = await self.llm.acomplete(prompt)
        # Returning a StopEvent ends the workflow; its result is what w.run() resolves to
        return StopEvent(result=str(response))


async def main():
    w = JokeFlow(timeout=60, verbose=False)
    result = await w.run(topic="pirates")
    print(str(result))


if __name__ == "__main__":
    asyncio.run(main())
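
Note that asyncio.run() fails in environments that already run an event loop, such as Jupyter notebooks. There you can await the coroutine directly, or apply nest_asyncio (a third-party package commonly used alongside LlamaIndex) so that asyncio.run() works as-is:

# In a notebook, either await the coroutine directly:
#     result = await main()
# or patch the running loop so asyncio.run() works:
import nest_asyncio
nest_asyncio.apply()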
