1. Installation

First, install the traceAI LangChain package along with the required LangChain OpenAI package.

pip install traceAI-langchain
pip install langchain-openai

2. Set Environment Variables

Set up your environment variables to authenticate with both FutureAGI and OpenAI.

import os

os.environ["OPENAI_API_KEY"] = "your-openai-api-key"
os.environ["FI_API_KEY"] = "your-futureagi-api-key"
os.environ["FI_SECRET_KEY"] = "your-futureagi-secret-key"

3. Initialize Trace Provider

Set up the trace provider to create a new project in FutureAGI and establish the telemetry data pipelines.

from fi_instrumentation import register
from fi_instrumentation.fi_types import ProjectType

trace_provider = register(
    project_type=ProjectType.OBSERVE,
    project_name="langchain_project",
)

4. Instrument your Project

Initialize the LangChain instrumentor to enable automatic tracing. This step ensures that all interactions with LangChain are tracked and monitored.

from traceai_langchain import LangChainInstrumentor

LangChainInstrumentor().instrument(tracer_provider=trace_provider)
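
If you later need to stop tracing (for example, when tearing down a test), the instrumentor follows the standard OpenTelemetry instrumentor pattern, so an uninstrument() call should undo the hooks. Treat this as a sketch based on that convention:

# Remove the LangChain tracing hooks again (assumes the standard
# OpenTelemetry BaseInstrumentor interface).
LangChainInstrumentor().uninstrument()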

5. Create LangChain Components

Set up your LangChain pipeline as you normally would. The instrumentor will automatically trace your chains and send the telemetry data to the FutureAGI platform.

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

# Build a simple prompt | model chain; the "x" and "z" slots are pre-filled via partial().
prompt = ChatPromptTemplate.from_template("{x} {y} {z}?").partial(x="why is", z="blue")
chain = prompt | ChatOpenAI(model="gpt-3.5-turbo")

# invoke() returns an AIMessage; print its text content.
result = chain.invoke({"y": "sky"})

print(f"Response: {result.content}")
