LangChain

Integrate LangChain with Future AGI for auto-instrumented tracing. Capture chain executions, tool calls, and LLM interactions with traceAI-langchain.

1. Installation

First, install the traceAI package and the required LangChain packages.

Python:

pip install traceAI-langchain
pip install langchain_openai

TypeScript:

npm install @traceai/langchain @traceai/fi-core @opentelemetry/instrumentation \
  @langchain/openai @langchain/core

2. Set Environment Variables

Set up your environment variables to authenticate with both FutureAGI and OpenAI.

Python:

import os

os.environ["OPENAI_API_KEY"] = "your-openai-api-key"
os.environ["FI_API_KEY"] = "your-futureagi-api-key"
os.environ["FI_SECRET_KEY"] = "your-futureagi-secret-key"

TypeScript:

process.env.OPENAI_API_KEY = "your-openai-api-key";
process.env.FI_API_KEY = "your-futureagi-api-key";
process.env.FI_SECRET_KEY = "your-futureagi-secret-key";
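Before running anything, it can help to fail fast if a credential is missing. The sketch below is a plain-Python convenience, not part of the traceAI SDK; `missing_env_vars` is a hypothetical helper name.

```python
import os

# Variables this guide expects (OpenAI + Future AGI credentials).
REQUIRED_VARS = ("OPENAI_API_KEY", "FI_API_KEY", "FI_SECRET_KEY")

def missing_env_vars(environ=os.environ):
    """Return the names of required variables that are unset or empty."""
    return [name for name in REQUIRED_VARS if not environ.get(name)]
```

Call `missing_env_vars()` at startup and raise if the returned list is non-empty.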

3. Initialize Trace Provider

Set up the trace provider to create a new project in FutureAGI and establish the telemetry data pipeline.

Python:

from fi_instrumentation import register
from fi_instrumentation.fi_types import ProjectType

trace_provider = register(
    project_type=ProjectType.OBSERVE,
    project_name="langchain_project",
)

TypeScript:

import { register, ProjectType } from "@traceai/fi-core";

const tracerProvider = register({
  project_type: ProjectType.OBSERVE,
  project_name: "langchain_project",
});

4. Instrument your Project

Initialize the LangChain instrumentor to enable automatic tracing. This step ensures that all LangChain interactions are tracked and monitored.

Python:

from traceai_langchain import LangChainInstrumentor

LangChainInstrumentor().instrument(tracer_provider=trace_provider)

TypeScript:

import { LangChainInstrumentation } from "@traceai/langchain";
import * as CallbackManagerModule from "langchain/callbacks";

// Pass the custom tracer provider to the instrumentation
const lcInstrumentation = new LangChainInstrumentation({
  tracerProvider: tracerProvider,
});

// Manually instrument the LangChain module
lcInstrumentation.manuallyInstrument(CallbackManagerModule);

5. Create LangChain Components

Set up your LangChain pipeline as you normally would; the instrumentor automatically traces each run and sends the telemetry to Future AGI.

Python:

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_template("{x} {y} {z}?").partial(x="why is", z="blue")
chain = prompt | ChatOpenAI(model="gpt-3.5-turbo")

result = chain.invoke({"y": "sky"})

print(f"Response: {result.content}")

TypeScript:

import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";

const prompt = ChatPromptTemplate.fromTemplate("{x} {y} {z}?").partial({ x: "why is", z: "blue" });
const chain = prompt.pipe(new ChatOpenAI({ model: "gpt-3.5-turbo" }));

const result = await chain.invoke({ y: "sky" });
console.log("Response:", result.content);
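The `partial(...)` call pre-binds `x` and `z`, so only `y` has to be supplied at invoke time. A plain-Python analogue of that behavior (a hypothetical helper for illustration, not a LangChain API):

```python
def make_prompt(template, **fixed):
    """Return a renderer with some template variables pre-bound,
    mimicking what ChatPromptTemplate.partial() does."""
    def render(**rest):
        # Merge the pre-bound values with those supplied at call time.
        return template.format(**fixed, **rest)
    return render

ask = make_prompt("{x} {y} {z}?", x="why is", z="blue")
print(ask(y="sky"))  # why is sky blue?
```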