1. Installation

First, install the FutureAGI package to access the observability framework:

pip install futureagi

2. Environment Configuration

Set up your environment variables to authenticate with both OpenAI and FutureAGI services. These credentials enable:

  • Secure access to OpenAI’s language models
  • Authentication with FutureAGI’s observability platform
  • Encrypted telemetry data transmission

import os
os.environ["OPENAI_API_KEY"] = "your-openai-api-key"
os.environ["FI_API_KEY"] = "your-futureagi-api-key"
os.environ["FI_SECRET_KEY"] = "your-futureagi-secret-key"
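Hard-coding keys in source files is easy to leak. As a minimal sketch (standard library only; the variable names are taken from the step above), you can validate that all required credentials are present before initializing anything:

```python
import os

# Variable names assumed from the configuration step above.
REQUIRED_VARS = ["OPENAI_API_KEY", "FI_API_KEY", "FI_SECRET_KEY"]

def missing_credentials() -> list[str]:
    """Return the names of any required environment variables that are unset."""
    return [name for name in REQUIRED_VARS if not os.environ.get(name)]

# Example: fail fast before setting up the tracer.
# if missing_credentials():
#     raise RuntimeError(f"Missing credentials: {missing_credentials()}")
```

In practice you would load these from a secrets manager or a local `.env` file rather than setting them inline.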

3. Configure Evaluation Tags

Define evaluation criteria for monitoring LLM responses. Evaluation tags allow you to:

  • Define custom evaluation criteria
  • Set up automated response quality checks
  • Track model performance metrics

from fi.integrations.otel.types import EvalName, EvalSpanKind, EvalTag, EvalTagType

eval_tags = [
    EvalTag(
        eval_name=EvalName.DETERMINISTIC_EVALS,
        value=EvalSpanKind.TOOL,
        type=EvalTagType.OBSERVATION_SPAN,
        config={
            "multi_choice": False,
            "choices": ["Yes", "No"],
            "rule_prompt": "Evaluate if the response is correct",
        },
        custom_eval_name="det_eval_langchain_1"
    )
]

4. Initialize Trace Provider

Set up the trace provider to establish the observability pipeline. The trace provider:

  • Creates a new project in FutureAGI
  • Establishes telemetry data pipelines
  • Configures version tracking
  • Sets up evaluation frameworks

from fi.integrations.otel import register
from fi.integrations.otel.types import ProjectType

trace_provider = register(
    endpoint="https://api.futureagi.com/tracer/observation-span/create_otel_span/",
    project_type=ProjectType.EXPERIMENT,
    project_name="langchain_app",
    project_version_name="v1",
    eval_tags=eval_tags
)

5. Configure LangChain Instrumentation

Initialize the LangChain instrumentor to enable automatic tracing.

from fi.integrations.otel import LangChainInstrumentor

LangChainInstrumentor().instrument(tracer_provider=trace_provider)

6. Install Required Dependencies

Install the necessary LangChain components required for your project.

pip install langchain_openai

7. Create LangChain Components

Set up your LangChain pipeline with built-in observability.

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_template("{x} {y} {z}?").partial(x="why is", z="blue")
chain = prompt | ChatOpenAI(model_name="gpt-3.5-turbo")
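The `.partial(...)` call pre-fills two of the three template slots (`x` and `z`), so only `y` needs to be supplied at invocation time. A rough standard-library analogue of that behavior (an illustration only, not LangChain's actual implementation):

```python
def make_partial_template(template: str, **fixed):
    """Pre-fill some slots of a format string; the rest are supplied later."""
    def render(**remaining):
        return template.format(**fixed, **remaining)
    return render

# Mirrors the prompt above: x and z are fixed, y is filled per call.
render = make_partial_template("{x} {y} {z}?", x="why is", z="blue")
print(render(y="sky"))  # -> why is sky blue?
```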

8. Execute

Run your LangChain application.

def run_chain():
    try:
        result = chain.invoke({"y": "sky"})
        # result is an AIMessage; .content holds the model's text reply
        print(f"Response: {result.content}")
    except Exception as e:
        print(f"Error executing chain: {e}")

if __name__ == "__main__":
    run_chain()
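Transient failures (rate limits, timeouts) are common when calling hosted models. One general pattern is to wrap the invocation in a retry with exponential backoff; a minimal sketch using only the standard library (the `chain` object is assumed from the steps above):

```python
import time

def invoke_with_retry(invoke, inputs, retries=3, base_delay=1.0):
    """Call invoke(inputs), retrying with exponential backoff on failure."""
    for attempt in range(retries):
        try:
            return invoke(inputs)
        except Exception:
            if attempt == retries - 1:
                raise  # exhausted all attempts; surface the last error
            time.sleep(base_delay * (2 ** attempt))

# Usage (assumes the chain built above):
# result = invoke_with_retry(chain.invoke, {"y": "sky"})
```

Because instrumentation is already active, each attempt will appear as its own traced invocation in the observability platform.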