1. Installation

Install the FutureAGI package to access the observability framework.

pip install futureagi

2. Environment Configuration

Set up your environment variables to authenticate with FutureAGI services. These credentials enable:

  • Authentication with FutureAGI’s observability platform
  • Encrypted telemetry data transmission
import os
os.environ["FI_API_KEY"] = "your-futureagi-api-key"
os.environ["FI_SECRET_KEY"] = "your-futureagi-secret-key"

3. Configure Evaluation Tags

Define evaluation criteria for monitoring LLM responses. Evaluation tags allow you to:

  • Define custom evaluation criteria
  • Set up automated response quality checks
  • Track model performance metrics
from fi.integrations.otel.types import EvalName, EvalSpanKind, EvalTag, EvalTagType

eval_tags = [
    EvalTag(
        eval_name=EvalName.DETERMINISTIC_EVALS,
        value=EvalSpanKind.TOOL,
        type=EvalTagType.OBSERVATION_SPAN,
        config={
            "multi_choice": False,
            "choices": ["Yes", "No"],
            "rule_prompt": "Evaluate if the response is correct",
        },
        custom_eval_name="det_eval_vertexai_1"
    )
]
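
Because eval_tags is a plain list, you can attach several checks to the same project. A sketch reusing the enums above with a second, hypothetical criterion (the rule prompt and custom name are illustrative):

# Hypothetical second check; only the rule prompt and custom name differ.
eval_tags.append(
    EvalTag(
        eval_name=EvalName.DETERMINISTIC_EVALS,
        value=EvalSpanKind.TOOL,
        type=EvalTagType.OBSERVATION_SPAN,
        config={
            "multi_choice": False,
            "choices": ["Yes", "No"],
            "rule_prompt": "Evaluate if the response is concise",
        },
        custom_eval_name="det_eval_vertexai_2",
    )
)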

4. Initialize Trace Provider

Set up the trace provider to establish the observability pipeline. The trace provider:

  • Creates a new project in FutureAGI
  • Establishes telemetry data pipelines
  • Configures version tracking
  • Sets up evaluation frameworks
from fi.integrations.otel import register
from fi.integrations.otel.types import ProjectType

trace_provider = register(
    project_type=ProjectType.EXPERIMENT,
    project_name="vertex_ai_app",
    project_version_name="v1",
    eval_tags=eval_tags
)

5. Configure Vertex AI Instrumentation

Initialize the Vertex AI instrumentor to enable automatic tracing.
from fi.integrations.otel import VertexAIInstrumentor

VertexAIInstrumentor().instrument(tracer_provider=trace_provider)
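
As with other OpenTelemetry instrumentors, instrument() patches the Vertex AI client for the whole process, so call it once, before any model traffic you want traced.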

6. Install Required Dependencies

Install the Vertex AI SDK, which the example below uses.

pip install vertexai
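
The Vertex AI SDK authenticates with Google Cloud through Application Default Credentials; if you have not configured them for local development, the usual route is:

gcloud auth application-default login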

7. Create Vertex AI Components

Set up your Vertex AI components with built-in observability.
import vertexai
from vertexai.generative_models import FunctionDeclaration, GenerativeModel, Part, Tool

vertexai.init(
    project="project_name",  # replace with your Google Cloud project ID
)

# Describe a function by specifying its schema (JsonSchema format)
get_current_weather_func = FunctionDeclaration(
    name="get_current_weather",
    description="Get the current weather in a given location",
    parameters={
        "type": "object",
        "properties": {
            "location": {
                "type": "string",
                "description": "The city and state, e.g. San Francisco, CA",
            },
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["location"],
    },
)

# Tool is a collection of related functions
weather_tool = Tool(function_declarations=[get_current_weather_func])

# Use tools in chat
chat = GenerativeModel("gemini-1.5-flash", tools=[weather_tool]).start_chat()

8. Execute

Run your Vertex AI application.
if __name__ == "__main__":
    # Send a message to the model. The model will respond with a function call.
    for response in chat.send_message(
        "What is the weather like in Boston?", stream=True
    ):
        print(response)
    # Then send a function response to the model. The model will use it to answer.
    for response in chat.send_message(
        Part.from_function_response(
            name="get_current_weather",
            response={"content": {"weather": "super nice"}},
        ),
        stream=True,
    ):
        print(response)
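
The run above hardcodes the function response for brevity. In a real application you would read the function call out of the model's reply and dispatch it to your own implementation. A minimal sketch, assuming a non-streaming call, the function_calls accessor from Google's function-calling samples, and a hypothetical get_current_weather() helper on your side:

# Hypothetical stand-in for a real weather lookup.
def get_current_weather(location, unit="fahrenheit"):
    return {"weather": "super nice", "location": location, "unit": unit}

response = chat.send_message("What is the weather like in Boston?")
function_calls = response.candidates[0].function_calls  # empty if the model answered directly
if function_calls and function_calls[0].name == "get_current_weather":
    result = get_current_weather(location=function_calls[0].args["location"])
    answer = chat.send_message(
        Part.from_function_response(
            name="get_current_weather",
            response={"content": result},
        )
    )
    print(answer.text)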