1. Installation

First, install the traceAI OpenAI instrumentation package along with promptflow and promptflow-tools.

pip install traceAI-openai promptflow promptflow-tools
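
To verify the installation, a quick import check is enough (traceai_openai is the module name used later in this guide):

python -c "import promptflow, traceai_openai; print('imports ok')"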

2. Set Environment Variables

Set up your environment variables to authenticate with both FutureAGI and OpenAI services.

import os

os.environ["OPENAI_API_KEY"] = "your-openai-api-key"
os.environ["FI_API_KEY"] = "your-futureagi-api-key"
os.environ["FI_SECRET_KEY"] = "your-futureagi-secret-key"

3. Initialize Trace Provider

Set up the trace provider, which creates a new project in FutureAGI and establishes the telemetry data pipeline.

from fi_instrumentation import register
from fi_instrumentation.fi_types import ProjectType

trace_provider = register(
    project_type=ProjectType.OBSERVE,
    project_name="promptflow",
)
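
Assuming register returns a standard OpenTelemetry TracerProvider (as the tracer_provider argument in the next step suggests), you can also open manual spans around your own code; the span name here is just an example:

tracer = trace_provider.get_tracer(__name__)

# Anything executed inside this span is traced alongside the
# automatic OpenAI spans created by the instrumentor below.
with tracer.start_as_current_span("pre-processing"):
    ...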

4. Instrument your Project

Instrument your project with the OpenAI Instrumentor. This ensures that every OpenAI call made by your PromptFlow flows is traced and monitored.

from traceai_openai import OpenAIInstrumentor

OpenAIInstrumentor().instrument(tracer_provider=trace_provider)
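
If you later need to disable tracing (for example, in unit tests), instrumentors built on OpenTelemetry's BaseInstrumentor conventionally expose a matching call; assuming traceAI follows that convention:

OpenAIInstrumentor().uninstrument()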

5. Prepare the chat.prompty File

Create a chat.prompty file in the same directory as your script with the following content:

---
name: Basic Chat
model:
  api: chat
  configuration:
    type: openai
    model: gpt-3.5-turbo
  parameters:
    temperature: 0.2
    max_tokens: 1024
inputs: 
  question:
    type: string
  chat_history:
    type: list
sample:
  question: "What is Prompt flow?"
  chat_history: []
---

system:
You are a helpful assistant.

{% for item in chat_history %}
{{item.role}}:
{{item.content}}
{% endfor %}

user:
{{question}}

With this file in place, the ChatFlow class in the next step can load and run the template. The expected shape of chat_history is shown below.
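
Because the Jinja loop above reads item.role and item.content, multi-turn history is a list of dictionaries in that shape (the assistant reply here is an illustrative string):

chat_history = [
    {"role": "user", "content": "What is Prompt flow?"},
    {"role": "assistant", "content": "Prompt flow is a suite of development tools for LLM apps."},
]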


6. Create a Flow

Create a flow as you normally would; our instrumentor automatically traces each run and sends the telemetry data to our platform.

from pathlib import Path
from typing import Optional

from promptflow.core import OpenAIModelConfiguration, Prompty

# Resolve the prompty file relative to this script's directory.
BASE_DIR = Path(__file__).absolute().parent

class ChatFlow:
    def __init__(self, model_config: OpenAIModelConfiguration, max_total_token=4096):
        self.model_config = model_config
        self.max_total_token = max_total_token

    def __call__(
        self,
        question: str = "What's Azure Machine Learning?",
        chat_history: Optional[list] = None,
    ) -> str:
        """Flow entry function."""
        # Avoid the mutable-default-argument pitfall: fall back to an empty history.
        chat_history = chat_history or []

        # Load the prompty template, overriding its model configuration
        # with the one passed to this flow.
        prompty = Prompty.load(
            source=BASE_DIR / "chat.prompty",
            model={"configuration": self.model_config},
        )

        # Render the template with the inputs and call the model.
        return prompty(question=question, chat_history=chat_history)

7. Execute the Flow

Create an OpenAI connection through the PromptFlow client, bind it to a model configuration, and run the flow. The instrumentor traces the underlying OpenAI calls automatically.

from promptflow.client import PFClient
from promptflow.connections import OpenAIConnection

pf = PFClient()

connection = OpenAIConnection(
    name="open_ai_connection",
    base_url="https://api.openai.com/v1",
    api_key=os.environ["OPENAI_API_KEY"],
)

conn = pf.connections.create_or_update(connection)

config = OpenAIModelConfiguration(
    connection="open_ai_connection", model="gpt-3.5-turbo"
)

chat_flow = ChatFlow(config)
result = chat_flow(question="What is ChatGPT? Please explain in a concise statement")
print(result)
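
Because ChatFlow accepts chat_history, a follow-up turn can reuse the first answer; the follow-up question below is just an example:

chat_history = [
    {"role": "user", "content": "What is ChatGPT? Please explain in a concise statement"},
    {"role": "assistant", "content": result},
]

followup = chat_flow(
    question="How does ChatGPT differ from a traditional search engine?",
    chat_history=chat_history,
)
print(followup)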