Prompt Template

Logs chat data for evaluation using the Future Agi client.

Creating a Client

import os

from fi.client import Client

api_key = os.environ["FI_API_KEY"]
secret_key = os.environ["FI_SECRET_KEY"]
base_url = os.environ["FI_API_URL"]

fi_client = Client(api_key=api_key, secret_key=secret_key, uri=base_url)

Supported Model Types:

• GENERATIVE_LLM: For text data.

• GENERATIVE_IMAGE: For text and image data.

Supported Environments:

• TRAINING: For models in the training phase.

• VALIDATION: For models in the validation phase.

• PRODUCTION: For models deployed in a production environment.

• CORPUS: For models dealing with a large collection of data or corpus.

Sending Data for Evaluation

To log an event, provide the required parameters model_id, model_type, and environment, along with the optional model_version, prediction_timestamp, conversation, and tags.

from fi.client import ModelTypes, Environments
import time

fi_client.log(
    model_id="your-model-ID",
    model_type=ModelTypes.GENERATIVE_LLM,
    environment=Environments.PRODUCTION,
    model_version="v2",
    prediction_timestamp=int(time.time()),
    conversation={
        "chat_history": [
            {
                "role": "user",
                "content": INPUT_DATA,  # The input message content
                "variables": {  # Variable names mapped to their values
                    "KEY1": "VALUE1",
                    "KEY2": "VALUE2",
                    # Add more key-value pairs as needed
                },
                "prompt_template": PROMPT_TEMPLATE,  # The template used for the prompt
                "context": [  # Optional context
                    ["additional context 1", "description 1"],
                    ["additional context 2", "description 2"],
                ],
            },
            {
                "role": "assistant",
                "content": OUTPUT_DATA,  # The output message content
                "context": [  # Optional context
                    ["related context 1", "description 1"],
                    ["related context 2", "description 2"],
                ],
            },
        ]
    },
    tags={"category": "AI", "level": "advanced"},
).result()
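INPUT_DATA, OUTPUT_DATA, and PROMPT_TEMPLATE above are placeholders you supply yourself. A minimal sketch of assembling the conversation payload in plain Python (the sample template, topic, and answer text are illustrative assumptions, not values defined by the Future Agi client):

```python
# Sketch: assembling the conversation payload by hand before logging.
# All literal strings here are illustrative stand-ins.

PROMPT_TEMPLATE = "Summarize {TOPIC} for a {AUDIENCE} audience."
INPUT_DATA = PROMPT_TEMPLATE.format(TOPIC="vector databases", AUDIENCE="beginner")
OUTPUT_DATA = "A vector database stores embeddings for similarity search."

conversation = {
    "chat_history": [
        {
            "role": "user",
            "content": INPUT_DATA,
            "variables": {"TOPIC": "vector databases", "AUDIENCE": "beginner"},
            "prompt_template": PROMPT_TEMPLATE,
        },
        {
            "role": "assistant",
            "content": OUTPUT_DATA,
        },
    ]
}

# Light sanity checks before logging: every message needs a role and content.
for message in conversation["chat_history"]:
    assert message["role"] in ("user", "assistant")
    assert isinstance(message["content"], str) and message["content"]
```

The resulting dictionary can then be passed as the conversation argument of fi_client.log.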

Structure:

1. chat_history:

• A list of dictionaries, each representing a message in the conversation. Each dictionary can have the keys role, content, variables, prompt_template, and context.

2. role:

• Type: String.

• Description: Identifies the conversation participant ("user" or "assistant").

3. content:

• Type: String.

• Description: The text content of the message provided by the role.

4. variables (Optional):

• Type: Dictionary.

• Description: Key-value pairs where keys are variable names and values are their corresponding values, used to dynamically fill placeholders in prompt_template. This key is specific to "user" messages.

5. prompt_template (Optional):

• Type: String.

• Description: A template that defines the structure of the prompt and may include placeholders for variables. This key is specific to "user" messages.

6. context (Optional):

• Type: List of pairs of strings.

• Description: Additional context or supporting information for the message, where each element is a pair of strings in the format [["context_key", "context_value"], ...].
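The relationship between variables and prompt_template can be illustrated with plain string formatting. The {NAME} placeholder syntax below is an assumption for illustration; consult the platform documentation for the exact substitution convention it applies:

```python
# Sketch: how a variables dict fills prompt_template placeholders.
# The {NAME} placeholder style is assumed for this illustration.

prompt_template = "Translate the following text into {LANGUAGE}: {TEXT}"
variables = {"LANGUAGE": "French", "TEXT": "Hello, world"}

content = prompt_template.format(**variables)
print(content)  # Translate the following text into French: Hello, world
```

Logging the template and variables separately, rather than only the rendered content, lets the platform group and compare runs of the same template across different variable values.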

