This guide covers how to optimize prompts using the Future AGI Python SDK, from creating a dataset to iteratively refining prompts. You’ll learn how to generate responses, evaluate their effectiveness, and improve them using optimization techniques, ultimately selecting the best-performing prompt for deployment.


1. Installation and Setup

Before proceeding with optimization, ensure that the Future AGI SDK is installed and properly configured with your API credentials.

Installation

pip install futureagi

Set up API credentials

export FI_API_KEY="your_api_key"
export FI_SECRET_KEY="your_secret_key"
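
Alternatively, you can set the same variables from Python before initializing any clients. A minimal sketch; the variable names match the shell exports above:

import os

# Set credentials programmatically; equivalent to the shell exports above
os.environ["FI_API_KEY"] = "your_api_key"
os.environ["FI_SECRET_KEY"] = "your_secret_key"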

2. Creating a Dataset

Optimization requires a structured dataset that serves as input for generating and refining AI responses. If you don’t have a dataset yet, follow these steps to create one.

Initialize the Dataset Client

The DatasetClient manages dataset creation and operations. First, define the dataset properties:

from fi.datasets import DatasetClient, DatasetConfig
from fi.utils.types import ModelTypes

# Define dataset configuration
dataset_config = DatasetConfig(
    name="optimization_dataset",
    model_type=ModelTypes.GENERATIVE_LLM
)

# Initialize client
dataset_client = DatasetClient(
    fi_api_key="your_api_key",
    fi_secret_key="your_secret_key",
    fi_base_url="https://api.futureagi.com",
    dataset_config=dataset_config
)

Creating a Dataset from a File

If you have a dataset in a CSV, JSON, or Excel file, upload it:

dataset_client.create(source="data.csv")

This uploads the dataset, making it available for running prompts and optimizations.
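
If you don’t have a file handy, you can generate a small illustrative one with pandas. This is only a sketch: the value_proposition column is assumed here so that it matches the placeholder used by the prompt in the next step.

import pandas as pd

# Build a tiny illustrative dataset; the column name must match the
# {{value_proposition}} placeholder referenced by the prompt later on
df = pd.DataFrame({
    "value_proposition": [
        "Our platform cuts cloud costs by rightsizing workloads automatically.",
        "An AI assistant that drafts customer support replies in seconds.",
    ]
})
df.to_csv("data.csv", index=False)

# Then upload it as shown above:
# dataset_client.create(source="data.csv")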


3. Running a Prompt on the Dataset

Before optimizing a prompt, you need to define a baseline prompt. This serves as the starting point for evaluation.

dataset_client.add_run_prompt(
    name="summary_prompt",
    model="gpt-4o-mini",
    messages=[
        {
            "role": "user",
            "content": "Return a short summary of {{value_proposition}}"
        }
    ]
)

  • add_run_prompt() adds a new prompt to the dataset.
  • messages defines how the model is instructed, with {{value_proposition}} acting as a placeholder that is filled from the dataset column of the same name.

At this stage, every row in the dataset will be processed using this prompt, generating initial responses.
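
The messages list follows the standard chat-message format, so you can also include a system instruction to steer the model. Below is a sketch of a second, more constrained prompt, assuming system messages are accepted; the prompt name and wording are illustrative:

dataset_client.add_run_prompt(
    name="summary_prompt_concise",  # illustrative name for a prompt variant
    model="gpt-4o-mini",
    messages=[
        {
            "role": "system",
            "content": "You are a concise marketing copywriter. Reply in at most two sentences."
        },
        {
            "role": "user",
            "content": "Return a short summary of {{value_proposition}}"
        }
    ]
)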


4. Evaluating AI Responses

Once the prompt has been run, the next step is to measure how effective the responses are using evaluation metrics.

evaluation = dataset_client.add_evaluation(
    name="tone_analysis",
    eval_template="Tone",
    input_column_name="summary_prompt",
    save_as_template=True,
)
  • add_evaluation() attaches an evaluation metric to analyze responses.
  • "tone_analysis" is the evaluation’s name.
  • "Tone" is a preset evaluation template [hyperlink to all eval definition]
  • input_column_name="summary_prompt" means it evaluates responses generated by our prompt.
  • save_as_template=True saves this evaluation for reuse in other experiments.

Now, every AI-generated response is assessed based on this evaluation metric.
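
You can attach further metrics in the same way. The sketch below uses a hypothetical "Conciseness" preset for illustration; substitute any available preset evaluation template:

# Attach a second metric; "Conciseness" is a hypothetical template name,
# substitute any available preset evaluation template
dataset_client.add_evaluation(
    name="conciseness_check",
    eval_template="Conciseness",
    input_column_name="summary_prompt",
)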


5. Running Optimization

Optimization improves AI-generated responses by adjusting prompt structure based on evaluation feedback. This process systematically iterates through prompt variations to find the most effective version.

dataset_client.add_optimization(
    optimization_name="optimized_prompt_1",
    prompt_column_name="summary_prompt"
)
  • add_optimization() starts an optimization process.
  • "optimized_prompt_1" assigns a name to the optimized prompt.
  • "summary_prompt" specifies which prompt is being optimized.

6. Retrieving Optimized Results

Once the optimization is complete, you can retrieve the improved prompts and compare them with the original.

optimized_data = dataset_client.download(file_path="optimized_results.csv")

This will save the optimized prompt responses to a CSV file.
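
From there, you can inspect the results with pandas. The column selection below is an assumption for illustration; check df.columns for the actual names in your export.

import pandas as pd

# Load the exported results and list the available columns
df = pd.read_csv("optimized_results.csv")
print(df.columns.tolist())

# Assumed column names for illustration; adjust to match your export
# print(df[["summary_prompt", "optimized_prompt_1"]].head())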