This guide will walk you through the essential steps to optimize your first prompt using the agent-opt Python library. We’ll use the RandomSearchOptimizer to keep things simple and demonstrate the core workflow.

1. Installation and Setup

First, install the library and set up your environment variables to connect to Future AGI for evaluations. You can get your API keys from the Future AGI dashboard.
pip install agent-opt
import os

os.environ["FI_API_KEY"] = "YOUR_API_KEY"
os.environ["FI_SECRET_KEY"] = "YOUR_SECRET_KEY"

2. Prepare Your Dataset

Optimization is data-driven. You’ll need a dataset: a simple list of Python dictionaries, each pairing an input with a reference output. For this example, we’ll create a small dataset for a summarization task.
dataset = [
    {
        "article": "The James Webb Space Telescope has captured stunning new images of the Pillars of Creation, revealing intricate details of gas and dust clouds where new stars are forming.",
        "target_summary": "The JWST has taken new, detailed pictures of the Pillars of Creation."
    },
    {
        "article": "Researchers have discovered a new enzyme that can break down plastics at record speed, offering a potential solution to the global plastic pollution crisis.",
        "target_summary": "A new enzyme that rapidly breaks down plastics has been found."
    },
]
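Two entries are enough for a walkthrough, but real optimization runs benefit from more examples. Because the dataset is just a list of dictionaries, you can build it however you like; for instance, the sketch below loads it from a hypothetical summaries.jsonl file (one JSON object per line with "article" and "target_summary" keys) using only the standard library.
import json

# Sketch: build the same list-of-dicts structure from a JSONL file.
with open("summaries.jsonl", "r", encoding="utf-8") as f:
    dataset = [json.loads(line) for line in f if line.strip()]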

3. Configure and Run the Optimization

Now, let’s set up the components and run the optimization. We’ll configure an Evaluator to score our prompts, a DataMapper to connect our data, and the RandomSearchOptimizer to run the process.
from fi.opt.optimizers import RandomSearchOptimizer
from fi.opt.generators import LiteLLMGenerator
from fi.opt.datamappers import BasicDataMapper
from fi.opt.base.evaluator import Evaluator

# a. Define the generator with the initial prompt to be optimized
initial_generator = LiteLLMGenerator(
    model="gpt-4o-mini",
    prompt_template="Summarize this: {article}"
)

# b. Setup the evaluator to score prompt performance
evaluator = Evaluator(
    eval_template="summary_quality",  # A built-in template for summarization
    eval_model_name="turing_flash"    # The model to perform the evaluation
)

# c. Setup the data mapper to link dataset fields
data_mapper = BasicDataMapper(
    key_map={"input": "article", "output": "generated_output"}
)

# d. Initialize the Random Search optimizer
optimizer = RandomSearchOptimizer(
    generator=initial_generator,
    teacher_model="gpt-4o",  # A powerful model to generate prompt ideas
    num_variations=5         # Generate 5 different versions of our prompt
)

# e. Run the optimization!
result = optimizer.optimize(
    evaluator=evaluator,
    data_mapper=data_mapper,
    dataset=dataset
)

4. Analyze the Results

The result object contains the best prompt found and its final score.
# Print the best prompt and its score
print(f"--- Optimization Complete ---")
print(f"Final Score: {result.final_score:.4f}")
print(f"Best Prompt Found:\n{result.best_generator.get_prompt_template()}")

# You can also review the history of all tried variations
for i, iteration in enumerate(result.history):
    print(f"\n--- Variation {i+1} ---")
    print(f"Score: {iteration.average_score:.4f}")
    print(f"Prompt: {iteration.prompt}")

5. Next Steps

You’ve successfully optimized your first prompt! From here, you can explore more advanced strategies.
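A simple next step, for instance, is to keep a record of every variation the optimizer tried so you can compare runs later. The sketch below uses only the standard library and the same history attributes printed above; the output filename is just an example.
import json

# Sketch: save every tried prompt and its score for later review.
history_records = [
    {"prompt": iteration.prompt, "score": iteration.average_score}
    for iteration in result.history
]

with open("optimization_history.json", "w", encoding="utf-8") as f:
    json.dump(history_records, f, indent=2)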