Prompt Optimization

Automatically improve your prompts with six state-of-the-art algorithms: Random Search, Bayesian Search, ProTeGi, Meta-Prompt, PromptWizard, and GEPA.

📝 TL;DR
  • pip install agent-opt — separate package, depends on ai-evaluation
  • 6 algorithms: Random Search, Bayesian, ProTeGi, Meta-Prompt, PromptWizard, GEPA
  • Uses eval metrics as the scoring function to find the best prompt

agent-opt finds the best prompt for your task automatically: give it a starting prompt, a dataset, and a scoring metric, and it generates variations, scores them, and returns the highest-performing one. For the full platform guide, see the Optimization docs.

Note

Requires pip install agent-opt. This also installs ai-evaluation and futureagi as dependencies. Python 3.10+.

Quick Example

from fi.opt.optimizers import BayesianSearchOptimizer
from fi.opt.datamappers import BasicDataMapper
from fi.opt.base.evaluator import Evaluator
from fi.evals.metrics import BLEUScore

# 1. Your dataset
dataset = [
    {"context": "Paris is the capital of France", "question": "What is the capital of France?", "answer": "Paris"},
    {"context": "Tokyo is the capital of Japan", "question": "What is the capital of Japan?", "answer": "Tokyo"},
]

# 2. Evaluator — how to score each output
metric = BLEUScore()
evaluator = Evaluator(metric)

# 3. Data mapper — maps generator output and dataset fields to evaluator inputs
data_mapper = BasicDataMapper(
    key_map={"response": "generated_output", "expected_response": "answer"}
)

# 4. Optimizer
optimizer = BayesianSearchOptimizer(
    inference_model_name="gpt-4o-mini",
    teacher_model_name="gpt-4o",
    n_trials=10,
)

# 5. Run
initial_prompt = "Given the context: {context}, answer the question: {question}"
result = optimizer.optimize(
    evaluator=evaluator,
    data_mapper=data_mapper,
    dataset=dataset,
    initial_prompts=[initial_prompt],
)

print(f"Best Score: {result.final_score:.4f}")
print(f"Best Prompt: {result.best_generator.get_prompt_template()}")

Algorithms

| Algorithm | Best for | How it works |
| --- | --- | --- |
| RandomSearchOptimizer | Quick baselines | Random prompt variations |
| BayesianSearchOptimizer | Few-shot tuning | Optuna-powered parameter search |
| ProTeGi | Iterative refinement | Textual gradients — analyzes failures and rewrites |
| MetaPromptOptimizer | Teacher-driven | A stronger model analyzes and rewrites the prompt |
| PromptWizardOptimizer | Multi-stage refinement | Mutation → critique → refine pipeline |
| GEPAOptimizer | Complex search spaces | Genetic Pareto evolutionary optimization |

All six are importable from fi.opt.optimizers:
from fi.opt.optimizers import (
    RandomSearchOptimizer,
    BayesianSearchOptimizer,
    ProTeGi,
    MetaPromptOptimizer,
    PromptWizardOptimizer,
    GEPAOptimizer,
)
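Every optimizer exposes the same optimize() entry point used in the quick example, so switching algorithms mostly means changing the constructor. A minimal sketch, assuming RandomSearchOptimizer can run with default constructor arguments (constructor options vary per optimizer, so check each class's signature):

# Hypothetical swap: reuse the evaluator, data mapper, and dataset from the quick example.
optimizer = RandomSearchOptimizer()  # assumption: defaults suffice; args vary per optimizer
result = optimizer.optimize(
    evaluator=evaluator,
    data_mapper=data_mapper,
    dataset=dataset,
    initial_prompts=[initial_prompt],
)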

Core Components

Generator

Wraps an LLM and executes prompts. Use {field_name} placeholders to reference dataset fields.

from fi.opt.generators import LiteLLMGenerator

generator = LiteLLMGenerator(
    model="gpt-4o-mini",
    prompt_template="Given the context: {context}, answer: {question}",
)
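The {field_name} placeholders are filled from each dataset row by field name, in the spirit of Python's str.format. A conceptual sketch of that substitution (plain Python, not the library's internal code):

# Illustration only: how a dataset row fills the {field_name} placeholders.
template = "Given the context: {context}, answer: {question}"
row = {
    "context": "Paris is the capital of France",
    "question": "What is the capital of France?",
    "answer": "Paris",
}
prompt = template.format(**row)  # extra keys like "answer" are simply ignored
print(prompt)
# Given the context: Paris is the capital of France, answer: What is the capital of France?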

Evaluator

Scores each generated output. Pass any metric from fi.evals.metrics.

from fi.opt.base.evaluator import Evaluator
from fi.evals.metrics import BLEUScore, Contains

# Heuristic metric
evaluator = Evaluator(BLEUScore())

# Or a keyword-based metric
evaluator = Evaluator(Contains(config={"keyword": "Python", "case_sensitive": False}))

Data Mapper

Connects evaluator input fields to dataset/generator output fields. The key_map format is {evaluator_field: dataset_or_generator_field}.

from fi.opt.datamappers import BasicDataMapper

# Keys = what the evaluator expects
# Values = where to get it from (dataset field or "generated_output" for generator output)
mapper = BasicDataMapper(key_map={
    "response": "generated_output",       # evaluator's "response" ← generator output
    "expected_response": "answer",        # evaluator's "expected_response" ← dataset "answer" field
})
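Conceptually, each example's dataset fields and the generator's output are merged into one record, and key_map picks out and renames the fields the evaluator needs. A rough sketch of that lookup (illustrative only, not the library's implementation):

# Illustration only: building the evaluator's inputs from one example.
key_map = {"response": "generated_output", "expected_response": "answer"}
record = {
    "context": "Paris is the capital of France",
    "question": "What is the capital of France?",
    "answer": "Paris",
    "generated_output": "Paris",  # produced by the generator
}
evaluator_inputs = {eval_field: record[source] for eval_field, source in key_map.items()}
print(evaluator_inputs)  # {'response': 'Paris', 'expected_response': 'Paris'}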

Result

result = optimizer.optimize(...)

print(result.final_score)                          # best score
print(result.best_generator.get_prompt_template()) # winning prompt
print(result.history)                              # score history
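Since the winning prompt is just a string, a common follow-up is to persist it for reuse (the file name here is illustrative):

# Save the winning prompt template so it can be used outside the optimization run.
best_prompt = result.best_generator.get_prompt_template()
with open("best_prompt.txt", "w") as f:
    f.write(best_prompt)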