Choosing the right optimization algorithm is key to efficiently improving your prompts. Each optimizer in the agent-opt library has a unique strategy, and picking the right one for your specific task will lead to better results, faster. This cookbook provides a practical comparison and a clear decision guide to help you select the best optimizer for your use case.

Optimizer Comparison at a Glance

This table summarizes the core strategy and ideal use case for each optimizer.
| Optimizer | Core Strategy | When to Use It |
| --- | --- | --- |
| Random Search | Broad Exploration | For quick baselines and generating a wide range of initial ideas. |
| Bayesian Search | Intelligent Example Selection | When your primary goal is to find the best few-shot examples for your prompt. |
| ProTeGi | Error-Driven Debugging | For systematically fixing a good prompt that has specific, identifiable failures. |
| Meta-Prompt | Holistic Analysis & Rewrite | For complex reasoning tasks that require a deep, top-to-bottom refinement of the prompt’s logic. |
| PromptWizard | Creative Multi-Stage Evolution | For creative tasks or when you want to explore different “thinking styles” in your prompt. |
| GEPA | State-of-the-Art Evolutionary Search | For critical, production systems where achieving maximum performance is the top priority. |
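The snippets in this guide construct these optimizers directly. If you want to follow along, a minimal import sketch is shown below; the agent_opt.optimizers module path is an assumption and may differ in your installation, but the class names match those used throughout this guide.
# NOTE: The module path below is an assumption; check your installed
# version of agent-opt for the exact import location.
from agent_opt.optimizers import (
    RandomSearchOptimizer,
    BayesianSearchOptimizer,
    ProTeGi,
    MetaPromptOptimizer,
    PromptWizard,
    GEPAOptimizer,
)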

A Quick Decision Guide

Follow this decision tree to find the right optimizer for your needs.

1. Is your primary goal to optimize the selection of few-shot examples?

Yes: Use BayesianSearchOptimizer. It’s specifically designed to find the optimal number and combination of examples to include in your prompt.
# BayesianSearchOptimizer focuses on the few-shot block.
optimizer = BayesianSearchOptimizer(
    min_examples=2,
    max_examples=5,
    n_trials=15 # How many combinations to try
)

2. No, I'm optimizing the main instruction. Do you just need a quick baseline or some initial ideas?

Yes: Use RandomSearchOptimizer. It’s the fastest and simplest way to get a baseline and see if improvement is possible.
# RandomSearchOptimizer is great for a quick, broad search.
optimizer = RandomSearchOptimizer(
    generator=initial_generator,
    teacher_model="gpt-5",
    num_variations=10 # Generate 10 random alternatives
)

3. No, I need a more advanced, iterative refinement. Does your prompt have specific, known failure modes?

Yes: Use ProTeGi. It’s designed to function like a debugger, analyzing failures and applying targeted “textual gradient” fixes.
# ProTeGi is for systematic, error-driven fixing.
optimizer = ProTeGi(
    teacher_generator=teacher_generator,
    num_gradients=3, # Generate 3 critiques of the failures
    beam_size=2      # Keep the top 2 candidates each round
)

4. No, my prompt needs a more holistic rewrite. Is it for a complex reasoning task?

Yes: Use MetaPromptOptimizer. It excels at deep analysis, forming a hypothesis about your prompt’s core problem, and rewriting it from the ground up.
# MetaPromptOptimizer performs a deep analysis and full rewrite.
optimizer = MetaPromptOptimizer(
    teacher_generator=teacher_generator
)

5. Is this for a critical, production-grade application where you need the absolute best performance and have a larger budget?

Yes: Use GEPAOptimizer. It’s an adapter for a state-of-the-art evolutionary algorithm that provides the most powerful (but also most computationally intensive) optimization.
# GEPA is the most powerful option for achieving SOTA performance.
optimizer = GEPAOptimizer(
    reflection_model="gpt-5",
    generator_model="gpt-4o-mini",
    max_metric_calls=200 # Set a total evaluation budget
)
If you’re still unsure, ProTeGi is an excellent and powerful general-purpose choice for improving an existing prompt.

Combining Optimizers for Advanced Workflows

You don’t have to stick to just one optimizer. A powerful pattern is to use them sequentially in a “funnel” approach to find the best possible prompt.
Start with a broad, cheap exploration pass (Stage 1, for example with RandomSearchOptimizer), then take the best 2-3 prompts from that stage and feed them as initial_prompts into a more powerful refinement optimizer like ProTeGi or MetaPromptOptimizer (Stage 2, shown below). This focuses your expensive, deep analysis only on the most promising candidates.
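As a sketch of the exploration stage, you could reuse the RandomSearchOptimizer configuration from earlier and keep its strongest candidates. Note that the top_candidates accessor and the .prompt attribute below are hypothetical; adapt them to however your version of the library exposes ranked results.
# Stage 1: Broad, cheap exploration with RandomSearchOptimizer (sketch)
random_optimizer = RandomSearchOptimizer(
    generator=initial_generator,
    teacher_model="gpt-5",
    num_variations=10  # Generate 10 random alternatives
)
random_result = random_optimizer.optimize(...)  # dataset/metric arguments elided
# NOTE: `top_candidates` and `.prompt` are hypothetical accessors; check how
# your results object exposes the ranked candidates.
top_prompts_from_random = [c.prompt for c in random_result.top_candidates[:3]]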
# Stage 2: Deeply refine the most promising candidates
protegi_optimizer = ProTeGi(teacher_generator=teacher_generator)
protegi_result = protegi_optimizer.optimize(
    initial_prompts=top_prompts_from_random,
    num_rounds=3,  # Run three rounds of error-driven refinement
    ...
)
best_instruction_prompt = protegi_result.best_generator.get_prompt_template()
By understanding the unique strengths of each optimizer, you can build a sophisticated, multi-stage pipeline to systematically engineer high-performing prompts for any task.

Next Steps
