# Choosing the Right Optimizer
A practical guide to selecting the best optimization strategy (Bayesian Search, Meta-Prompt, GEPA, etc.) based on your specific task and goals.
Choosing the right optimization algorithm is key to efficiently improving your prompts. Each optimizer in the agent-opt library has a unique strategy, and picking the right one for your specific task will lead to better results, faster.
This cookbook provides a practical comparison and a clear decision guide to help you select the best optimizer for your use case.
## Optimizer Comparison at a Glance
This table summarizes the core strategy and ideal use case for each optimizer.
| Optimizer | Core Strategy | When to Use It |
|---|---|---|
| Random Search | Broad Exploration | For quick baselines and generating a wide range of initial ideas. |
| Bayesian Search | Intelligent Example Selection | When your primary goal is to find the best few-shot examples for your prompt. |
| ProTeGi | Error-Driven Debugging | For systematically fixing a good prompt that has specific, identifiable failures. |
| Meta-Prompt | Holistic Analysis & Rewrite | For complex reasoning tasks that require a deep, top-to-bottom refinement of the prompt’s logic. |
| PromptWizard | Creative Multi-Stage Evolution | For creative tasks or when you want to explore different “thinking styles” in your prompt. |
| GEPA | State-of-the-Art Evolutionary Search | For critical, production systems where achieving maximum performance is the top priority. |
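To make the "Intelligent Example Selection" row concrete, here is a short, self-contained sketch (plain Python standard library, not the agent-opt API; the example names are hypothetical) counting the search space that few-shot selection creates. Even a small pool produces more combinations than you would want to evaluate exhaustively, which is why a guided search helps.

```python
from itertools import combinations

# Hypothetical pool of candidate few-shot examples.
examples = ["ex_a", "ex_b", "ex_c", "ex_d", "ex_e"]

def count_candidates(pool, min_k, max_k):
    """Count the distinct few-shot blocks when including min_k..max_k examples."""
    return sum(1 for k in range(min_k, max_k + 1)
               for _ in combinations(pool, k))

# 5 examples with 2-5 slots already yield C(5,2)+C(5,3)+C(5,4)+C(5,5) = 26
# combinations; a Bayesian search samples this space instead of enumerating it.
print(count_candidates(examples, 2, 5))  # → 26
```

With a realistic pool of 20-30 examples the count explodes into the thousands, so a trial budget (like `n_trials` below) plus an informed sampler beats brute force.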
## A Quick Decision Guide
Follow this decision tree to find the right optimizer for your needs.
1. **Is your primary goal to optimize the selection of few-shot examples?**

   **Yes:** Use `BayesianSearchOptimizer`. It's specifically designed to find the optimal number and combination of examples to include in your prompt.

   ```python
   # BayesianSearchOptimizer focuses on the few-shot block.
   optimizer = BayesianSearchOptimizer(
       min_examples=2,
       max_examples=5,
       n_trials=15,  # How many combinations to try
   )
   ```

2. **No, I'm optimizing the main instruction. Do you just need a quick baseline or some initial ideas?**
   **Yes:** Use `RandomSearchOptimizer`. It's the fastest and simplest way to get a baseline and see whether improvement is possible.

   ```python
   # RandomSearchOptimizer is great for a quick, broad search.
   optimizer = RandomSearchOptimizer(
       generator=initial_generator,
       teacher_model="gpt-5",
       num_variations=10,  # Generate 10 random alternatives
   )
   ```

3. **No, I need a more advanced, iterative refinement. Does your prompt have specific, known failure modes?**
   **Yes:** Use `ProTeGi`. It's designed to function like a debugger, analyzing failures and applying targeted "textual gradient" fixes.

   ```python
   # ProTeGi is for systematic, error-driven fixing.
   optimizer = ProTeGi(
       teacher_generator=teacher_generator,
       num_gradients=3,  # Generate 3 critiques of the failures
       beam_size=2,      # Keep the top 2 candidates each round
   )
   ```

4. **No, my prompt needs a more holistic rewrite. Is it for a complex reasoning task?**
   **Yes:** Use `MetaPromptOptimizer`. It excels at deep analysis, forming a hypothesis about your prompt's core problem, and rewriting it from the ground up.

   ```python
   # MetaPromptOptimizer performs a deep analysis and full rewrite.
   optimizer = MetaPromptOptimizer(
       teacher_generator=teacher_generator,
   )
   ```

5. **Is this for a critical, production-grade application where you need the absolute best performance and have a larger budget?**
   **Yes:** Use `GEPAOptimizer`. It's an adapter for a state-of-the-art evolutionary algorithm that provides the most powerful (but also most computationally intensive) optimization.

   ```python
   # GEPA is the most powerful option for achieving SOTA performance.
   optimizer = GEPAOptimizer(
       reflection_model="gpt-5",
       generator_model="gpt-4o-mini",
       max_metric_calls=200,  # Set a total evaluation budget
   )
   ```

> **Note:** If you're still unsure, `ProTeGi` is an excellent and powerful general-purpose choice for improving an existing prompt.
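To build intuition for ProTeGi's critique-and-fix loop, here is a minimal self-contained sketch of beam search over "textual gradients". Every function below is a deterministic stand-in, not the agent-opt API: in the real optimizer, a teacher model writes the critiques and rewrites, and your evaluation metric does the scoring.

```python
# Conceptual ProTeGi-style loop: critique failures, apply targeted fixes,
# keep only the best candidates each round (beam search).

def critique(prompt, failures):
    # Stand-in for the teacher model's "textual gradients" (critiques).
    return [f"avoid:{f}" for f in failures]

def apply_fix(prompt, gradient):
    # Stand-in for rewriting the prompt to address one critique.
    return f"{prompt} | {gradient}"

def score(prompt):
    # Stand-in metric: rewards prompts that address more critiques.
    return prompt.count("avoid:")

def protegi_round(beam, failures, beam_size=2):
    candidates = list(beam)
    for prompt in beam:
        for gradient in critique(prompt, failures):
            candidates.append(apply_fix(prompt, gradient))
    candidates.sort(key=score, reverse=True)
    return candidates[:beam_size]  # prune to the top of the beam

beam = ["Classify the sentiment of the review."]
for _ in range(3):  # analogous to num_rounds
    beam = protegi_round(beam, failures=["sarcasm", "negation"])
print(beam[0])
```

The shape of the loop is the point here: each round expands every surviving prompt with several candidate fixes, then prunes back to `beam_size`, so cost grows with `beam_size * num_gradients` per round rather than exponentially.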
## Combining Optimizers for Advanced Workflows
You don’t have to stick to just one optimizer. A powerful pattern is to use them sequentially in a “funnel” approach to find the best possible prompt.
### Stage 1: Broad Exploration with Random Search
Start with RandomSearchOptimizer to quickly generate 10-15 diverse prompt ideas and get a rough sense of which direction is most promising. This is fast and cheap.
```python
# Stage 1: Get a diverse set of initial ideas
random_optimizer = RandomSearchOptimizer(generator=initial_generator, num_variations=10)
random_result = random_optimizer.optimize(...)

# Keep the top 2 prompts from the random search
top_prompts_from_random = [h.prompt for h in random_result.history[:2]]
```

### Stage 2: Deep Refinement with ProTeGi or Meta-Prompt
Take the best prompts from the exploration stage and feed them as `initial_prompts` into a more powerful refinement optimizer such as `ProTeGi` or `MetaPromptOptimizer`. This focuses your expensive, deep analysis on only the most promising candidates.
```python
# Stage 2: Deeply refine the most promising candidates
protegi_optimizer = ProTeGi(teacher_generator=teacher_generator)
protegi_result = protegi_optimizer.optimize(
    initial_prompts=top_prompts_from_random,
    num_rounds=3,
    ...
)
best_instruction_prompt = protegi_result.best_generator.get_prompt_template()
```

### Stage 3: Few-Shot Enhancement with Bayesian Search
If your task benefits from few-shot examples, take the best instruction prompt from the refinement stage and use BayesianSearchOptimizer to find the optimal set of examples to add to it.
```python
# Stage 3: Find the best examples to pair with your optimized instruction
bayesian_optimizer = BayesianSearchOptimizer(n_trials=20, max_examples=5)
final_result = bayesian_optimizer.optimize(
    initial_prompts=[best_instruction_prompt],
    ...
)

print(f"Final Optimized Prompt:\n{final_result.best_generator.get_prompt_template()}")
```

By understanding the unique strengths of each optimizer, you can build a sophisticated, multi-stage pipeline to systematically engineer high-performing prompts for any task.