The Prompt Optimizer library provides six optimization algorithms, each with its own strengths and approach to improving prompts. This guide explains what each optimizer does and when to use it.

Algorithm Comparison


Quick Selection Guide

| Use Case | Recommended Optimizer | Why |
|---|---|---|
| Few-shot learning tasks | Bayesian Search | Intelligently selects and formats examples |
| Complex reasoning tasks | Meta-Prompt | Deep analysis of failures and systematic refinement |
| Improving existing prompts | ProTeGi | Focused on identifying and fixing specific errors |
| Creative/open-ended tasks | PromptWizard | Explores diverse prompt variations |
| Production deployments | GEPA | Robust evolutionary search with efficient budgeting |
| Quick experimentation | Random Search | Fast baseline for comparison |

Performance Comparison

| Optimizer | Speed | Quality | Cost | Best Dataset Size |
|---|---|---|---|---|
| Bayesian Search | ⚡⚡ | ⭐⭐⭐⭐ | 💰💰 | 15-50 examples |
| Meta-Prompt | ⚡⚡ | ⭐⭐⭐⭐ | 💰💰💰 | 20-40 examples |
| ProTeGi | ⚡ | ⭐⭐⭐⭐ | 💰💰💰 | 20-50 examples |
| PromptWizard | ⚡ | ⭐⭐⭐⭐ | 💰💰💰 | 15-40 examples |
| GEPA | ⚡ | ⭐⭐⭐⭐⭐ | 💰💰💰💰 | 30-100 examples |
| Random Search | ⚡⚡⚡ | ⭐⭐ | 💰 | 10-30 examples |
Speed: ⚡ = Slow, ⚡⚡ = Medium, ⚡⚡⚡ = Fast
Quality: ⭐ = Basic, ⭐⭐⭐⭐⭐ = Excellent
Cost: 💰 = Low, 💰💰💰💰 = High (based on API calls)

Optimization Strategies

Search-Based Optimizers

These optimizers explore the prompt space systematically:
  • Bayesian Search — intelligently selects and formats few-shot examples, modeling which combinations perform well
  • Random Search — samples prompt variants at random; a fast, low-cost baseline for comparison
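As a concrete illustration of the baseline, here is a minimal random-search loop in Python (the function, parameter names, and toy scoring are illustrative sketches, not the library's API):

```python
import random

def random_search(variants, score, budget=10, seed=0):
    """Random-search baseline: sample prompt variants uniformly at random
    and keep the highest-scoring one seen within the evaluation budget."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(budget):
        prompt = rng.choice(variants)
        s = score(prompt)  # in practice, accuracy on a small eval set
        if s > best_score:
            best, best_score = prompt, s
    return best

# Toy example: prefer longer prompts as a stand-in for a real metric.
variants = [
    "Classify the sentiment.",
    "Classify the sentiment. Think step by step.",
    "Classify the sentiment of the text as positive or negative.",
]
best = random_search(variants, score=len)
```

Even this toy loop shows why random search makes a good baseline: it needs no failure analysis or teacher model, only a scoring function and a budget.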

Refinement-Based Optimizers

These optimizers iteratively improve prompts through analysis:
Meta-Prompt

How it works: Analyzes failed examples, formulates hypotheses about what went wrong, and rewrites the entire prompt.

Strengths:
  • Deep understanding of failures
  • Holistic prompt redesign
  • Excellent for complex tasks

Limitations:
  • Slower than search-based methods
  • Higher API costs
  • May overfit to the evaluation set
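The analyze-and-rewrite loop described above can be sketched as a simple hill climb (all names below are illustrative; in practice `rewrite` would be an LLM call and `evaluate` a run over the evaluation set):

```python
def refine(prompt, evaluate, rewrite, max_rounds=3):
    """Hill-climbing refinement: accept a rewritten prompt only if it
    improves the evaluation score; otherwise keep the current prompt."""
    score, failures = evaluate(prompt)
    for _ in range(max_rounds):
        if not failures:
            break
        candidate = rewrite(prompt, failures)  # LLM rewrite in practice
        cand_score, cand_failures = evaluate(candidate)
        if cand_score > score:
            prompt, score, failures = candidate, cand_score, cand_failures
    return prompt

# Toy stand-ins: "failures" are phenomena the prompt does not yet address.
phenomena = ["negation", "sarcasm"]
evaluate = lambda p: (sum(x in p for x in phenomena),
                      [x for x in phenomena if x not in p])
rewrite = lambda p, failures: p + f" Handle {failures[0]}."
best = refine("Classify the sentiment.", evaluate, rewrite)
```

The accept-only-if-better check is what makes the loop safe but also what makes it prone to overfitting the evaluation set, as noted above.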
ProTeGi

How it works: Generates critiques of failures and applies targeted improvements, using beam search to maintain the best candidates.

Strengths:
  • Systematic error fixing
  • Maintains multiple candidate prompts
  • Good balance of exploration and refinement

Limitations:
  • Can be computationally expensive
  • Requires clear failure signals
  • May need several refinement rounds
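The beam-search pattern underlying this style of refinement can be sketched as follows (the toy `expand` and `score` stand in for LLM-generated critiques and real evaluation; none of these names come from the library):

```python
def beam_search_prompts(seed, expand, score, beam_width=3, rounds=2):
    """Keep the `beam_width` best candidates each round, expanding each
    candidate into several targeted rewrites via `expand`."""
    beam = [seed]
    for _ in range(rounds):
        candidates = list(beam)
        for prompt in beam:
            candidates.extend(expand(prompt))
        # Rank all candidates and keep only the top `beam_width`.
        beam = sorted(candidates, key=score, reverse=True)[:beam_width]
    return beam[0]

# Toy stand-ins: "improve" a prompt by appending critique-driven suffixes,
# and score by length as a placeholder for a real evaluation metric.
suffixes = [" Think step by step.", " Answer concisely.", " Cite evidence."]
expand = lambda p: [p + s for s in suffixes]
best = beam_search_prompts("Classify the sentiment.", expand, score=len)
```

Keeping several candidates alive each round is what distinguishes this from the single-prompt hill climb: a locally worse rewrite can survive long enough to win later.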
PromptWizard

How it works: Combines mutation with different “thinking styles”, then critiques and refines the top performers.

Strengths:
  • Creative exploration
  • Structured refinement process
  • Diverse prompt variations

Limitations:
  • Multiple stages can be slow
  • Requires a good teacher model
  • May generate unconventional prompts

Evolutionary Optimizers

These use evolutionary strategies inspired by natural selection:
GEPA

How it works: Uses evolutionary algorithms with reflective learning and mutation strategies.

Strengths:
  • State-of-the-art performance
  • Efficient evaluation budgeting
  • Robust to local optima
  • Production-ready

Limitations:
  • Requires an external library (gepa)
  • More complex setup
  • Higher computational requirements
Note: GEPA is a powerful external library integrated into our framework.
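The general evolutionary loop this family relies on can be sketched generically in Python (this is not GEPA's actual API; the names and toy fitness function are illustrative):

```python
import random

def evolve(population, mutate, score, generations=5, survivors=2, seed=0):
    """Generic evolutionary loop: score the population, keep the fittest
    as parents, then refill the population with mutated copies of them."""
    rng = random.Random(seed)
    for _ in range(generations):
        ranked = sorted(population, key=score, reverse=True)
        parents = ranked[:survivors]
        children = [mutate(rng.choice(parents), rng)
                    for _ in range(len(population) - survivors)]
        population = parents + children
    return max(population, key=score)

# Toy example: evolve toward prompts mentioning more target keywords.
keywords = ["reason", "verify", "cite"]
mutate = lambda p, rng: p + " " + rng.choice(keywords)
score = lambda p: sum(k in p for k in keywords)
best = evolve(["Answer the question."] * 4, mutate, score)
```

Because survivors always carry over, the best score never regresses between generations, which is part of what makes evolutionary search robust to local optima.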

Choosing the Right Optimizer

Decision Tree

Do you need production-grade optimization?
├─ Yes → Use GEPA
└─ No
   Do you have few-shot examples in your dataset?
   ├─ Yes → Use Bayesian Search
   └─ No
      Is your task reasoning-heavy or complex?
      ├─ Yes → Use Meta-Prompt
      └─ No
         Do you have clear failure patterns to fix?
         ├─ Yes → Use ProTeGi
         └─ No
            Do you want creative exploration?
            ├─ Yes → Use PromptWizard
            └─ No → Use Random Search (baseline)
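The decision tree can also be encoded as a small helper function, which is handy for documenting the selection logic in code (the function name and flags are illustrative, not part of the library):

```python
def choose_optimizer(
    production_grade: bool = False,
    has_few_shot_examples: bool = False,
    reasoning_heavy: bool = False,
    clear_failure_patterns: bool = False,
    wants_creative_exploration: bool = False,
) -> str:
    """Return a recommended optimizer name, mirroring the decision tree:
    the first question answered "yes" determines the recommendation."""
    if production_grade:
        return "GEPA"
    if has_few_shot_examples:
        return "Bayesian Search"
    if reasoning_heavy:
        return "Meta-Prompt"
    if clear_failure_patterns:
        return "ProTeGi"
    if wants_creative_exploration:
        return "PromptWizard"
    return "Random Search"
```

Note that the order of the checks matters: like the tree, it prioritizes production needs first and falls through to the Random Search baseline.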

Combining Optimizers

You can run multiple optimizers sequentially for best results:
# Stage 1: Quick exploration with Random Search
random_result = random_optimizer.optimize(...)
initial_prompts = [h.prompt for h in random_result.history[:3]]

# Stage 2: Deep refinement with Meta-Prompt
meta_result = meta_optimizer.optimize(
    initial_prompts=initial_prompts,
    ...
)

# Stage 3: Few-shot enhancement with Bayesian Search
final_result = bayesian_optimizer.optimize(
    initial_prompts=[meta_result.best_generator.get_prompt_template()],
    ...
)

Next Steps