Algorithm Comparison
- Bayesian Search: Smart few-shot optimization
- Meta-Prompt: Deep reasoning refinement
- ProTeGi: Error-driven improvement
- PromptWizard: Creative exploration
- GEPA: Evolutionary optimization
- Random Search: Quick baseline testing
Quick Selection Guide
| Use Case | Recommended Optimizer | Why |
|---|---|---|
| Few-shot learning tasks | Bayesian Search | Intelligently selects and formats examples |
| Complex reasoning tasks | Meta-Prompt | Deep analysis of failures and systematic refinement |
| Improving existing prompts | ProTeGi | Focused on identifying and fixing specific errors |
| Creative/open-ended tasks | PromptWizard | Explores diverse prompt variations |
| Production deployments | GEPA | Robust evolutionary search with efficient budgeting |
| Quick experimentation | Random Search | Fast baseline for comparison |
Performance Comparison
| Optimizer | Speed | Quality | Cost | Best Dataset Size |
|---|---|---|---|---|
| Bayesian Search | ⚡⚡ | ⭐⭐⭐⭐ | 💰💰 | 15-50 examples |
| Meta-Prompt | ⚡⚡ | ⭐⭐⭐⭐ | 💰💰💰 | 20-40 examples |
| ProTeGi | ⚡ | ⭐⭐⭐⭐ | 💰💰💰 | 20-50 examples |
| PromptWizard | ⚡ | ⭐⭐⭐⭐ | 💰💰💰 | 15-40 examples |
| GEPA | ⚡ | ⭐⭐⭐⭐⭐ | 💰💰💰💰 | 30-100 examples |
| Random Search | ⚡⚡⚡ | ⭐⭐ | 💰 | 10-30 examples |
Speed: ⚡ = Slow, ⚡⚡ = Medium, ⚡⚡⚡ = Fast
Quality: ⭐ = Basic, ⭐⭐⭐⭐⭐ = Excellent
Cost: 💰 = Low, 💰💰💰💰 = High (based on API calls)
Optimization Strategies
Search-Based Optimizers
These optimizers explore the prompt space systematically.
Random Search
How it works: Generates random variations using a teacher model and tests each one; see the sketch after the lists below.
Strengths:
- Very fast to run
- Simple to understand and debug
- Good baseline for comparison
Limitations:
- No learning from previous attempts
- May miss optimal solutions
- Quality depends on teacher model creativity
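A minimal sketch of the loop, assuming hypothetical `teacher_rewrite` and `evaluate` stand-ins (an LLM call and a dataset scorer, neither from this library's API):

```python
import random

def teacher_rewrite(prompt: str) -> str:
    # Stand-in for a teacher-model call that proposes a random variation;
    # in practice this would be an LLM request.
    return f"{prompt} (variant {random.randint(0, 9999)})"

def evaluate(prompt: str, dataset: list) -> float:
    # Stand-in scorer: would run the prompt over the dataset and return accuracy.
    return random.random()

def random_search(seed_prompt: str, dataset: list, n_trials: int = 20) -> str:
    best, best_score = seed_prompt, evaluate(seed_prompt, dataset)
    for _ in range(n_trials):
        candidate = teacher_rewrite(seed_prompt)  # no learning between trials
        score = evaluate(candidate, dataset)
        if score > best_score:
            best, best_score = candidate, score
    return best
```

Each trial is independent, which is why the method is fast and a good baseline but cannot exploit what earlier trials revealed.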
Bayesian Search
How it works: Uses Bayesian optimization to intelligently select few-shot examples and prompt configurations; see the sketch after the lists below.
Strengths:
- Efficient exploration of the search space
- Excellent for few-shot learning
- Can infer optimal example templates
Limitations:
- Requires examples in your dataset
- May need many trials for complex spaces
- Best suited to structured tasks
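A minimal sketch using Optuna, whose default TPE sampler is a Bayesian method; the example pool, scorer, and parameter names are illustrative, not this library's API:

```python
import random
import optuna  # pip install optuna

# Illustrative pool of labelled examples to draw few-shot demonstrations from.
POOL = [("2+2", "4"), ("3*3", "9"), ("10-7", "3"), ("8/2", "4"), ("5+6", "11")]

def evaluate(prompt: str) -> float:
    # Stand-in scorer; in practice, run the prompt against a held-out set.
    return random.random()

def objective(trial: optuna.Trial) -> float:
    # The sampler learns which shot count, examples, and template score well.
    k = trial.suggest_int("n_shots", 1, 3)
    idxs = [trial.suggest_categorical(f"shot_{i}", list(range(len(POOL))))
            for i in range(k)]
    template = trial.suggest_categorical("template", ["Q: {q}\nA: {a}", "{q} => {a}"])
    shots = "\n".join(template.format(q=q, a=a) for q, a in (POOL[i] for i in idxs))
    return evaluate(shots + "\nNow answer the new question.")

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=30)
print(study.best_params)
```

Unlike random search, the sampler models which regions of the configuration space scored well and concentrates later trials there.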
Refinement-Based Optimizers
These optimizers iteratively improve prompts through analysis.
Meta-Prompt
How it works: Analyzes failed examples, formulates hypotheses, and rewrites the entire prompt; see the sketch after the lists below.
Strengths:
- Deep understanding of failures
- Holistic prompt redesign
- Excellent for complex tasks
Limitations:
- Slower than search-based methods
- Higher API costs
- May overfit to the evaluation set
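A minimal sketch of one refinement step, with hypothetical `llm` and `predict` stand-ins for the optimizer model and the task model:

```python
def llm(request: str) -> str:
    # Stand-in for the optimizer-model call; returns its text response.
    return "REWRITTEN PROMPT"

def predict(prompt: str, x: str) -> str:
    # Stand-in: run the task model with `prompt` on input `x`.
    return "model output"

def meta_prompt_step(current_prompt: str, dataset: list) -> str:
    # 1. Collect the cases the current prompt gets wrong.
    failures = []
    for x, expected in dataset:
        got = predict(current_prompt, x)
        if got != expected:
            failures.append((x, expected, got))
    # 2. Ask the optimizer model to hypothesize causes and rewrite the whole prompt.
    request = (
        f"Current prompt:\n{current_prompt}\n\n"
        "It failed on these cases:\n"
        + "\n".join(f"input={x!r} expected={e!r} got={g!r}" for x, e, g in failures)
        + "\n\nHypothesize why it fails, then rewrite the ENTIRE prompt to fix it."
    )
    return llm(request)
```

Because the whole prompt is rewritten each step, the method can restructure a prompt holistically, at the cost of extra model calls per iteration.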
ProTeGi
How it works: Generates critiques of failures and applies targeted improvements using beam search; see the sketch after the lists below.
Strengths:
- Systematic error fixing
- Maintains multiple candidate prompts
- Good balance of exploration and refinement
Limitations:
- Can be computationally expensive
- Requires clear failure signals
- May need several rounds
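A minimal sketch of one beam-search round, with hypothetical `llm` and `evaluate` stand-ins:

```python
import random

def llm(request: str) -> str:
    # Stand-in for a chat-model call.
    return f"edited prompt {random.randint(0, 999)}"

def evaluate(prompt: str, dataset: list) -> float:
    # Stand-in scorer over the dataset.
    return random.random()

def protegi_round(beam: list, dataset: list, beam_width: int = 4) -> list:
    candidates = list(beam)
    for prompt in beam:
        # 1. A textual "gradient": a critique of where this prompt fails.
        critique = llm(f"List the ways this prompt fails on the task:\n{prompt}")
        # 2. Apply the critique as targeted edits, producing new candidates.
        for _ in range(2):
            candidates.append(llm(
                f"Edit the prompt to fix these issues:\n{critique}\n\nPrompt:\n{prompt}"))
    # 3. Beam search: keep only the top-scoring prompts for the next round.
    candidates.sort(key=lambda p: evaluate(p, dataset), reverse=True)
    return candidates[:beam_width]
```

Keeping a beam rather than a single best prompt is what balances exploration against refinement across rounds.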
PromptWizard
How it works: Combines mutation with different “thinking styles”, then critiques and refines top performers; see the sketch after the lists below.
Strengths:
- Creative exploration
- Structured refinement process
- Diverse prompt variations
Limitations:
- Multiple stages can be slow
- Requires a good teacher model
- May generate unconventional prompts
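A minimal sketch of one mutate-select-refine iteration; the thinking styles, `llm`, and `evaluate` are illustrative stand-ins:

```python
import random

THINKING_STYLES = ["step by step", "from first principles", "by analogy",
                   "as a checklist", "by considering edge cases"]

def llm(request: str) -> str:
    # Stand-in for a teacher-model call.
    return f"rewritten prompt {random.randint(0, 999)}"

def evaluate(prompt: str, dataset: list) -> float:
    # Stand-in scorer over the dataset.
    return random.random()

def promptwizard_iteration(base_prompt: str, dataset: list, top_k: int = 2) -> str:
    # Stage 1: mutate the prompt through several thinking styles.
    mutants = [llm(f"Rewrite this prompt to reason {style}:\n{base_prompt}")
               for style in THINKING_STYLES]
    # Stage 2: keep only the top performers.
    survivors = sorted(mutants, key=lambda p: evaluate(p, dataset),
                       reverse=True)[:top_k]
    # Stage 3: critique each survivor, then refine it against the critique.
    refined = []
    for p in survivors:
        critique = llm(f"Critique this prompt's weaknesses:\n{p}")
        refined.append(llm(f"Refine the prompt to address:\n{critique}\n\nPrompt:\n{p}"))
    return max(refined, key=lambda p: evaluate(p, dataset))
```

The multi-stage pipeline is what makes the method both creative and slow: every stage adds teacher-model calls.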
Evolutionary Optimizers
These use evolutionary strategies inspired by natural selection.
GEPA
How it works: Uses evolutionary algorithms with reflective learning and mutation strategies; see the sketch after the lists below.
Strengths:
- State-of-the-art performance
- Efficient evaluation budgeting
- Robust to local optima
- Production-ready
Limitations:
- Requires the external `gepa` library
- More complex setup
- Higher computational requirements
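A conceptual sketch of the evolutionary loop only, not the `gepa` library's actual API; `evaluate` and `reflect_and_mutate` are hypothetical stand-ins, and the greedy parent choice simplifies GEPA's Pareto-based candidate selection:

```python
import random

def evaluate(prompt: str, examples: list) -> float:
    # Stand-in scorer over a set of examples.
    return random.random()

def reflect_and_mutate(prompt: str, examples: list) -> str:
    # Stand-in for the reflective step: an LLM inspects failures on the
    # minibatch and proposes a mutated child prompt.
    return f"{prompt} (child {random.randint(0, 999)})"

def gepa_style_loop(seed: str, dataset: list, eval_budget: int = 200,
                    pool_size: int = 6) -> str:
    pool = [(seed, evaluate(seed, dataset))]
    spent = len(dataset)
    while spent + len(dataset) <= eval_budget:
        # GEPA samples parents from a Pareto front; this greedy pick simplifies that.
        parent = max(pool, key=lambda t: t[1])[0]
        minibatch = random.sample(dataset, min(4, len(dataset)))
        child = reflect_and_mutate(parent, minibatch)   # reflective mutation
        spent += len(minibatch)
        score = evaluate(child, dataset)                # full validation pass
        spent += len(dataset)
        pool.append((child, score))
        pool = sorted(pool, key=lambda t: t[1], reverse=True)[:pool_size]
    return max(pool, key=lambda t: t[1])[0]
```

The explicit `eval_budget` counter reflects the efficient budgeting noted above: every dataset evaluation is charged against a fixed spend.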