Overview
Iteratively improve prompts using evaluation-driven feedback and optimization algorithms for higher-quality, more consistent AI responses.
What it is
Agent Optimization provides a structured, iterative approach to refining AI-generated outputs by systematically improving prompts. With the agent-opt Python library, you can programmatically enhance your prompts by adjusting their structure based on evaluation-driven feedback.
This library empowers you to move beyond manual trial-and-error, offering advanced algorithms to achieve higher-quality, more consistent, and more efficient LLM responses.
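The core idea, independent of any particular algorithm, is a loop: score a prompt against evaluation criteria, apply a candidate change, and keep the change only if the score improves. The sketch below illustrates that loop with a toy string-based scorer; the function names and the greedy strategy are hypothetical illustrations, not the agent-opt API.

```python
# Minimal, library-agnostic sketch of an eval-driven prompt refinement loop.
# All names (score_prompt, optimize) are hypothetical, not the agent-opt API.

def score_prompt(prompt: str) -> float:
    """Toy evaluator: rewards prompts that state a role, a format, and a constraint.
    In practice the score would come from running evals over a dataset."""
    checks = ["you are", "respond in json", "do not"]
    return sum(c in prompt.lower() for c in checks) / len(checks)

def optimize(base_prompt: str, edits: list[str]) -> tuple[str, float]:
    """Greedy refinement: append each candidate edit, keep it only if the score improves."""
    best_prompt, best_score = base_prompt, score_prompt(base_prompt)
    for edit in edits:
        candidate = best_prompt + "\n" + edit
        candidate_score = score_prompt(candidate)
        if candidate_score > best_score:
            best_prompt, best_score = candidate, candidate_score
    return best_prompt, best_score

base = "You are a support assistant."
candidate_edits = ["Respond in JSON.", "Do not speculate beyond the provided context."]
prompt, score = optimize(base, candidate_edits)
```

Real optimizers replace both the toy scorer (with eval-driven scores over a dataset) and the greedy edit selection (with strategies such as Bayesian or random search), but the improve-and-keep loop is the same.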
Purpose
- Systematic refinement — Improve a single prompt over many trials using eval scores instead of guesswork.
- Advanced algorithms — Choose from six-plus optimization strategies, including Bayesian Search, Meta-Prompt, ProTeGi, GEPA, Random Search, and PromptWizard, to explore the prompt space efficiently.
- Few-shot and structure — Let optimizers suggest or format few-shot examples and adjust prompt structure based on feedback.
- Reproducibility — Track optimization runs, trials, and scores so you can version and compare experiments.
- Cost efficiency — Control where optimization runs and use targeted search to reduce unnecessary API calls.
- Platform or code — Run optimization in the Future AGI UI or programmatically via the Python SDK.
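To make the reproducibility point concrete: tracking a run means recording every trial's prompt and score so experiments can be versioned and compared. The sketch below shows one way to structure that record; the `Trial` and `Run` names are hypothetical and do not reflect agent-opt's own tracking classes.

```python
# Hypothetical sketch of recording optimization trials for reproducibility.
# agent-opt's own run-tracking structures may differ.
from dataclasses import dataclass, field

@dataclass
class Trial:
    prompt: str
    score: float

@dataclass
class Run:
    algorithm: str
    trials: list[Trial] = field(default_factory=list)

    def record(self, prompt: str, score: float) -> None:
        """Append one trial so the full history can be compared later."""
        self.trials.append(Trial(prompt, score))

    def best(self) -> Trial:
        """Return the highest-scoring trial in this run."""
        return max(self.trials, key=lambda t: t.score)

run = Run(algorithm="random_search")
run.record("v1: summarize the ticket", 0.62)
run.record("v2: summarize the ticket in 2 sentences", 0.81)
```

Persisting a structure like this per run is what lets you diff prompts across experiments rather than relying on whichever version happens to be in memory.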
Getting started with optimization
Optimize Your First Prompt
Optimize your first prompt in minutes using the agent-opt Python library and a simple dataset.
Optimization fundamentals
Learn how optimization works and compare algorithms to choose the right strategy.
Using the Python SDK
Run optimization programmatically with the agent-opt library and advanced algorithms.
Using the platform
Create and run optimizations in the Future AGI UI with datasets and evals.