agent-opt
This guide shows you how to use the agent-opt library to automate the improvement of your workflows. You’ll learn how to set up the necessary components, choose the right optimization strategy, run the process, and analyze the results.
1. Installation
First, install the agent-opt library using pip:
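The command below assumes the package is published on PyPI under the same name as the library:

```bash
pip install agent-opt
```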
2. Core Concepts
The library is built around a few key components that work together:

Optimizer
The engine that drives the improvement process. You choose an optimizer based on your specific task (e.g., BayesianSearchOptimizer for few-shot tasks or GEPAOptimizer for complex reasoning).

Evaluator
The component responsible for scoring the quality of prompt outputs. It uses a specified model and an evaluation template to judge how well a prompt is performing.

DataMapper
A utility that maps the fields from your dataset to the keys expected by the optimizer and evaluator, ensuring the data flows correctly through the system.

Dataset
A simple list of dictionaries that serves as the ground truth for your optimization. Each item in the list represents a data point for evaluation.
3. Step-by-Step Guide to Optimization
Let’s walk through a complete example of optimizing a summarization workflow.

Step 1: Prepare Your Dataset
Your dataset is a standard Python list of dictionaries. Each dictionary should contain the necessary fields for your task. For a summarization task, you might have an article and a target_summary.
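For example, a small summarization dataset might look like this (the article texts and summaries below are placeholders; the field names simply match the ones used in this guide):

```python
# Each dictionary is one evaluation example: the input article and a reference summary.
dataset = [
    {
        "article": "Full text of the first news article goes here ...",
        "target_summary": "A one-sentence reference summary of the first article.",
    },
    {
        "article": "Full text of the second news article goes here ...",
        "target_summary": "A one-sentence reference summary of the second article.",
    },
]
```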
Step 2: Configure the Evaluator
The Evaluator scores the outputs generated by your prompts. You need to provide it with an evaluation template and the model to use for scoring.
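A minimal sketch is shown below; the import path, the parameter names (model, template), and the template placeholders are assumptions, so check the API reference for the exact constructor signature:

```python
from agent_opt import Evaluator  # import path is an assumption

# Template the judge model uses to score each generated output.
# The placeholder names ({article}, {output}, {target_summary}) are assumptions
# about which keys the evaluator injects when rendering the template.
EVAL_TEMPLATE = """
You are grading a summary of an article.

Article:
{article}

Generated summary:
{output}

Reference summary:
{target_summary}

Score the generated summary from 0 to 1 for faithfulness and coverage.
"""

evaluator = Evaluator(
    model="gpt-4o-mini",     # model used as the judge (parameter name assumed)
    template=EVAL_TEMPLATE,  # evaluation template (parameter name assumed)
)
```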
Step 3: Configure the DataMapper
The DataMapper tells the optimizer how to find the input and output values within your dataset.
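For the summarization dataset above, the mapping might look like this (the keyword argument names are assumptions about the DataMapper constructor):

```python
from agent_opt import DataMapper  # import path is an assumption

# Point the optimizer and evaluator at the right dataset fields.
data_mapper = DataMapper(
    input_field="article",          # field used as the prompt input (name assumed)
    output_field="target_summary",  # field used as the reference output (name assumed)
)
```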
Step 4: Choose and Initialize an Optimizer
Select an optimizer that fits your use case. For general-purpose refinement, MetaPromptOptimizer is a great choice.
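A minimal sketch of initializing it, assuming the constructor takes the model that proposes improved prompts (the parameter name is an assumption):

```python
from agent_opt import MetaPromptOptimizer  # import path is an assumption

optimizer = MetaPromptOptimizer(
    model="gpt-4o",  # model that rewrites and refines the prompt (parameter name assumed)
)
```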
Not sure which optimizer to use? Check out our Optimizers Overview for a detailed comparison.
Step 5: Run the Optimization
Now, pass all the components to the optimize method.
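Reusing the dataset, evaluator, data mapper, and optimizer configured above, the call might look like this (the argument names and the starting prompt are illustrative assumptions):

```python
# The starting prompt that the optimizer will iteratively improve.
initial_prompt = "Summarize the following article in two sentences:\n\n{article}"

result = optimizer.optimize(
    initial_prompt=initial_prompt,
    dataset=dataset,
    evaluator=evaluator,
    data_mapper=data_mapper,
)
```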
Step 6: Analyze the Results
The result object contains everything you need to understand the outcome.
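For example, you might pull out the best prompt and its score like this (the attribute names best_prompt and best_score are assumptions about the result object):

```python
print("Best score:", result.best_score)
print("Best prompt:\n", result.best_prompt)
```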
4. Examples for Different Optimizers
Different tasks benefit from different optimization strategies.

Bayesian Search for Few-Shot Optimization
If your task benefits from few-shot examples (e.g., classification, structured data extraction), BayesianSearchOptimizer is the ideal choice. It intelligently finds the best number and combination of examples.
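A sketch for a sentiment-classification task is shown below; the constructor arguments, the n_trials parameter, and the classification_dataset variable are all assumptions used for illustration:

```python
from agent_opt import BayesianSearchOptimizer, DataMapper  # import paths are assumptions

few_shot_optimizer = BayesianSearchOptimizer(
    model="gpt-4o-mini",  # model used to run candidate prompts (parameter name assumed)
    n_trials=25,          # number of Bayesian search trials (parameter name assumed)
)

result = few_shot_optimizer.optimize(
    initial_prompt="Classify the sentiment of this review as positive or negative:\n\n{review}",
    dataset=classification_dataset,  # hypothetical list of {"review": ..., "label": ...} items
    evaluator=evaluator,
    data_mapper=DataMapper(input_field="review", output_field="label"),
)
```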
ProTeGi for Systematic Error Correction
If you have a prompt that fails in specific, identifiable ways, ProTeGi can systematically debug it. It generates critiques (“textual gradients”) of the failures and applies targeted fixes.
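A sketch, assuming ProTeGi is exposed as an optimizer class with the same optimize interface (the class name, import path, and constructor parameters here are assumptions):

```python
from agent_opt import ProTeGiOptimizer  # exact class name and import path are assumptions

protegi = ProTeGiOptimizer(
    model="gpt-4o",  # model that writes critiques and revised prompts (parameter name assumed)
    num_rounds=3,    # number of critique-and-fix rounds (parameter name assumed)
)

# Reuses the summarization components from the step-by-step guide above.
result = protegi.optimize(
    initial_prompt=initial_prompt,
    dataset=dataset,
    evaluator=evaluator,
    data_mapper=data_mapper,
)
```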