Dataset Optimization: Improve Prompts Directly in Your Dataset
Use the dashboard Optimization tab to run automated prompt improvement on any Run Prompt column, with no SDK code required: configure an optimizer, review trial results with before/after comparisons, and promote the winning prompt.
| Time | Difficulty | Package |
|---|---|---|
| 15 min | Beginner | Dashboard only |
By the end of this guide you will have created a dataset with a Run Prompt column, launched an optimization run from the Optimization tab, reviewed trial results with before/after prompt comparisons, and promoted the winning prompt.
Prerequisites
- A FutureAGI account → app.futureagi.com
- A dataset with at least one Run Prompt column (see Step 1 if you don’t have one yet)
Install
No packages to install. This guide uses the FutureAGI dashboard only.
Tutorial
Create a dataset with a Run Prompt column
If you already have a dataset with a Run Prompt column, skip to Step 2.
Go to app.futureagi.com → Dataset (left sidebar) → Add Dataset → create a dataset with input columns (e.g., question, context).
Add a Run Prompt dynamic column:
- Click Add Column → select Run Prompt
- Write a prompt template referencing your input columns; for example: `Answer this question using the context: {{question}} Context: {{context}}`
- Select a model (e.g., gpt-4o-mini)
- Run the prompt to generate outputs for all rows
The Run Prompt column stores the prompt template and generated outputs. This is what the optimizer will improve.
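The substitution the Run Prompt column performs per row can be sketched like this. This is a minimal illustration of the `{{column}}` templating idea, not FutureAGI's actual implementation; the function name and regex are assumptions:

```python
import re

def render_prompt(template: str, row: dict) -> str:
    """Replace each {{column}} placeholder with that column's value for the row.

    Illustrative only -- the dashboard performs this substitution for you.
    """
    return re.sub(r"\{\{(\w+)\}\}", lambda m: str(row[m.group(1)]), template)

template = "Answer this question using the context: {{question}} Context: {{context}}"
row = {"question": "What is the capital of France?",
       "context": "France's capital is Paris."}
print(render_prompt(template, row))
```

The rendered string is what gets sent to the selected model for each dataset row, which is why the optimizer can improve the template without touching your data.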
Tip
See Dynamic Dataset Columns for the full guide on creating Run Prompt columns and other dynamic column types.
Open the Optimization tab
Navigate to your dataset → click the Optimization tab (fourth tab, after Data, Annotations, and Experiments; before Summary).
This tab shows all optimization runs for this dataset. If no runs exist yet, you see an empty state with a Run Optimization button. Once runs exist, the list view shows an Optimize Prompts button in the header.
Configure and launch an optimization run
Click Run Optimization (empty state) or Optimize Prompts (list view header) to open the configuration drawer.
| Field | Value |
|---|---|
| Name | Auto-generated (e.g., Prompt-GEPA-Mar04-1430) — edit if needed |
| Choose Column | Select a Run Prompt column from the dropdown |
| Choose Optimizer | Select an optimization algorithm (see table below) |
| Language Model | The LLM used during optimization (e.g., gpt-4o) |
| Optimizer Config | Parameters specific to the selected optimizer (auto-populated with defaults) |
| Evaluations | Select one or more evaluation templates to score candidates |
Available optimizers
| Optimizer | Config parameters | Best for |
|---|---|---|
| Random Search | num_variations | Quick baseline — generates random prompt variants |
| Bayesian Search | min_examples, max_examples, n_trials | Few-shot example selection and ordering |
| ProTeGi | beam_size, num_gradients, errors_per_gradient, prompts_per_gradient, num_rounds | Targeted prompt edits based on error analysis |
| Meta-Prompt | task_description, num_rounds | General-purpose prompt rewriting |
| PromptWizard | mutate_rounds, refine_iterations, beam_size | Multi-stage mutation, scoring, and critique-refinement |
| GEPA | max_metric_calls | Evolutionary exploration of diverse prompt styles |
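Conceptually, the Optimizer Config field is a small key-value map per optimizer. A sketch of what those maps look like, using the parameter names from the table above; the values shown are hypothetical placeholders, not the dashboard's actual defaults (check the auto-populated drawer for those):

```python
# Parameter names come from the optimizer table; every value below is an
# illustrative placeholder, not a real default.
optimizer_configs = {
    "Random Search": {"num_variations": 5},
    "Bayesian Search": {"min_examples": 2, "max_examples": 8, "n_trials": 20},
    "ProTeGi": {
        "beam_size": 4,
        "num_gradients": 4,
        "errors_per_gradient": 4,
        "prompts_per_gradient": 1,
        "num_rounds": 3,
    },
    "Meta-Prompt": {"task_description": "<describe your task>", "num_rounds": 3},
    "PromptWizard": {"mutate_rounds": 3, "refine_iterations": 2, "beam_size": 4},
    "GEPA": {"max_metric_calls": 100},
}
```

Note how the knobs trade cost for coverage: parameters like `n_trials`, `num_rounds`, and `max_metric_calls` control how many candidate prompts are generated and evaluated, so larger values mean more LLM calls during the run.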
Click Start Optimization to launch the run.
Tip
Not sure which optimizer to pick? Start with Meta-Prompt for general improvement or GEPA for diverse exploration. See Compare Optimization Strategies for a hands-on SDK comparison.
Monitor the optimization run
After launching, the Optimization tab shows the run with its current status:
| Status | Meaning |
|---|---|
| Pending | Queued, waiting to start |
| Running | Actively optimizing — auto-refreshes every 5 seconds |
| Completed | All trials finished |
| Failed | An error occurred during optimization |
| Cancelled | You stopped the run manually |
Click the run to see the detail view with:
- Steps: progress through the optimization stages
- Results graph: score progression across trials
- Trials grid: each trial’s score and prompt variant
Review trial results and compare prompts
Click any trial in the grid to open the trial detail view. The detail view has two tabs:
Prompt tab: a side-by-side comparison:
- AGENT PROMPT: the baseline prompt from your Run Prompt column
- OPTIMIZED AGENT PROMPT: the variant generated by the optimizer for this trial
- Toggle Show Diff to highlight changes between the two prompts
Trial Items tab: shows the individual iterations the optimizer ran to produce this trial’s prompt, with input/output text and evaluation scores per row.
Review multiple trials to see how different optimization paths produced different prompt structures. The best-scoring trial’s prompt is your candidate for promotion.
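Picking the promotion candidate amounts to taking the highest-scoring trial from the grid. As a sketch, with hypothetical trial records standing in for the dashboard's data (each trial exposes a score and its prompt variant):

```python
# Hypothetical trial records mirroring the trials grid: one score and one
# prompt variant per trial. Values are made up for illustration.
trials = [
    {"score": 0.71, "prompt": "Answer using the context: {{question}} {{context}}"},
    {"score": 0.84, "prompt": "You are a careful assistant. Answer only from the context."},
    {"score": 0.78, "prompt": "Answer the question, citing the context: {{context}}"},
]

# The best-scoring trial's prompt is the candidate for promotion.
best = max(trials, key=lambda t: t["score"])
print(best["score"])  # prints 0.84
```

In practice you do this visually in the trials grid, then open that trial's Prompt tab to copy the optimized variant.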
Use the optimized prompt
Once you’ve identified the best trial:
- Copy the optimized prompt from the trial detail view
- Update your Run Prompt column’s template with the improved version, or
- Save it to the Prompt Workbench for version control and production serving
To re-run optimization with different settings (e.g., a different optimizer or metric), click Optimize Prompts again from the Optimization tab. Previous runs are preserved for comparison.
Tip
Run the same optimizer with different evaluation metrics to see which metric drives the most useful prompt improvements. See Compare Optimization Strategies for a detailed strategy comparison.
What you built
You can now optimize prompts directly from the dashboard, compare trial results side by side, and promote the best-scoring variant.
- Created a dataset with a Run Prompt column as the optimization target
- Launched an optimization run from the Optimization tab with a selected optimizer, model, and evaluation metric
- Monitored run progress through pending, running, and completed states
- Reviewed trial results with side-by-side Agent Prompt vs. Optimized Agent Prompt comparisons and diff highlighting
- Identified the best-scoring prompt variant for production use