Dataset Optimization: Improve Prompts Directly in Your Dataset

Use the dashboard Optimization tab to run automated prompt improvement on any Run Prompt column: no SDK code required.

📝 TL;DR

Optimize prompts directly in your dataset using the dashboard Optimization tab — configure an optimizer, review trial results with before/after comparisons, and promote the winning prompt.

| Time | Difficulty | Package |
| --- | --- | --- |
| 15 min | Beginner | Dashboard only |

By the end of this guide you will have created a dataset with a Run Prompt column, launched an optimization run from the Optimization tab, reviewed trial results with before/after prompt comparisons, and promoted the winning prompt.

Prerequisites
  • FutureAGI account → app.futureagi.com
  • A dataset with at least one Run Prompt column (see Step 1 if you don’t have one yet)

Install

No packages to install. This guide uses the FutureAGI dashboard only.

Tutorial

Step 1: Create a dataset with a Run Prompt column

If you already have a dataset with a Run Prompt column, skip to Step 2.

Go to app.futureagi.com → Dataset (left sidebar) → Add Dataset → create a dataset with input columns (e.g., question, context).

Add a Run Prompt dynamic column:

  1. Click Add Column → select Run Prompt
  2. Write a prompt template referencing your input columns; for example: Answer this question using the context: {{question}} Context: {{context}}
  3. Select a model (e.g., gpt-4o-mini)
  4. Run the prompt to generate outputs for all rows

The Run Prompt column stores the prompt template and generated outputs. This is what the optimizer will improve.
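Conceptually, running the prompt substitutes each `{{placeholder}}` in the template with that row's cell value before calling the model. A minimal sketch of that substitution step, with an illustrative `render` helper and sample row (not FutureAGI API):

```python
import re

def render(template: str, row: dict) -> str:
    """Replace each {{name}} placeholder with the matching row value."""
    return re.sub(r"\{\{(\w+)\}\}", lambda m: str(row[m.group(1)]), template)

template = "Answer this question using the context: {{question}} Context: {{context}}"
row = {
    "question": "What is the capital of France?",
    "context": "France's capital is Paris.",
}

print(render(template, row))
# Answer this question using the context: What is the capital of France? Context: France's capital is Paris.
```

The dashboard performs this per row, then stores each model response in the column.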

Tip

See Dynamic Dataset Columns for the full guide on creating Run Prompt columns and other dynamic column types.

Step 2: Open the Optimization tab

Navigate to your dataset → click the Optimization tab (fourth tab, after Data, Annotations, and Experiments; before Summary).

This tab shows all optimization runs for this dataset. If no runs exist yet, you see an empty state with a Run Optimization button. Once runs exist, the list view shows an Optimize Prompts button in the header.

Step 3: Configure and launch an optimization run

Click Run Optimization (empty state) or Optimize Prompts (list view header) to open the configuration drawer.

| Field | Value |
| --- | --- |
| Name | Auto-generated (e.g., Prompt-GEPA-Mar04-1430) — edit if needed |
| Choose Column | Select a Run Prompt column from the dropdown |
| Choose Optimizer | Select an optimization algorithm (see table below) |
| Language Model | The LLM used during optimization (e.g., gpt-4o) |
| Optimizer Config | Parameters specific to the selected optimizer (auto-populated with defaults) |
| Evaluations | Select one or more evaluation templates to score candidates |

Available optimizers

| Optimizer | Config parameters | Best for |
| --- | --- | --- |
| Random Search | num_variations | Quick baseline — generates random prompt variants |
| Bayesian Search | min_examples, max_examples, n_trials | Few-shot example selection and ordering |
| ProTeGi | beam_size, num_gradients, errors_per_gradient, prompts_per_gradient, num_rounds | Targeted prompt edits based on error analysis |
| Meta-Prompt | task_description, num_rounds | General-purpose prompt rewriting |
| PromptWizard | mutate_rounds, refine_iterations, beam_size | Multi-stage mutation, scoring, and critique-refinement |
| GEPA | max_metric_calls | Evolutionary exploration of diverse prompt styles |
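The optimizers above differ mainly in how they generate and select candidate prompts. As a rough illustration of the simplest strategy, Random Search scores the baseline plus `num_variations` rewrites and keeps the winner; the `generate_variant` and `evaluate` callables below are toy stand-ins for the LLM-backed steps, not FutureAGI API:

```python
import random

def random_search(baseline, generate_variant, evaluate, num_variations=5):
    """Score the baseline plus num_variations rewrites; return (score, prompt)."""
    candidates = [baseline] + [generate_variant(baseline) for _ in range(num_variations)]
    return max(((evaluate(p), p) for p in candidates), key=lambda sp: sp[0])

# Toy stand-ins: a real run asks an LLM for rewrites and scores candidates
# against the evaluation templates you selected in the drawer.
prefixes = ["Be concise: ", "Think step by step: ", "Cite the context: "]
best_score, best_prompt = random_search(
    baseline="Answer this question using the context: {{question}}",
    generate_variant=lambda p: random.choice(prefixes) + p,
    evaluate=lambda p: len(set(p.split())),  # dummy metric: unique-token count
)
```

The other optimizers replace the random rewrite step with guided search (error analysis, Bayesian selection, evolutionary mutation), but the generate/score/select loop is the common shape.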

Click Start Optimization to launch the run.

Tip

Not sure which optimizer to pick? Start with Meta-Prompt for general improvement or GEPA for diverse exploration. See Compare Optimization Strategies for a hands-on SDK comparison.

Step 4: Monitor the optimization run

After launching, the Optimization tab shows the run with its current status:

| Status | Meaning |
| --- | --- |
| Pending | Queued, waiting to start |
| Running | Actively optimizing — auto-refreshes every 5 seconds |
| Completed | All trials finished |
| Failed | An error occurred during optimization |
| Cancelled | You stopped the run manually |
Click the run to see the detail view with:

  • Steps: progress through the optimization stages
  • Results graph: score progression across trials
  • Trials grid: each trial’s score and prompt variant

Step 5: Review trial results and compare prompts

Click any trial in the grid to open the trial detail view. The detail view has two tabs:

Prompt tab shows a side-by-side comparison:

  • AGENT PROMPT: the baseline prompt from your Run Prompt column
  • OPTIMIZED AGENT PROMPT: the variant generated by the optimizer for this trial
  • Toggle Show Diff to highlight changes between the two prompts

Trial Items tab: shows the individual iterations the optimizer ran to produce this trial’s prompt, with input/output text and evaluation scores per row.

Review multiple trials to see how different optimization paths produced different prompt structures. The best-scoring trial’s prompt is your candidate for promotion.
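If you copy both prompts out of the trial view, you can reproduce the Show Diff comparison locally with Python's difflib; the prompts below are illustrative, and this is not how the dashboard computes its diff:

```python
import difflib

baseline = "Answer this question using the context: {{question}} Context: {{context}}"
optimized = (
    "You are a careful assistant. Answer the question using only the provided "
    "context, and say you don't know if the context is insufficient. "
    "Question: {{question}} Context: {{context}}"
)

# Unified diff: lines prefixed "-" come from the baseline, "+" from the variant.
diff = difflib.unified_diff(
    baseline.splitlines(), optimized.splitlines(),
    fromfile="agent_prompt", tofile="optimized_agent_prompt", lineterm="",
)
print("\n".join(diff))
```

This is handy when comparing trials from two different runs, which the dashboard shows separately.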

Step 6: Use the optimized prompt

Once you’ve identified the best trial:

  1. Copy the optimized prompt from the trial detail view
  2. Update your Run Prompt column’s template with the improved version, or
  3. Save it to the Prompt Workbench for version control and production serving

To re-run optimization with different settings (e.g., a different optimizer or metric), click Optimize Prompts again from the Optimization tab. Previous runs are preserved for comparison.

Tip

Run the same optimizer with different evaluation metrics to see which metric drives the most useful prompt improvements. See Compare Optimization Strategies for a detailed strategy comparison.

What you built

You can now optimize prompts directly from the dashboard, compare trial results side by side, and promote the best-scoring variant.

  • Created a dataset with a Run Prompt column as the optimization target
  • Launched an optimization run from the Optimization tab with a selected optimizer, model, and evaluation metric
  • Monitored run progress through pending, running, and completed states
  • Reviewed trial results with side-by-side Agent Prompt vs. Optimized Agent Prompt comparisons and diff highlighting
  • Identified the best-scoring prompt variant for production use