Random Search Optimizer

Understand the Random Search optimizer, a simple and effective gradient-free method for establishing a baseline in prompt optimization by exploring random variations.

Random Search is a gradient-free method that generates a set of random variations of an initial prompt using a powerful “teacher” LLM. It then evaluates each variation against a dataset and selects the best-performing one. It’s a fast, straightforward, and often surprisingly effective way to explore different prompt phrasings and establish a strong performance baseline.


✅ Best For

  • Establishing a quick baseline
  • Simple tasks like summarization or classification
  • Broad, unbiased exploration of the prompt space
  • Projects with a low computational budget

❌ Not Ideal For

  • Complex, nuanced, or multi-step reasoning tasks
  • Directed, efficient optimization when failure modes are known
  • Tasks requiring highly structured or constrained prompts
  • Finding the absolute, state-of-the-art best prompt

How It Works

The Random Search process involves three main steps:

1. Generate Variations

You provide an initial prompt. The optimizer then uses a powerful teacher_model (such as GPT-4o) to generate num_variations diverse rewrites of that prompt.
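Conceptually, this step amounts to sending the teacher model a single meta-prompt asking for rewrites. The function below is a hypothetical sketch of what such a request might look like; the library's actual meta-prompt may differ:

```python
def build_variation_request(initial_prompt: str, num_variations: int) -> str:
    # Hypothetical sketch of the instruction sent to the teacher model;
    # the real optimizer's meta-prompt may be worded differently.
    return (
        f"Rewrite the following prompt in {num_variations} diverse ways. "
        "Preserve the original intent and keep any placeholders "
        "(such as {article}) unchanged.\n\n"
        f"Original prompt:\n{initial_prompt}"
    )

request = build_variation_request("Summarize this article: {article}", 10)
```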

2. Evaluate All Variations

The optimizer iterates through each generated variation. For each one, it generates outputs for all examples in your dataset and scores them using the provided evaluator.

3. Select the Best

The variation that achieves the highest average score across the entire dataset is chosen as the best prompt. The process concludes, and this top-performing prompt is returned.
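Put together, the three steps reduce to a generate-score-argmax loop. The sketch below uses stand-in functions for the teacher model and the evaluator (both hypothetical) to show the control flow, not the library's implementation:

```python
from statistics import mean

def random_search(initial_prompt, generate_variations, score_fn, dataset):
    # generate_variations stands in for the teacher-model call;
    # score_fn stands in for the evaluator. Illustrative sketch only.
    candidates = [initial_prompt] + generate_variations(initial_prompt)
    avg_scores = {
        prompt: mean(score_fn(prompt, example) for example in dataset)
        for prompt in candidates
    }
    best = max(avg_scores, key=avg_scores.get)
    return best, avg_scores[best]

# Toy usage: a score function that simply prefers longer, more specific prompts.
best, score = random_search(
    "Summarize: {article}",
    lambda p: [p + " Be concise.", p + " Use bullet points."],
    lambda prompt, example: len(prompt) / 100,
    dataset=[{"article": "..."}] * 3,
)
```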


Basic Usage

from fi.opt.optimizers import RandomSearchOptimizer
from fi.opt.generators import LiteLLMGenerator
from fi.opt.datamappers import BasicDataMapper
from fi.opt.base.evaluator import Evaluator

# 1. Define the generator with the initial prompt to be optimized
initial_generator = LiteLLMGenerator(
    model="gpt-4o-mini",
    prompt_template="Summarize this article: {article}"
)

# 2. Setup the evaluator to score prompt performance
evaluator = Evaluator(
    eval_template="summary_quality",
    eval_model_name="turing_flash",
    fi_api_key="your_key",
    fi_secret_key="your_secret"
)

# 3. Setup the data mapper
data_mapper = BasicDataMapper(
    key_map={"input": "article", "output": "generated_output"}
)

# 4. Initialize the Random Search optimizer
# It needs the generator to optimize, a powerful teacher model, and the number of variations to try.
optimizer = RandomSearchOptimizer(
    generator=initial_generator,
    teacher_model="gpt-4o",
    num_variations=10
)

# 5. Run the optimization (my_dataset is your list of evaluation examples)
result = optimizer.optimize(
    evaluator=evaluator,
    data_mapper=data_mapper,
    dataset=my_dataset
)

print(f"Best prompt found: {result.best_generator.get_prompt_template()}")
print(f"Final score: {result.final_score:.4f}")

Parameters

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| generator | BaseGenerator | required | Generator to optimize (its prompt template is modified) |
| teacher_model | str | gpt-5 | Model that generates variations (e.g. gpt-4o, claude-3-opus) |
| num_variations | int | 5 | Number of prompt variations to generate and evaluate |
| teacher_model_kwargs | dict | — | Extra args for the teacher model (e.g. temperature for diversity) |

Tips

  • Use a strong teacher model, and start with num_variations=5 before trying 10–20.
  • If all variations score about the same, increase num_variations or check that your evaluator can discriminate between outputs.
  • If the rewrites look too similar to each other, raise temperature in teacher_model_kwargs.
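For example, teacher_model_kwargs can pass sampling settings through to the teacher model. The fragment below assumes OpenAI-style sampling parameters (the exact keys accepted depend on the teacher model's provider):

```python
optimizer = RandomSearchOptimizer(
    generator=initial_generator,
    teacher_model="gpt-4o",
    num_variations=10,
    # Higher temperature -> more diverse rewrites (assumed pass-through kwarg)
    teacher_model_kwargs={"temperature": 1.2},
)
```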


Underlying Research

Random search is a foundational technique in hyperparameter tuning, valued for its simplicity and surprising effectiveness.

  • Baseline strength: Random Sampling as a Strong Baseline for Prompt Optimisation shows that simple random sampling can be highly competitive for improving prompts.
  • Use in toolkits: It is often the first step in prompt optimization to explore the landscape and avoid local optima in the discrete, high-dimensional space of prompt engineering.
