1

Navigate to Run Prompt

Click the “Run Prompt” button in the top-right corner to begin creating a prompt.
2

Assign Prompt Name

Assign a name to your prompt. This name will appear as a new column in your dataset.
3

Choose Model Type

Select the model type based on your use case.
Choose “LLM” to generate text responses using general-purpose LLM models. Recommended for everyday use cases.
Click here to learn how to create custom models.
4

Configure Prompt with Roles

Define your prompt using roles. You can configure messages with different roles:
  • User Role (Required): The main input message from the user perspective. This role is required for the prompt to work.
  • System Role (Optional): System-level instructions that guide the model’s behavior and set the context.
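The two roles above can be pictured as a messages array in the OpenAI-style format used by many LLM APIs. This is an illustrative sketch; the exact payload your platform builds from the prompt editor may differ:

```python
# Hypothetical messages array illustrating the two roles; the exact
# structure depends on the platform's underlying API.
messages = [
    # Optional system role: sets context and steers the model's behavior
    {"role": "system", "content": "You are a helpful assistant that summarizes content."},
    # Required user role: the main input message
    {"role": "user", "content": "Please summarize the following text: ..."},
]
```

If you omit the system message, the model falls back on its default behavior, so the user message alone still produces a valid prompt.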

Using Variables

You can reference dataset columns as variables within your prompt using the {{ }} syntax. Simply wrap the column name in double curly braces.

Basic Example:
System: You are a helpful assistant that summarizes content.

User: Please summarize the following text: {{column_name}}
The variables (column names) will be dynamically replaced with actual values from your dataset when the prompt runs.
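The substitution described above can be sketched with a small template renderer. This is a simplified illustration of the behavior, not the platform's actual implementation:

```python
import re

def render_prompt(template: str, row: dict) -> str:
    """Replace {{column_name}} placeholders with values from a dataset row.

    A minimal sketch: the platform's own renderer may handle missing
    columns, escaping, and typing differently.
    """
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",                # match {{ column_name }}
        lambda m: str(row[m.group(1)]),        # look the column up in the row
        template,
    )

# One dataset row, keyed by column name
row = {"column_name": "The quick brown fox jumps over the lazy dog."}
prompt = render_prompt("Please summarize the following text: {{column_name}}", row)
# `prompt` now contains the row's actual text in place of the placeholder
```

When the prompt runs across the dataset, this substitution happens once per row, so each execution sees that row's own values.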

JSON Dot Notation

For JSON type columns, you can access nested fields directly using dot notation. This allows you to reference specific keys within structured data without additional processing.

JSON Example:
User: Based on this prompt: {{column_name.key_name}}, generate a response that addresses {{column_name.key_name}}
In this example:
  • {{column_name.key_name}} accesses the key_name field within the column_name JSON column
This feature significantly simplifies complex data handling and speeds up setup when working with structured JSON data in your dataset.
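The dot-notation lookup can be sketched by extending the placeholder syntax to walk nested keys. Again, this is an illustrative simplification, not the platform's actual resolver:

```python
import re

def render_prompt(template: str, row: dict) -> str:
    """Resolve {{column.key}} dot paths into nested JSON values.

    A minimal sketch of the dot-notation behavior: each dot descends
    one level into the column's parsed JSON object.
    """
    def resolve(match: re.Match) -> str:
        value = row
        for part in match.group(1).split("."):
            value = value[part]                # descend one nesting level
        return str(value)

    return re.sub(r"\{\{\s*([\w.]+)\s*\}\}", resolve, template)

# A JSON column whose parsed value is a nested object
row = {"column_name": {"key_name": "renewable energy"}}
rendered = render_prompt(
    "Generate a response that addresses {{column_name.key_name}}", row
)
```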
5

Configure Model Parameters (optional)

Adjust model parameters such as temperature, max tokens, top_p, and other settings to fine-tune the model’s behavior according to your needs.
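As a point of reference, these parameter names follow the conventions common to most LLM APIs; the exact knobs available and their valid ranges depend on the model you selected:

```python
# Hypothetical parameter settings; names follow common LLM API
# conventions, but your model may expose a different set.
model_parameters = {
    "temperature": 0.2,   # lower values give more deterministic output
    "max_tokens": 512,    # upper bound on the length of each response
    "top_p": 0.9,         # nucleus-sampling cutoff (probability mass kept)
}
```

A low temperature suits extraction and summarization over a dataset, where consistent outputs per row are usually preferable to creative variation.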
6

Configure Tools (optional)

Add tools or functions that the model can use during execution. This enables the model to perform specific actions or access external capabilities.
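A tool is typically described to the model as a name, a description, and a JSON-schema parameter spec. The example below uses that widespread style; the exact schema your platform expects may differ, and `get_weather` is a made-up function for illustration:

```python
# Hypothetical tool definition in the JSON-schema style used by many
# LLM APIs; the platform's required format may differ.
weather_tool = {
    "name": "get_weather",  # function the model may choose to call
    "description": "Look up the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name"},
        },
        "required": ["city"],
    },
}
```

During execution, the model can respond with a call to this tool (e.g. `get_weather(city="Berlin")`) instead of plain text, and the tool's result is fed back into the conversation.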
7

Configure Concurrency

Set the concurrency level to control how many prompt executions run in parallel. Higher concurrency speeds up processing but may consume more resources.
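The trade-off above can be sketched with a thread pool: the worker count plays the role of the concurrency level, and each task stands in for one prompt execution. `run_prompt` here is a placeholder, not a real platform call:

```python
from concurrent.futures import ThreadPoolExecutor

def run_prompt(row: dict) -> str:
    """Placeholder for a single prompt execution (e.g. one model API call)."""
    return f"response for {row['id']}"

rows = [{"id": i} for i in range(10)]

# max_workers is the concurrency level: higher values finish the dataset
# sooner but consume more resources (and, typically, more API quota).
with ThreadPoolExecutor(max_workers=4) as pool:
    responses = list(pool.map(run_prompt, rows))  # results keep row order
```

Because prompt executions are I/O-bound (waiting on the model), raising concurrency mostly overlaps that waiting time rather than adding CPU load, until provider rate limits become the bottleneck.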
8

Run Prompt

Click the “Run” button to execute the prompt across your dataset. The responses will be generated and saved as a new dynamic column in your dataset.