Navigate to Run Prompt
Click on the “Run Prompt” button in the top-right corner to begin creating a prompt.
Choose Model Type
Select the model type based on your use case.
- LLM
- Text-to-Speech
- Speech-to-Text
Choose “LLM” to generate text responses using general-purpose large language models. Recommended for everyday use cases.

Configure Prompt with Roles
Define your prompt using messages with different roles:
- User Role (Required): The main input message from the user’s perspective. This role is required for the prompt to run.
- System Role (Optional): System-level instructions that guide the model’s behavior and set the context.

Variables (column names) in your messages are dynamically replaced with actual values from your dataset when the prompt runs.
Using Variables
You can reference dataset columns as variables within your prompt using the {{ }} syntax. Simply wrap the column name in double curly braces: {{column_name}} is replaced with the value of that column for each row.

JSON Dot Notation
For JSON-type columns, you can access nested fields directly using dot notation. This allows you to reference specific keys within structured data without additional processing: {{column_name.key_name}} accesses the key_name field within the column_name JSON column.
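To make the substitution rules concrete, here is a minimal sketch of how {{ }} placeholders and dot notation could be resolved against a dataset row. This is a hypothetical re-implementation for illustration; the platform performs this replacement for you at run time, and `render_prompt` is not part of its API.

```python
import json
import re

def render_prompt(template: str, row: dict) -> str:
    """Replace {{column}} and {{column.key}} placeholders with row values."""
    def resolve(match: re.Match) -> str:
        path = match.group(1).strip().split(".")
        value = row[path[0]]
        # JSON columns are stored as strings; parse them before walking
        # nested keys via dot notation.
        if isinstance(value, str) and len(path) > 1:
            value = json.loads(value)
        for key in path[1:]:
            value = value[key]
        return str(value)

    return re.sub(r"\{\{\s*([^}]+?)\s*\}\}", resolve, template)

row = {"name": "Ada", "profile": '{"city": "London"}'}
print(render_prompt("Hi {{name}} from {{profile.city}}", row))
# → Hi Ada from London
```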
Configure Model Parameters (optional)
Adjust model parameters such as temperature, max tokens, top_p, and other settings to fine-tune the model’s behavior according to your needs.
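For reference, the parameters named above typically look like the following. The names and ranges mirror common LLM APIs and are assumptions; the exact set exposed in this UI may differ.

```python
# Hypothetical parameter values illustrating the common knobs.
model_params = {
    "temperature": 0.7,  # higher values produce more random sampling
    "max_tokens": 512,   # upper bound on the number of generated tokens
    "top_p": 0.9,        # nucleus sampling: keep the top 90% probability mass
}
```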
Configure Tools (optional)
Add tools or functions that the model can use during execution. This enables the model to perform specific actions or access external capabilities.
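As an illustration, a tool definition often takes the JSON-schema form used by many LLM APIs. The tool name, description, and schema below are invented for the example, and the exact format this platform expects may differ.

```python
# Hypothetical tool definition in the JSON-schema style common to LLM APIs.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}
```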
Configure Concurrency
Set the concurrency level to control how many prompt executions run in parallel. Higher concurrency speeds up processing but may consume more resources.
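Conceptually, the concurrency setting works like a worker pool: with concurrency N, up to N dataset rows are processed at once. A minimal sketch, where `run_prompt` stands in for one prompt execution (it is a placeholder, not the platform's API):

```python
from concurrent.futures import ThreadPoolExecutor

def run_prompt(row: dict) -> str:
    # Placeholder for a single prompt execution against the model.
    return f"response for row {row['id']}"

rows = [{"id": i} for i in range(10)]
concurrency = 4  # how many prompt executions run in parallel

# Up to `concurrency` rows are in flight at any moment; results keep row order.
with ThreadPoolExecutor(max_workers=concurrency) as pool:
    results = list(pool.map(run_prompt, rows))
```

Raising `concurrency` shortens total wall-clock time but increases simultaneous load, which is why higher values may consume more resources.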

