Understanding Prompts
What a prompt is, how it is structured, how variables work, and how prompts connect to models in the Prompt Workbench.
About
A prompt is the instruction you send to a language model to produce a response. It tells the model who it is, what it should do, and what input to work with. Getting the prompt right is one of the most direct ways to improve the quality of your AI product.
In the Prompt Workbench, prompts are managed as templates. A template is a saved, versioned prompt that can be reused across datasets, simulations, experiments, and your application via the SDK.
Structure
A prompt in Future AGI is made up of one or more messages, each with a role:
| Role | Purpose |
|---|---|
| System | Sets the model’s behavior, persona, and constraints. Optional but highly effective for controlling tone and scope. |
| User | The actual input or instruction sent to the model. This is where the task or question lives. |
| Assistant | Used for few-shot examples: you provide sample responses to show the model the format or style you expect. |
Most prompts have at least a system message and a user message. The system message shapes how the model behaves; the user message drives what it produces.
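As a concrete illustration, the three roles can be sketched as a plain list of role/content dicts (the exact SDK classes may differ; this shows the structure only):

```python
# A minimal sketch of a prompt with all three roles, as plain dicts.
messages = [
    # System: persona and constraints.
    {"role": "system", "content": "You are a concise, professional support agent."},
    # Few-shot example: a sample exchange showing the expected style.
    {"role": "user", "content": "How do I reset my password?"},
    {"role": "assistant", "content": "Go to Settings > Security > Reset Password."},
    # The actual input that drives the response.
    {"role": "user", "content": "How do I change my email address?"},
]
```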
Variables
Variables make a prompt template reusable. Instead of hardcoding specific values, you use placeholders that get replaced with real data at runtime. This lets a single template run against many different inputs without being rewritten.
Syntax
Variables use double curly brace syntax: `{{variable_name}}`. You can place them anywhere in the system or user message content.
```
You are a support agent for {{company_name}}.
Answer the following customer question clearly and professionally:
{{customer_question}}
```
When this prompt is run, `{{company_name}}` and `{{customer_question}}` are replaced with the actual values you supply.
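The substitution step can be sketched in a few lines of Python. The `render` function below is a hypothetical, simplified stand-in for what the SDK does during compilation; the real implementation may handle missing keys, escaping, and validation differently:

```python
import re

def render(template: str, values: dict) -> str:
    """Replace {{name}} placeholders with the supplied values.
    Simplified illustration only, not the SDK's implementation."""
    return re.sub(r"\{\{(\w+)\}\}", lambda m: str(values[m.group(1)]), template)

prompt = "You are a support agent for {{company_name}}."
print(render(prompt, {"company_name": "Acme Corp"}))
# Every {{...}} placeholder is swapped for its value before the
# message is sent to the model.
```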
How variables are supplied
- **In the UI:** When you run a prompt against a dataset, you map dataset columns to the variable names the template expects.
- **In the SDK:** You pass a dictionary of variable names and values to the `compile()` method:
```python
compiled = client.compile(
    company_name="Acme Corp",
    customer_question="How do I reset my password?"
)
```

```typescript
const compiled = client.compile({
  company_name: "Acme Corp",
  customer_question: "How do I reset my password?"
});
```
The `compile()` method returns the fully resolved messages, ready to send to a model.
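For intuition, the resolved output has the shape of an ordinary chat message list. The exact structure may vary by SDK version; this is an illustrative example using the variables above:

```python
# Illustrative shape of compiled output after variable substitution.
compiled_messages = [
    {"role": "system", "content": "You are a support agent for Acme Corp."},
    {"role": "user", "content": "Answer the following customer question "
                                "clearly and professionally:\n"
                                "How do I reset my password?"},
]
```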
Placeholder messages
For dynamic chat history or multi-turn conversations, you can use a placeholder message instead of a variable inside a string. A placeholder is a special message with `type: "placeholder"` and a `name`. At compile time, you supply an array of messages for that key, and they are inserted into the message list at that position.
```python
tpl = PromptTemplate(
    name="chat-template",
    messages=[
        SystemMessage(content="You are a helpful assistant."),
        {"type": "placeholder", "name": "history"},
        UserMessage(content="{{question}}"),
    ],
)

compiled = client.compile(
    question="What is the refund policy?",
    history=[
        {"role": "user", "content": "Hi"},
        {"role": "assistant", "content": "Hello! How can I help?"},
    ],
)
```
This is useful when your prompt needs to include prior conversation turns that are only known at runtime.
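The splicing behavior can be sketched with a small helper. The `resolve` function is a simplified illustration of placeholder handling, not the SDK's actual `compile()` (which also resolves `{{variables}}` and validates inputs):

```python
def resolve(messages, **variables):
    """Expand placeholder entries into the supplied message arrays.
    Simplified illustration of placeholder splicing only."""
    out = []
    for msg in messages:
        if isinstance(msg, dict) and msg.get("type") == "placeholder":
            out.extend(variables[msg["name"]])  # splice history in place
        else:
            out.append(msg)
    return out

template = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"type": "placeholder", "name": "history"},
    {"role": "user", "content": "What is the refund policy?"},
]
resolved = resolve(template, history=[
    {"role": "user", "content": "Hi"},
    {"role": "assistant", "content": "Hello! How can I help?"},
])
# The two history messages are inserted where the placeholder sat,
# yielding a four-message conversation.
```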
Model Configuration
Each prompt template includes a model configuration: the model to use and the parameters that control its output.
| Setting | What it controls |
|---|---|
| Model | Which LLM processes the prompt |
| Temperature | Randomness of the output. Higher values produce more varied responses. |
| Max Tokens | Maximum length of the response |
| Top P | Token selection diversity |
| Presence / Frequency Penalty | Controls repetition in the output |
| Response Format | Output format, e.g. plain text or JSON |
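The settings in the table map naturally onto a configuration object. The parameter names below follow common OpenAI-style conventions and are an assumption; the Workbench UI exposes equivalent fields:

```python
# Hypothetical model configuration mirroring the table above.
model_config = {
    "model": "gpt-4o",               # which LLM processes the prompt
    "temperature": 0.2,              # low randomness for consistent answers
    "max_tokens": 512,               # cap on response length
    "top_p": 1.0,                    # token selection diversity
    "presence_penalty": 0.0,         # discourage introducing repeated topics
    "frequency_penalty": 0.0,        # discourage repeating the same tokens
    "response_format": {"type": "json_object"},  # e.g. force JSON output
}
```

For a support agent, a low temperature keeps answers consistent across runs; raise it for brainstorming or creative tasks.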
Next Steps
- Versions and Labels: How prompt versioning and deployment labels work.
- Create a Prompt from Scratch: Build your first prompt in the Workbench.
- Prompt SDK: Full SDK reference for compiling and fetching prompt templates.