Create and Manage Prompt Templates (SDK)
Create and manage prompt templates using the Future AGI SDK. Define versioned prompts with variables, labels, and model configuration parameters.
Create and manage prompt templates
Prompt templates let you define reusable, versioned prompts with dynamic variables.
- Labels: Production, Staging, and Development, plus custom labels, to control deployments per version
- Name-based management: Manage templates, versions, and labels by names (no IDs)
- Placeholders & compile: Assemble prompts with variables and inlined message blocks
- Safe behavior: Labels cannot be assigned to drafts; assignments are queued until commit
Template structure
Basic components
- Name: unique identifier (required)
- Messages: ordered list of messages
- Model configuration: model + generation params
- Variables: dynamic placeholders used in messages
Message types
- System: sets behavior/context
- User: contains the prompt; supports variables like `{{var}}`
- Assistant: few-shot examples or expected outputs
```json
{ "role": "system", "content": "You are a helpful assistant." }
{ "role": "user", "content": "Introduce {{name}} from {{city}}." }
{ "role": "assistant", "content": "Meet Ada from Berlin!" }
```
Model configuration fields
model_name, temperature, frequency_penalty, presence_penalty, max_tokens, top_p, response_format, tool_choice, tools
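As a rough illustration of how these fields fit together, here is a plain dictionary with the field names listed above; the specific values are examples only, not recommended defaults.

```python
# Illustrative model-configuration values. The field names come from the list
# above; every value shown here is an example, not an SDK default.
model_configuration = {
    "model_name": "gpt-4o-mini",
    "temperature": 0.7,
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
    "max_tokens": 512,
    "top_p": 1.0,
    "response_format": {"type": "text"},
    "tool_choice": "auto",
    "tools": [],
}
```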
Placeholders and compile
Add a placeholder message (type="placeholder", name="...") in your template. At compile time, supply an array of messages for that key; {{var}} variables are substituted in all message contents.
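Conceptually, compile does two things: it inlines the message array supplied for each placeholder key, then substitutes `{{var}}` tokens in every string content. The sketch below models that behavior in plain Python; it is an illustration of the semantics, not the SDK implementation.

```python
import re

def compile_messages(messages, **values):
    """Conceptual model of compile(): inline placeholder messages, then
    substitute {{var}} tokens in all string contents. Not the SDK itself."""
    expanded = []
    for msg in messages:
        if msg.get("type") == "placeholder":
            # Replace the placeholder with the message list supplied under its name
            expanded.extend(dict(m) for m in values.get(msg["name"], []))
        else:
            expanded.append(dict(msg))
    for msg in expanded:
        if isinstance(msg.get("content"), str):
            # Unknown variables are left untouched
            msg["content"] = re.sub(
                r"\{\{(\w+)\}\}",
                lambda m: str(values.get(m.group(1), m.group(0))),
                msg["content"],
            )
    return expanded

msgs = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello {{name}}!"},
    {"type": "placeholder", "name": "history"},
]
result = compile_messages(
    msgs, name="Alice", history=[{"role": "user", "content": "Ping {{name}}"}]
)
```

Note that substitution also applies to the inlined placeholder messages, which is why `Ping {{name}}` above becomes `Ping Alice`.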
```typescript
import { PromptTemplate, ModelConfig, MessageBase, Prompt } from "@futureagi/sdk";

const tpl = new PromptTemplate({
  name: "chat-template",
  messages: [
    { role: "system", content: "You are a helpful assistant." } as MessageBase,
    { role: "user", content: "Hello {{name}}!" } as MessageBase,
    { type: "placeholder", name: "history" } as any, // placeholder message
  ],
  model_configuration: new ModelConfig({ model_name: "gpt-4o-mini" }),
});

const client = new Prompt(tpl);

// Compile with substitution and inlined chat history
const compiled = client.compile({
  name: "Alice",
  history: [{ role: "user", content: "Ping {{name}}" }],
} as any);
```

```python
from fi.prompt.types import PromptTemplate, SystemMessage, UserMessage, ModelConfig
from fi.prompt.client import Prompt

tpl = PromptTemplate(
    name="chat-template",
    messages=[
        SystemMessage(content="You are a helpful assistant."),
        UserMessage(content="Hello {{name}}!"),
        {"type": "placeholder", "name": "history"},
    ],
    model_configuration=ModelConfig(model_name="gpt-4o-mini"),
)

client = Prompt(template=tpl)

# Compile with substitution and inlined chat history
compiled = client.compile(name="Alice", history=[{"role": "user", "content": "Ping {{name}}"}])
```

Create templates
```typescript
import { Prompt, PromptTemplate, ModelConfig, MessageBase } from "@futureagi/sdk";

const tpl = new PromptTemplate({
  name: "intro-template",
  messages: [
    { role: "system", content: "You are a helpful assistant." } as MessageBase,
    { role: "user", content: "Introduce {{name}} from {{city}}." } as MessageBase,
  ],
  variable_names: { name: ["Ada"], city: ["Berlin"] },
  model_configuration: new ModelConfig({ model_name: "gpt-4o-mini" }),
});

const client = new Prompt(tpl);
await client.open(); // draft v1
await client.commitCurrentVersion("Finish v1", true); // set default
```

```python
from fi.prompt.types import PromptTemplate, SystemMessage, UserMessage, ModelConfig
from fi.prompt.client import Prompt

tpl = PromptTemplate(
    name="intro-template",
    messages=[
        SystemMessage(content="You are a helpful assistant."),
        UserMessage(content="Introduce {{name}} from {{city}}."),
    ],
    variable_names={"name": ["Ada"], "city": ["Berlin"]},
    model_configuration=ModelConfig(model_name="gpt-4o-mini"),
)

client = Prompt(template=tpl).create()  # draft v1
client.commit_current_version(message="Finish v1", set_default=True)
```

Versioning (step-by-step)
- Build the template (see above)
- Create draft v1 (JS/TS: `await client.open()`; Python: `client.create()`)
- Update draft & save (JS/TS: `saveCurrentDraft()`; Python: `save_current_draft()`)
- Commit v1 and set default (JS/TS: `commitCurrentVersion("msg", true)`; Python: `commit_current_version(message="msg", set_default=True)`)
- Open a new draft (JS/TS: `createNewVersion()`; Python: `create_new_version()`)
- Delete if needed (JS/TS: `delete()`; Python: `delete()`)
Labels (deployment control)
- System labels: Production, Staging, Development (predefined by backend)
- Custom labels: create explicitly and assign to versions
- Name-based APIs: manage by names (no IDs needed)
- Draft safety: cannot assign labels to drafts; assignments are queued and applied on commit
Assign labels
```typescript
// Assign by instance (current project)
await client.labels().assign("Production", "v1");
await client.labels().assign("Staging", "v2");

// Create and assign a custom label
await client.labels().create("Canary");
await client.labels().assign("Canary", "v2");

// Class helpers by names (org-wide context)
await Prompt.assignLabelToTemplateVersion("intro-template", "v2", "Development");
```

```python
# Assign by instance
client.assign_label("Production", version="v1")
client.assign_label("Staging", version="v2")

# Create and assign a custom label
client.create_label("Canary")
client.assign_label("Canary", version="v2")

# Class helpers by names
Prompt.assign_label_to_template_version(template_name="intro-template", version="v2", label="Development")
```

Remove labels
```typescript
await client.labels().remove("Canary", "v2");
await Prompt.removeLabelFromTemplateVersion("intro-template", "v2", "Development");
```

```python
client.remove_label("Canary", version="v2")
Prompt.remove_label_from_template_version(template_name="intro-template", version="v2", label="Development")
```

List labels and mappings
```typescript
const labels = await client.labels().list(); // system + custom
const mapping = await Prompt.getTemplateLabels({ template_name: "intro-template" });
```

```python
labels = client.list_labels()
mapping = Prompt.get_template_labels(template_name="intro-template")
```

Fetch by name + label (or version)
Note
- Precedence: version > label
- Python default: if no label is provided, it defaults to "production"
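These fetch rules can be summed up in a few lines. The helper below is purely illustrative (it is not part of the SDK); it shows which selector wins when both are supplied.

```python
def resolve_selector(version=None, label=None):
    """Hypothetical helper illustrating the fetch rules: an explicit version
    wins over a label, and with neither supplied the Python SDK falls back
    to the "production" label."""
    if version is not None:
        return ("version", version)
    return ("label", label if label is not None else "production")
```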
```typescript
import { Prompt } from "@futureagi/sdk";

const tplByLabel = await Prompt.getTemplateByName("intro-template", { label: "Production" });
const tplByVersion = await Prompt.getTemplateByName("intro-template", { version: "v2" });
```

```python
from fi.prompt.client import Prompt

tpl_by_label = Prompt.get_template_by_name("intro-template", label="Production")
tpl_by_version = Prompt.get_template_by_name("intro-template", version="v2")
```

A/B testing with labels (compile -> OpenAI gpt-4o)
Fetch two labeled versions of the same template (e.g., prod-a and prod-b), randomly select one, compile variables, and send the compiled messages to OpenAI.
Note
The compile() API replaces {{var}} in string contents and preserves structured contents. Ensure your template contains the variables you pass (e.g., {{name}}, {{city}}).
```typescript
import OpenAI from "openai";
import { Prompt, PromptTemplate } from "@futureagi/sdk";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY! });

// Fetch both label variants
const [tplA, tplB] = await Promise.all([
  Prompt.getTemplateByName("my-template-name", { label: "prod-a" }),
  Prompt.getTemplateByName("my-template-name", { label: "prod-b" }),
]);

// Randomly select a variant
const selected = Math.random() < 0.5 ? tplA : tplB;
const client = new Prompt(selected as PromptTemplate);

// Compile variables into the template messages
const compiled = client.compile({ name: "Ada", city: "Berlin" });

// Send to OpenAI gpt-4o
const completion = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: compiled as any,
});
const resultText = completion.choices[0]?.message?.content;
```

```python
import os
import random

from openai import OpenAI
from fi.prompt.client import Prompt

openai_client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

# Fetch both label variants
tpl_a = Prompt.get_template_by_name("my-template-name", label="prod-a")
tpl_b = Prompt.get_template_by_name("my-template-name", label="prod-b")

# Randomly select a variant
selected_tpl = tpl_a if random.random() < 0.5 else tpl_b
client = Prompt(template=selected_tpl)

# Compile variables into the template messages
compiled = client.compile(name="Ada", city="Berlin")

# Send to OpenAI gpt-4o
response = openai_client.chat.completions.create(
    model="gpt-4o",
    messages=compiled,
)
result_text = response.choices[0].message.content
```

Note
For analytics, attach the selected label/version to your logs or tracing so A/B results can be compared.
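One lightweight way to do that is to emit a structured record per request that carries the variant identity alongside the outcome. The sketch below is a minimal example; the field names are illustrative, not a prescribed schema.

```python
import json
import time

def ab_log_record(template_name, label, version, latency_ms):
    """Minimal sketch of a per-request analytics record. Attach records like
    this to your logging or tracing pipeline so A/B variants can be compared.
    All field names here are illustrative."""
    return {
        "template_name": template_name,
        "label": label,          # which A/B variant served this request
        "version": version,
        "latency_ms": latency_ms,
        "timestamp": time.time(),
    }

record = ab_log_record("my-template-name", "prod-a", "v3", 412.5)
print(json.dumps(record))
```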
Linked Traces
Linking prompts to traces is essential for monitoring and improving the performance of your language model applications. By establishing this connection, you can track metrics and evaluations for each prompt version, facilitating iterative enhancements over time.
How to Link Prompts to Traces
To link prompts to traces, associate the prompt used in a generation with the corresponding trace; the detailed workflow is covered in the tracing documentation.
Metrics and Analytics
After linking prompts to traces, you can access various metrics to evaluate performance:
- Median Latency: Time taken for the model to generate a response
- Median Input Tokens: Number of tokens in the input prompt
- Median Output Tokens: Number of tokens in the generated response
- Median Costs: Cost associated with the generation process
- Traces Count: Total number of generations for a specific prompt
- First and Last Generation Timestamp: Timeframe of the generations
These metrics are accessible by navigating to your prompt in the Future AGI dashboard and viewing the Metrics tab.
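To make the median metrics above concrete, here is a small sketch computing them from raw trace records with the standard library. The record field names and values are illustrative assumptions, not the platform's schema.

```python
from statistics import median

# Illustrative trace records; field names are assumptions for this sketch.
traces = [
    {"latency_ms": 320, "input_tokens": 54, "output_tokens": 120, "cost_usd": 0.0021},
    {"latency_ms": 410, "input_tokens": 61, "output_tokens": 98,  "cost_usd": 0.0019},
    {"latency_ms": 290, "input_tokens": 47, "output_tokens": 143, "cost_usd": 0.0024},
]

# Aggregate per-prompt metrics the same way the dashboard's Metrics tab does:
# medians over all generations, plus a total count.
metrics = {
    "median_latency_ms": median(t["latency_ms"] for t in traces),
    "median_input_tokens": median(t["input_tokens"] for t in traces),
    "median_output_tokens": median(t["output_tokens"] for t in traces),
    "median_cost_usd": median(t["cost_usd"] for t in traces),
    "traces_count": len(traces),
}
```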
Prompt Folders
Prompt folders provide a powerful way to organize and categorize your prompt templates by grouping related prompts together. This organizational system enables teams to efficiently manage extensive prompt libraries while maintaining clear structure across diverse use cases and projects.
Creating Folders
You can create folders using the UI’s new folder button, which allows you to:
- Group Related Prompts: Organize prompts by functionality, team, or project
- Improve Navigation: Make it easier to find specific prompt templates
- Maintain Structure: Keep your prompt library organized as it grows
- Team Collaboration: Share folder structures across team members
Prompt Templates
Prompt templates serve as standardized, reusable prompt structures for consistent AI interactions. They provide a systematic approach to prompt management, enabling teams to maintain uniformity across applications while facilitating iterative development and collaborative workflows.
Future AGI provides a comprehensive library of pre-built prompt templates to accelerate your development process. These can be browsed under the Use Template section.
Core Benefits
- Standardization: Ensure consistent prompt structure and behavior across different use cases
- Reusability: Create once, deploy everywhere with dynamic variable substitution
- Collaboration: Enable team-based development with shared templates and review processes
- Performance Optimization: Track metrics and analytics to continuously improve prompt effectiveness
Quick reference
- Dedicated endpoints: labels are not sent via metadata
- Draft blocking: label assignments to drafts are queued and applied post-commit
- Name-based APIs: templates, versions, and labels referenced by names
- Compile: supports placeholders and structured content with `{{var}}` substitution
- Linked traces: automatic and manual linking of prompts to traces for monitoring and analytics
Note
This flow reflects the new backend behavior and provides parity between the JavaScript/TypeScript and Python SDKs.