Prompt Workbench Using SDK
Create, version, and run prompt templates programmatically using the Future AGI SDK (TypeScript/JavaScript or Python).
What it is
Prompt Workbench Using SDK lets you manage prompt templates from code: define templates (messages, variables, model config), create and version them in the Workbench, assign labels for deployment (e.g. Production, Staging), fetch templates by name and label or version, compile variables into messages, and run A/B tests by fetching different labeled versions. Because you use the Future AGI SDK (TypeScript/JavaScript or Python), prompts can live in version control and be deployed without touching the UI.
Use cases
- Version control and CI/CD — Define prompts in code, create/commit versions via SDK, and deploy with labels (Production, Staging).
- Fetch by name and label — Resolve the right prompt at runtime by template name and label (or version) without storing IDs.
- A/B testing — Fetch two labeled versions of a template, randomly choose one, compile, and send to your model; attach label to traces for comparison.
- Placeholders and dynamic history — Use placeholder messages in templates and supply chat history (or other message lists) at compile time.
Installation
```shell
npm install @future-agi/sdk
```

```shell
pip install futureagi
```

Note
The Python package is installed as `futureagi` but imported as `fi` (e.g. `from fi.prompt.client import Prompt`).
Template structure
Basic components
- Name: unique identifier (required)
- Messages: ordered list of messages
- Model configuration: model + generation params
- Variables: dynamic placeholders used in messages
Message types
- System: sets behavior/context
- User: contains the prompt; supports variables like {{var}}
- Assistant: few-shot examples or expected outputs

```json
{ "role": "system", "content": "You are a helpful assistant." }
{ "role": "user", "content": "Introduce {{name}} from {{city}}." }
{ "role": "assistant", "content": "Meet Ada from Berlin!" }
```
Model configuration fields
model_name, temperature, frequency_penalty, presence_penalty, max_tokens, top_p, response_format, tool_choice, tools
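For illustration, a configuration using these fields might look like the following. The values are examples only; accepted ranges and formats depend on the underlying model and provider:

```python
# Illustrative model configuration using the fields listed above.
# Values are examples, not recommendations.
model_config = {
    "model_name": "gpt-4o-mini",
    "temperature": 0.7,            # sampling randomness
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
    "max_tokens": 512,             # response length cap
    "top_p": 1.0,
    "response_format": {"type": "text"},
    "tool_choice": "auto",
    "tools": [],                   # tool/function definitions, if any
}
```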
Placeholders and compile
Add a placeholder message (type="placeholder", name="...") in your template. At compile time, supply an array of messages for that key; {{var}} variables are substituted in all message contents.
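Conceptually, compilation does two things: it splices each placeholder's supplied message list into the sequence, and it substitutes {{var}} values into string contents. A minimal pure-Python sketch of these semantics (illustrative only; the real SDK handles more cases, such as structured content):

```python
import re

def compile_messages(messages, **values):
    """Sketch of compile semantics: splice placeholders, substitute {{var}}."""
    def substitute(text):
        # Replace {{name}} with values["name"]; leave unknown variables intact
        return re.sub(r"\{\{(\w+)\}\}",
                      lambda m: str(values.get(m.group(1), m.group(0))), text)

    compiled = []
    for msg in messages:
        if msg.get("type") == "placeholder":
            # Inline the message list supplied under the placeholder's name
            for inlined in values.get(msg["name"], []):
                compiled.append({"role": inlined["role"],
                                 "content": substitute(inlined["content"])})
        else:
            compiled.append({"role": msg["role"],
                             "content": substitute(msg["content"])})
    return compiled
```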
```typescript
import { PromptTemplate, ModelConfig, MessageBase, Prompt } from "@future-agi/sdk";

const tpl = new PromptTemplate({
  name: "chat-template",
  messages: [
    { role: "system", content: "You are a helpful assistant." } as MessageBase,
    { role: "user", content: "Hello {{name}}!" } as MessageBase,
    { type: "placeholder", name: "history" } as any, // placeholder
  ],
  model_configuration: new ModelConfig({ model_name: "gpt-4o-mini" }),
});

const client = new Prompt(tpl);

// Compile with substitution and inlined chat history
const compiled = client.compile({
  name: "Alice",
  history: [{ role: "user", content: "Ping {{name}}" }],
} as any);
```

```python
from fi.prompt import Prompt, PromptTemplate, ModelConfig, SystemMessage, UserMessage

tpl = PromptTemplate(
    name="chat-template",
    messages=[
        SystemMessage(content="You are a helpful assistant."),
        UserMessage(content="Hello {{name}}!"),
        {"type": "placeholder", "name": "history"},
    ],
    model_configuration=ModelConfig(model_name="gpt-4o-mini"),
)

client = Prompt(template=tpl)
compiled = client.compile(name="Alice", history=[{"role": "user", "content": "Ping {{name}}"}])
```

Create templates
```typescript
import { Prompt, PromptTemplate, ModelConfig, MessageBase } from "@future-agi/sdk";

const tpl = new PromptTemplate({
  name: "intro-template",
  messages: [
    { role: "system", content: "You are a helpful assistant." } as MessageBase,
    { role: "user", content: "Introduce {{name}} from {{city}}." } as MessageBase,
  ],
  variable_names: { name: ["Ada"], city: ["Berlin"] },
  model_configuration: new ModelConfig({ model_name: "gpt-4o-mini" }),
});

const client = new Prompt(tpl);
await client.open(); // draft v1
await client.commitCurrentVersion("Finish v1", true); // set default
```

```python
from fi.prompt import Prompt, PromptTemplate, ModelConfig, SystemMessage, UserMessage

tpl = PromptTemplate(
    name="intro-template",
    messages=[
        SystemMessage(content="You are a helpful assistant."),
        UserMessage(content="Introduce {{name}} from {{city}}."),
    ],
    variable_names={"name": ["Ada"], "city": ["Berlin"]},
    model_configuration=ModelConfig(model_name="gpt-4o-mini"),
)

client = Prompt(template=tpl).create()  # draft v1
client.commit_current_version(message="Finish v1", set_default=True)
```

Versioning (step-by-step)
- Build the template (see above)
- Create draft v1 (JS/TS: `await client.open()`; Python: `client.create()`)
- Update the draft and save (JS/TS: `saveCurrentDraft()`; Python: `save_current_draft()`)
- Commit v1 and set it as default (JS/TS: `commitCurrentVersion("msg", true)`; Python: `commit_current_version(...)`)
- Open a new draft (JS/TS: `createNewVersion()`; Python: `create_new_version()`)
- Delete if needed (JS/TS: `delete()`; Python: `delete()`)
Labels (deployment control)
- System labels: Production, Staging, Development (predefined by backend)
- Custom labels: create explicitly and assign to versions
- Name-based APIs: manage by names (no IDs needed)
- Draft safety: cannot assign labels to drafts; assignments are queued and applied on commit
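The draft-safety rule can be pictured with a small toy model (this is not the SDK's implementation): label assignments requested against a draft are queued and only applied once that version is committed:

```python
class LabelBook:
    """Toy model of label assignment with draft safety (illustrative only)."""

    def __init__(self):
        self.labels = {}    # label name -> version
        self.pending = []   # queued (label, version) pairs for drafts
        self.drafts = set() # versions that are still drafts

    def assign(self, label, version):
        if version in self.drafts:
            # Draft: queue the assignment instead of applying it
            self.pending.append((label, version))
        else:
            self.labels[label] = version

    def commit(self, version):
        # Committing a draft applies any queued assignments for it
        self.drafts.discard(version)
        for label, v in [p for p in self.pending if p[1] == version]:
            self.labels[label] = v
        self.pending = [p for p in self.pending if p[1] != version]
```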
Assign labels
```typescript
// Assign by instance (current project)
await client.labels().assign("Production", "v1");
await client.labels().assign("Staging", "v2");

// Create and assign a custom label
await client.labels().create("Canary");
await client.labels().assign("Canary", "v2");

// Class helpers by names (org-wide context)
await Prompt.assignLabelToTemplateVersion("intro-template", "v2", "Development");
```

```python
# Assign by instance
client.assign_label("Production", version="v1")
client.assign_label("Staging", version="v2")

# Create and assign a custom label
client.create_label("Canary")
client.assign_label("Canary", version="v2")

# Class helpers by names
Prompt.assign_label_to_template_version(template_name="intro-template", version="v2", label="Development")
```

Remove labels
```typescript
await client.labels().remove("Canary", "v2");
await Prompt.removeLabelFromTemplateVersion("intro-template", "v2", "Development");
```

```python
client.remove_label("Canary", version="v2")
Prompt.remove_label_from_template_version(template_name="intro-template", version="v2", label="Development")
```

List labels and mappings
```typescript
const labels = await client.labels().list(); // system + custom
const mapping = await Prompt.getTemplateLabels({ template_name: "intro-template" });
```

```python
labels = client.list_labels()
mapping = Prompt.get_template_labels(template_name="intro-template")
```

Fetch by name + label (or version)
Note
- Precedence: version > label
- Python default: if no label is provided, defaults to "production"
- Return type: `get_template_by_name()` returns a `Prompt` instance (not a raw `PromptTemplate`). In Python you can call `.compile()` directly on it; in TypeScript you wrap the returned template in `new Prompt(tpl)` and then call `.compile()`.
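The precedence and default rules above can be captured in a few lines. This is a hypothetical helper mirroring the documented behavior, not an SDK function:

```python
def resolve_ref(version=None, label=None):
    """Version takes precedence over label; with neither given,
    fall back to the 'production' label (Python SDK default)."""
    if version is not None:
        return ("version", version)
    return ("label", label or "production")
```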
```typescript
import { Prompt } from "@future-agi/sdk";

const tplByLabel = await Prompt.getTemplateByName("intro-template", { label: "Production" });
const tplByVersion = await Prompt.getTemplateByName("intro-template", { version: "v2" });
```

```python
from fi.prompt import Prompt

tpl_by_label = Prompt.get_template_by_name("intro-template", label="Production")
tpl_by_version = Prompt.get_template_by_name("intro-template", version="v2")
```

A/B testing with labels (compile → OpenAI gpt-4o)
Fetch two labeled versions of the same template (e.g., prod-a and prod-b), randomly select one, compile variables, and send the compiled messages to OpenAI.
Note
The compile() API replaces {{var}} in string contents and preserves structured contents. Ensure your template contains the variables you pass (e.g., {{name}}, {{city}}).
```typescript
import OpenAI from "openai";
import { Prompt, PromptTemplate } from "@future-agi/sdk";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY! });

// Fetch both label variants
const [tplA, tplB] = await Promise.all([
  Prompt.getTemplateByName("my-template-name", { label: "prod-a" }),
  Prompt.getTemplateByName("my-template-name", { label: "prod-b" }),
]);

// Randomly select a variant
const selected = Math.random() < 0.5 ? tplA : tplB;
const client = new Prompt(selected as PromptTemplate);

// Compile variables into the template messages
const compiled = client.compile({ name: "Ada", city: "Berlin" });

// Send to OpenAI gpt-4o
const completion = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: compiled as any,
});
const resultText = completion.choices[0]?.message?.content;
```

```python
import os
import random

from openai import OpenAI
from fi.prompt import Prompt

openai_client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

# Fetch both label variants (each returns a Prompt instance)
client_a = Prompt.get_template_by_name("my-template-name", label="prod-a")
client_b = Prompt.get_template_by_name("my-template-name", label="prod-b")

# Randomly select a variant
selected_client = client_a if random.random() < 0.5 else client_b

# Compile variables into the template messages
compiled = selected_client.compile(name="Ada", city="Berlin")

# Send to OpenAI gpt-4o
response = openai_client.chat.completions.create(
    model="gpt-4o",
    messages=compiled,
)
result_text = response.choices[0].message.content

# For analytics, log selected_client.template.version or the label (e.g. "prod-a" / "prod-b")
```

Note
For analytics, attach the selected label/version to your logs or tracing so A/B results can be compared.
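One lightweight way to do this is to emit a structured event per sample that records which variant produced the output. The helper and field names below are illustrative, not part of the SDK:

```python
import json
import time

def log_ab_event(template_name, label, version, output_text):
    """Hypothetical helper: serialize one A/B sample so results can be
    grouped by label/version later. Field names are illustrative."""
    event = {
        "ts": time.time(),
        "template": template_name,
        "label": label,          # e.g. "prod-a" / "prod-b"
        "version": version,
        "chars": len(output_text),
    }
    return json.dumps(event)
```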
Compile output format
The compile() method returns messages in a provider-agnostic format. Each message has role and content; content may be a string or a structured list of parts (e.g. text, images) depending on the SDK and template.
Example output structure:
```json
[
  {"role": "system", "content": "You are a helpful assistant."},
  {"role": "user", "content": "Hello Ada from Berlin!"}
]
```
Note
If your SDK or backend returns content as a stringified list of content parts (e.g. for multimodal content), you may need an adapter to convert to your target LLM provider’s format (e.g. OpenAI’s role + content string).
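A minimal adapter for that case might look like the following sketch, which flattens a list of text parts into a plain string. It assumes parts shaped like `{"type": "text", "text": "..."}`; your actual part shapes may differ:

```python
def to_openai_messages(messages):
    """Sketch adapter: flatten structured content parts into plain strings
    for providers that expect role + string content."""
    out = []
    for msg in messages:
        content = msg["content"]
        if isinstance(content, list):
            # Keep only text parts; non-text parts (e.g. images) would need
            # provider-specific handling instead of being dropped
            content = "".join(
                part.get("text", "") for part in content if part.get("type") == "text"
            )
        out.append({"role": msg["role"], "content": content})
    return out
```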