Understanding Agent Playground
Learn the core building blocks of Agent Playground: graphs, nodes, ports, edges, and node templates.
About
Agent Playground is built around a small set of core building blocks. Understanding these helps you design and debug workflows effectively. This page explains how graphs, nodes, ports, edges, and templates fit together.
Graphs
A graph is the top-level container for your AI workflow. It is a series of connected steps where data flows from inputs through each node to outputs.
Each graph has:
- Name and description for identification
- Collaborators who can view and edit the graph
- One or more versions (snapshots of the workflow at different points in time)
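The shape of a graph can be sketched as a simple container. This is an illustrative model only; the field names and the example values are assumptions, not the platform's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class GraphVersion:
    label: str             # e.g. "v1" or "draft"
    is_draft: bool = True

@dataclass
class Graph:
    name: str
    description: str = ""
    collaborators: list[str] = field(default_factory=list)   # users who can view/edit
    versions: list[GraphVersion] = field(default_factory=list)

# A graph with one published snapshot and one collaborator.
g = Graph(name="support-triage", description="Routes tickets to the right team")
g.collaborators.append("alice@example.com")
g.versions.append(GraphVersion(label="v1", is_draft=False))
```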
Nodes
Nodes are the building blocks of your workflow. Each node represents a single step that takes inputs, performs an operation, and produces outputs.
LLM Prompt Nodes
LLM Prompt nodes execute a prompt against a language model. They connect directly to the Prompt Management system:
- Prompt template defines the prompt text with {{variable}} placeholders
- Model specifies which LLM to call (GPT-4, Claude, etc.)
- Parameters control generation behavior (temperature, max tokens, top-p)
- Response format determines output structure (plain text or JSON)
When the linked prompt template is updated, the node’s input ports automatically sync to match the new variables.
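The sync behavior can be sketched by extracting the {{variable}} placeholders from a template and treating each one as an input port. The config dict below is illustrative; its field names are assumptions, not the platform's actual schema:

```python
import re

def template_variables(prompt_template: str) -> list[str]:
    # Extract {{variable}} placeholders in order of first appearance.
    seen = []
    for name in re.findall(r"\{\{\s*(\w+)\s*\}\}", prompt_template):
        if name not in seen:
            seen.append(name)
    return seen

# Hypothetical LLM Prompt node configuration.
node_config = {
    "prompt_template": "Summarize {{document}} for a {{audience}} audience.",
    "model": "gpt-4",
    "parameters": {"temperature": 0.2, "max_tokens": 512, "top_p": 1.0},
    "response_format": "text",
}

# When the template changes, re-deriving this list keeps the input ports in sync.
input_ports = template_variables(node_config["prompt_template"])
print(input_ports)  # ['document', 'audience']
```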
Agent (Subgraph) Nodes
Agent nodes embed an entire other graph as a single step in your workflow. This enables:
- Modularity: break complex workflows into reusable sub-workflows
- Composition: combine multiple agents into a larger pipeline
- Encapsulation: the parent graph only sees the subgraph’s exposed input and output ports
Note
Subgraph nodes can only reference non-draft versions of other graphs. Circular references (Graph A embeds Graph B, which embeds Graph A) are detected and blocked.
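The circular-reference check described above amounts to reachability in the embedding relation: before Graph P may embed Graph C, the platform must confirm that C does not (transitively) embed P. A minimal sketch, assuming a mapping from each graph to the graphs it embeds:

```python
def creates_cycle(embeds: dict[str, set[str]], parent: str, child: str) -> bool:
    """Return True if embedding `child` inside `parent` would create a
    circular reference, i.e. `child` already reaches `parent`."""
    stack, seen = [child], set()
    while stack:
        g = stack.pop()
        if g == parent:
            return True
        if g in seen:
            continue
        seen.add(g)
        stack.extend(embeds.get(g, ()))
    return False

embeds = {"A": {"B"}, "B": set()}
print(creates_cycle(embeds, "B", "A"))  # True: A embeds B, so B cannot embed A
print(creates_cycle(embeds, "A", "C"))  # False: C embeds nothing
```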
Ports
Ports are typed connection points on every node. They define the data contract: what a node expects as input and what it produces as output.
Each port has:
- Direction: input or output
- Key: a unique identifier (e.g., prompt, response, output)
- Display name: a human-readable label
- Data schema: a JSON Schema definition that validates data at runtime
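A port's data contract can be sketched as a small record plus a runtime check against its schema. The validator below only handles the common {"type": ...} case; a real implementation would use a full JSON Schema validator, and the field shapes are assumptions:

```python
from dataclasses import dataclass

@dataclass
class Port:
    direction: str     # "input" or "output"
    key: str           # unique identifier, e.g. "prompt"
    display_name: str  # human-readable label
    schema: dict       # JSON Schema definition

def validates(port: Port, value) -> bool:
    # Minimal check of the schema's "type" keyword against the Python value.
    py_types = {
        "string": str, "number": (int, float),
        "object": dict, "array": list, "boolean": bool,
    }
    expected = port.schema.get("type")
    return expected is None or isinstance(value, py_types[expected])

p = Port(direction="input", key="prompt", display_name="Prompt",
         schema={"type": "string"})
print(validates(p, "Hello"))  # True
print(validates(p, 42))       # False: schema demands a string
```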
Exposed Ports
When an input port has no incoming edge, it becomes an exposed port: an entry point for the graph. Similarly, output ports with no outgoing edges are exposed as graph outputs. Exposed input ports automatically become columns in the graph’s dataset for execution.
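Computing the exposed ports is a set difference: every input port minus those wired by an edge, and likewise for outputs. A sketch with hypothetical node and port names, assuming edges are (source_node, source_port, target_node, target_port) tuples:

```python
# One edge: prompt1's response feeds agent2's question.
edges = [("prompt1", "response", "agent2", "question")]
inputs = {("prompt1", "topic"), ("agent2", "question"), ("agent2", "context")}
outputs = {("prompt1", "response"), ("agent2", "answer")}

wired_inputs = {(e[2], e[3]) for e in edges}   # input ports with an incoming edge
wired_outputs = {(e[0], e[1]) for e in edges}  # output ports with an outgoing edge

exposed_inputs = inputs - wired_inputs    # entry points: become dataset columns
exposed_outputs = outputs - wired_outputs # graph outputs

print(sorted(exposed_inputs))   # [('agent2', 'context'), ('prompt1', 'topic')]
print(sorted(exposed_outputs))  # [('agent2', 'answer')]
```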
Edges
Edges are the connections that carry data between nodes. Each edge links one node’s output port to another node’s input port.
Rules:
- Fan-out is allowed: one output port can connect to multiple input ports (data is broadcast to all targets)
- Fan-in is blocked: each input port accepts only one incoming edge
- No cycles: the graph cannot loop back on itself. The platform detects and prevents cycles at connection time
- Type validation: the platform checks that connected ports have compatible data schemas
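The fan-in and no-cycle rules can be checked at connection time with a port-occupancy test plus a reachability walk. A minimal sketch (schema compatibility is omitted), again assuming (source_node, source_port, target_node, target_port) edge tuples:

```python
def can_connect(edges, src, src_port, dst, dst_port):
    """Validate a proposed edge src.src_port -> dst.dst_port."""
    # Fan-in is blocked: the target input port must not already be wired.
    if any(e[2] == dst and e[3] == dst_port for e in edges):
        return False
    # No cycles: dst must not already reach src through existing edges.
    stack, seen = [dst], set()
    while stack:
        node = stack.pop()
        if node == src:
            return False
        if node in seen:
            continue
        seen.add(node)
        stack.extend(e[2] for e in edges if e[0] == node)
    return True

edges = [("a", "out", "b", "in")]
print(can_connect(edges, "b", "out", "c", "in"))  # True: extends the chain
print(can_connect(edges, "c", "out", "b", "in"))  # False: b.in already wired
print(can_connect(edges, "b", "out", "a", "in"))  # False: would create a->b->a
```

Note that fan-out needs no check here: one output port may feed any number of input ports, so only the target side is constrained.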
Node Templates
Node templates are the registry of available node types. They define the default configuration for each type of node, including:
- Port definitions: what inputs and outputs the node type has
- Port mode: strict, extensible, or dynamic
- Config schema: JSON Schema for the node’s configuration (model parameters, settings, etc.)
The platform ships with built-in templates (LLM Prompt, Agent) and supports custom templates for specialized use cases. Templates are seeded system-wide and available to all users.
Tip
When you drag a node from the selection panel onto the canvas, the platform creates a new node instance from the matching template and auto-generates its ports based on the template’s port definitions.
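Instantiation from a template can be sketched as a lookup plus a copy of the template's port definitions. The template names echo the built-ins mentioned above, but the registry shape and field names are illustrative assumptions:

```python
# Hypothetical system-wide template registry.
TEMPLATES = {
    "llm_prompt": {
        "port_mode": "extensible",
        "ports": [
            {"direction": "input", "key": "prompt", "schema": {"type": "string"}},
            {"direction": "output", "key": "response", "schema": {"type": "string"}},
        ],
        "config_schema": {"type": "object",
                          "properties": {"model": {"type": "string"}}},
    },
}

def instantiate(template_key: str, node_id: str) -> dict:
    """Create a node instance, auto-generating ports from the template."""
    tpl = TEMPLATES[template_key]
    return {
        "id": node_id,
        "template": template_key,
        "ports": [dict(p) for p in tpl["ports"]],  # copy, so edits stay local
        "config": {},
    }

node = instantiate("llm_prompt", "node-1")
print([p["key"] for p in node["ports"]])  # ['prompt', 'response']
```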
Next Steps
- Versions & Execution: How the version lifecycle and execution model work
- Create a Graph: Create your first workflow
- Build a Workflow: Add nodes, configure them, and connect them