# System Configuration
Complete the configuration for the LLM gateway, PeerDB CDC mirrors, and Temporal workers.
## About
Configure the moving parts that aren’t covered by .env alone: provider entries in the LLM gateway’s config.yaml, the PeerDB Postgres → ClickHouse replication mirrors, and Temporal worker concurrency.
## LLM gateway
> **Warning:** The LLM gateway requires additional configuration before model calls will work. You must create a config.yaml and provide your provider API keys; see the setup steps below.

The gateway is a Go LLM proxy that routes all model calls. It ships with config.example.yaml, which has OpenAI enabled by default.
### Setup
```bash
# 1. Copy the example
cp futureagi/agentcc-gateway/config.example.yaml \
   futureagi/agentcc-gateway/config.yaml

# 2. Edit config.yaml: uncomment providers, set keys via ${VAR} interpolation
# 3. Set matching keys in .env (OPENAI_API_KEY, ANTHROPIC_API_KEY, etc.)

# 4. Point the gateway volume at your config.yaml (in docker-compose.yml)
#    volumes:
#      - ./futureagi/agentcc-gateway/config.yaml:/app/config.yaml:ro

# 5. Restart
docker compose up -d --force-recreate gateway
```
`config.yaml` is gitignored. Treat it as a secret.
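The `${VAR}` interpolation means placeholders like `${OPENAI_API_KEY}` are resolved from the environment when the config is loaded, so secrets never need to be written into the file itself. A minimal Python sketch of that pattern (illustrative only; the gateway itself is Go, and the function name here is made up):

```python
import os
import re

def interpolate_env(text: str) -> str:
    """Replace ${VAR} placeholders with values from the environment.

    Unset variables become empty strings here; a real loader
    might raise an error instead.
    """
    return re.sub(r"\$\{(\w+)\}", lambda m: os.environ.get(m.group(1), ""), text)

os.environ["OPENAI_API_KEY"] = "sk-demo"
print(interpolate_env('api_key: "${OPENAI_API_KEY}"'))
# api_key: "sk-demo"
```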
### Provider config examples
```yaml
providers:
  openai:
    api_key: "${OPENAI_API_KEY}"
    api_format: "openai"
    models: [gpt-4o, gpt-4o-mini]
  anthropic:
    api_key: "${ANTHROPIC_API_KEY}"
    api_format: "anthropic"
    models: [claude-opus-4-5, claude-sonnet-4-5]
  gemini:
    api_key: "${GOOGLE_API_KEY}"
    api_format: "gemini"
    models: [gemini-2.0-flash, gemini-1.5-pro]
```

```yaml
providers:
  bedrock:
    api_key: "${AWS_SECRET_ACCESS_KEY}"
    api_format: "bedrock"
    region: "${AWS_REGION}"
    access_key: "${AWS_ACCESS_KEY_ID}"
    models: [anthropic.claude-3-5-sonnet-20241022-v2:0]
```

```yaml
providers:
  vertex:
    base_url: "https://us-central1-aiplatform.googleapis.com"
    api_key: "${GOOGLE_ACCESS_TOKEN}"
    api_format: "gemini"
    headers:
      x-gcp-project: "${GCP_PROJECT_ID}"
      x-gcp-location: "us-central1"
    models: [gemini-2.0-flash-001]
```

Vertex uses a Bearer token, not an API key. Rotate `GOOGLE_ACCESS_TOKEN` via a sidecar calling `gcloud auth print-access-token`.
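Conceptually, the `models` lists double as a routing table: a request for a given model is dispatched to whichever provider lists it. A hypothetical Python sketch of that lookup (the real gateway is Go, and its routing rules are richer; this just illustrates the idea):

```python
def resolve_provider(providers: dict, model: str) -> str:
    """Return the first configured provider whose models list contains `model`."""
    for name, cfg in providers.items():
        if model in cfg.get("models", []):
            return name
    raise KeyError(f"no provider configured for model {model!r}")

providers = {
    "openai": {"models": ["gpt-4o", "gpt-4o-mini"]},
    "anthropic": {"models": ["claude-opus-4-5", "claude-sonnet-4-5"]},
}
print(resolve_provider(providers, "claude-sonnet-4-5"))  # anthropic
```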
For routing rules, rate limits, caching, and the full config reference, see Agent Command Center → Self-hosted.
## PeerDB (Postgres → ClickHouse CDC)
PeerDB continuously replicates Postgres tables into ClickHouse so trace and eval analytics stay fast.
First-boot timing issue: `peerdb-init` runs immediately on startup, possibly before Django migrations have completed. If mirrors show "not started" in the PeerDB UI:

```bash
# 1. Wait until backend logs "Application startup complete"
docker compose logs -f backend

# 2. Re-run init
docker compose run --rm peerdb-init bash /setup.sh
```

Verify at http://localhost:3001; mirrors should show as running within seconds.

After any upgrade that touches replicated tables, re-run the same init command.
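If you want to script the wait rather than watch the logs by hand, the readiness check reduces to looking for the startup marker in the backend's log output. A hypothetical sketch (the marker string is the one quoted above; the wrapper around `docker compose` is an assumption, not project tooling):

```python
import subprocess
import time

STARTUP_MARKER = "Application startup complete"

def backend_ready(log_text: str) -> bool:
    """True once the backend has logged its startup marker."""
    return STARTUP_MARKER in log_text

def wait_and_rerun_init(poll_seconds: int = 5) -> None:
    """Poll backend logs until ready, then re-run peerdb-init (hypothetical helper)."""
    while True:
        logs = subprocess.run(
            ["docker", "compose", "logs", "backend"],
            capture_output=True, text=True,
        ).stdout
        if backend_ready(logs):
            break
        time.sleep(poll_seconds)
    subprocess.run(
        ["docker", "compose", "run", "--rm", "peerdb-init", "bash", "/setup.sh"]
    )
```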
## Temporal workers
- **Default (all-queue):** one worker polls all task queues. Controlled by `TEMPORAL_ALL_QUEUES=true` in `.env`. Good for self-hosted deployments.
- **Per-queue workers (dev mode):** six dedicated workers via the dev overlay:
| Service name | Queue | Typical concurrency |
|---|---|---|
| worker-default | default | 100 |
| worker-tasks-s | tasks_s | 200 |
| worker-tasks-l | tasks_l | 50 |
| worker-tasks-xl | tasks_xl | 10 |
| worker-trace-ingestion | trace_ingestion | 100 |
| worker-agent-compass | agent_compass | 50 |
Tune concurrency in `.env` via `TEMPORAL_MAX_CONCURRENT_ACTIVITIES` and `TEMPORAL_MAX_CONCURRENT_WORKFLOW_TASKS`.
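For a sense of how such settings are typically consumed, here is an illustrative sketch of reading them as integers with fallback defaults (the helper and the default values are assumptions for illustration, not the project's actual code):

```python
import os

def env_int(name: str, default: int) -> int:
    """Read an integer setting from the environment, falling back to a default."""
    raw = os.environ.get(name)
    return int(raw) if raw else default

# Hypothetical defaults; a worker would pass these to its Temporal worker options.
max_activities = env_int("TEMPORAL_MAX_CONCURRENT_ACTIVITIES", 100)
max_workflow_tasks = env_int("TEMPORAL_MAX_CONCURRENT_WORKFLOW_TASKS", 100)
```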
Temporal UI (dev mode): http://localhost:8085