# Supported providers

All LLM providers Agent Command Center supports, how to add them, and how to switch providers at request time.

## About
Agent Command Center supports 20+ cloud and self-hosted LLM providers through a unified OpenAI-compatible API. Add a provider once with its API key, then switch between providers by changing the model name in your request.
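Conceptually, the gateway resolves the provider from the model name on each request. A toy sketch of that lookup (the table contents and function here are illustrative, not the actual implementation):

```python
# Illustrative only: a toy model-name -> provider lookup, mimicking how a
# gateway might route requests. The table contents are example data.
PROVIDER_MODELS = {
    "openai": ["gpt-4o", "gpt-4o-mini"],
    "anthropic": ["claude-sonnet-4-6"],
    "gemini": ["gemini-2.0-flash"],
}

def resolve_provider(model: str) -> str:
    """Return the provider configured to serve the given model name."""
    for provider, models in PROVIDER_MODELS.items():
        if model in models:
            return provider
    raise KeyError(f"no provider configured for model {model!r}")

print(resolve_provider("claude-sonnet-4-6"))  # anthropic
```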
## Cloud providers
| Provider | Type | api_format | Auth | Notes |
|---|---|---|---|---|
| OpenAI | openai | openai | API key | Native format |
| Anthropic | anthropic | anthropic | API key | Auto-translated to OpenAI format |
| Google Gemini | gemini | gemini | API key | Auto-translated to OpenAI format |
| Google Vertex AI | vertexai | gemini | Bearer token | Uses GCP project/location headers |
| AWS Bedrock | bedrock | bedrock | SigV4 | Requires AWS region, cross-region failover supported |
| Azure OpenAI | azure | azure | API key | Requires api_version, supports Azure AD bearer auth |
| Cohere | cohere | cohere | API key | Auto-translated to OpenAI format |
| Groq | groq | openai | API key | OpenAI-compatible |
| Mistral AI | mistral | openai | API key | OpenAI-compatible |
| Together AI | together | openai | API key | OpenAI-compatible |
| Fireworks AI | fireworks | openai | API key | OpenAI-compatible |
| DeepInfra | deepinfra | openai | API key | OpenAI-compatible |
| Perplexity | perplexity | openai | API key | OpenAI-compatible |
| Cerebras | cerebras | openai | API key | OpenAI-compatible |
| xAI (Grok) | xai | openai | API key | OpenAI-compatible |
| OpenRouter | openrouter | openai | API key | OpenAI-compatible |
| Hugging Face | huggingface | openai | API key | Inference API |
| Anyscale | anyscale | openai | API key | OpenAI-compatible |
| Replicate | replicate | openai | API key | OpenAI-compatible |
Providers marked “OpenAI-compatible” use the same wire format as OpenAI, so no translation is needed. Providers with native formats (Anthropic, Gemini, Bedrock, Cohere) are translated automatically by Agent Command Center, so your code stays identical regardless of which provider handles the request.
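To make the translation concrete, here is a toy sketch of the kind of mapping involved when an OpenAI-style chat request targets Anthropic's Messages API (the system prompt moves to a top-level field, and `max_tokens` becomes mandatory). This is illustrative only, not Agent Command Center's actual translation layer:

```python
def openai_to_anthropic(req: dict) -> dict:
    """Toy translation of an OpenAI-style chat request into Anthropic's
    Messages API shape (illustrative, not the gateway's real code)."""
    # Anthropic takes the system prompt as a top-level field, not a message.
    system_parts = [m["content"] for m in req["messages"] if m["role"] == "system"]
    chat = [m for m in req["messages"] if m["role"] != "system"]
    out = {
        "model": req["model"],
        "messages": chat,
        # max_tokens is required by Anthropic; 1024 is an arbitrary default.
        "max_tokens": req.get("max_tokens", 1024),
    }
    if system_parts:
        out["system"] = "\n".join(system_parts)
    return out

translated = openai_to_anthropic({
    "model": "claude-sonnet-4-6",
    "messages": [
        {"role": "system", "content": "Be brief."},
        {"role": "user", "content": "Hello"},
    ],
})
```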
> **Tip:** Agent Command Center supports all models from each provider, including new releases. Use any model name your provider supports.
## Self-hosted providers
| Provider | Type | Notes |
|---|---|---|
| Ollama | ollama | Auto-discovers models from /v1/models |
| vLLM | vllm | Auto-discovers models from /v1/models |
| LM Studio | lmstudio | OpenAI-compatible |
| HuggingFace TGI | tgi | OpenAI-compatible |
| LocalAI | localai | OpenAI-compatible |
| Any OpenAI-compatible server | - | Works with any server implementing the OpenAI API |
> **Note:** Your self-hosted endpoint must be reachable from Agent Command Center. Use a tunnel (ngrok, Cloudflare Tunnel), a cloud VM with a public IP, or deploy behind a reverse proxy.
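Auto-discovery relies on the OpenAI-style `GET /v1/models` listing that servers like Ollama and vLLM expose. A rough sketch of what that involves (the payload below is made-up example data, not a real server response):

```python
import json

# Example /v1/models payload in the OpenAI list format (illustrative data).
payload = json.loads("""
{
  "object": "list",
  "data": [
    {"id": "llama3.1:8b", "object": "model"},
    {"id": "qwen2.5:7b", "object": "model"}
  ]
}
""")

# Auto-discovery boils down to collecting the model ids from the listing.
discovered = [m["id"] for m in payload["data"]]
print(discovered)  # ['llama3.1:8b', 'qwen2.5:7b']
```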
## Adding a provider
- Go to Agent Command Center > Providers in the Future AGI dashboard
- Click Add Provider
- Select the provider from the list
- Enter your API key and any required settings
- Click Save
You can also configure providers programmatically with the SDK:

```python
from agentcc import AgentCC

client = AgentCC(
    api_key="sk-agentcc-your-key",
    base_url="https://gateway.futureagi.com",
    control_plane_url="https://api.futureagi.com",
)

client.org_configs.create(
    org_id="your-org-id",
    config={
        "providers": {
            "openai": {
                "api_key": "sk-your-openai-key",
                "api_format": "openai",
                "models": ["gpt-4o", "gpt-4o-mini"],
            },
            "anthropic": {
                "api_key": "sk-ant-your-key",
                "api_format": "anthropic",
            },
        }
    },
)
```

```typescript
import { AgentCC } from "@futureagi/agentcc";

const client = new AgentCC({
  apiKey: "sk-agentcc-your-key",
  baseUrl: "https://gateway.futureagi.com",
  controlPlaneUrl: "https://api.futureagi.com",
});

await client.orgConfigs.create({
  orgId: "your-org-id",
  config: {
    providers: {
      openai: {
        api_key: "sk-your-openai-key",
        api_format: "openai",
        models: ["gpt-4o", "gpt-4o-mini"],
      },
      anthropic: {
        api_key: "sk-ant-your-key",
        api_format: "anthropic",
      },
    },
  },
});
```

> **Warning:** Provider API keys are stored encrypted and never exposed in API responses.
## Switching providers at request time
Change the model name to route to a different provider. Same code, same API, different LLM.
```python
from agentcc import AgentCC

client = AgentCC(
    api_key="sk-agentcc-your-key",
    base_url="https://gateway.futureagi.com",
)

# OpenAI
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
)

# Anthropic - same code, different model
response = client.chat.completions.create(
    model="claude-sonnet-4-6",
    messages=[{"role": "user", "content": "Hello"}],
)

# Google Gemini
response = client.chat.completions.create(
    model="gemini-2.0-flash",
    messages=[{"role": "user", "content": "Hello"}],
)
```

```python
from openai import OpenAI

# Works with the OpenAI SDK - just swap base_url and api_key
client = OpenAI(
    base_url="https://gateway.futureagi.com/v1",
    api_key="sk-agentcc-your-key",
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
)
```

```python
import litellm

response = litellm.completion(
    model="openai/gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
    api_key="sk-agentcc-your-key",
    base_url="https://gateway.futureagi.com/v1",
)
```

```shell
curl -X POST https://gateway.futureagi.com/v1/chat/completions \
  -H "Authorization: Bearer sk-agentcc-your-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```

## Self-hosted setup

Connect models running on your own infrastructure.
- Go to Agent Command Center > Providers
- Click Add Provider
- Enter your model’s public endpoint URL
- Enter the model name
- Click Save
You can also configure self-hosted providers with the SDK:

```python
from agentcc import AgentCC

client = AgentCC(
    api_key="sk-agentcc-your-key",
    base_url="https://gateway.futureagi.com",
    control_plane_url="https://api.futureagi.com",
)

client.org_configs.create(
    org_id="your-org-id",
    config={
        "providers": {
            "ollama": {
                "base_url": "https://your-ollama.example.com",
                "api_format": "openai",
                "type": "ollama",
                # models auto-discovered from /v1/models
            },
            "vllm": {
                "base_url": "https://your-vllm.example.com",
                "api_format": "openai",
                "type": "vllm",
                "models": ["meta-llama/Llama-3.1-8B-Instruct"],
            },
        }
    },
)
```

```typescript
import { AgentCC } from "@futureagi/agentcc";

const client = new AgentCC({
  apiKey: "sk-agentcc-your-key",
  baseUrl: "https://gateway.futureagi.com",
  controlPlaneUrl: "https://api.futureagi.com",
});

await client.orgConfigs.create({
  orgId: "your-org-id",
  config: {
    providers: {
      ollama: {
        base_url: "https://your-ollama.example.com",
        api_format: "openai",
        type: "ollama",
      },
      vllm: {
        base_url: "https://your-vllm.example.com",
        api_format: "openai",
        type: "vllm",
        models: ["meta-llama/Llama-3.1-8B-Instruct"],
      },
    },
  },
});
```

## Provider health
Agent Command Center monitors provider health automatically. It tracks response times, error rates, and availability. When a provider becomes unhealthy:
- The circuit breaker opens to stop sending requests to the failing provider
- Traffic fails over to healthy alternatives
- After a cooldown period, Agent Command Center sends probe requests to check recovery
- Once the provider responds successfully, it’s added back to the rotation
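The lifecycle above can be sketched as a minimal circuit breaker. This is a simplified illustration of the open → cooldown → probe → recover cycle; the thresholds, timing, and class are made up for the example and are not Agent Command Center's implementation:

```python
import time

class CircuitBreaker:
    """Toy circuit breaker illustrating the open -> cooldown -> probe cycle.
    Thresholds and timing here are illustrative only."""

    def __init__(self, failure_threshold: int = 3, cooldown_s: float = 30.0):
        self.failure_threshold = failure_threshold
        self.cooldown_s = cooldown_s
        self.failures = 0
        self.opened_at = None  # set when the circuit opens

    def record_failure(self) -> None:
        self.failures += 1
        if self.failures >= self.failure_threshold:
            self.opened_at = time.monotonic()  # open: stop sending traffic

    def record_success(self) -> None:
        self.failures = 0
        self.opened_at = None  # provider rejoins the rotation

    def allow_request(self) -> bool:
        if self.opened_at is None:
            return True  # closed: provider is healthy
        # open: only allow a probe once the cooldown has elapsed
        return time.monotonic() - self.opened_at >= self.cooldown_s

breaker = CircuitBreaker(failure_threshold=2, cooldown_s=0.01)
breaker.record_failure()
breaker.record_failure()           # threshold hit -> circuit opens
assert not breaker.allow_request() # traffic fails over elsewhere
time.sleep(0.02)                   # cooldown elapses -> probe allowed
assert breaker.allow_request()
breaker.record_success()           # probe succeeded -> back in rotation
```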
See Failover & circuit breaking for configuration details.