# Providers
Calliope CLI supports 13 AI providers. Switch between them instantly without changing your workflow.
## Supported Providers

### Cloud Providers
| Provider | Command | Default Model | API Key Variable |
|---|---|---|---|
| Anthropic | /provider anthropic | claude-sonnet-4-20250514 | ANTHROPIC_API_KEY |
| OpenAI | /provider openai | gpt-4o | OPENAI_API_KEY |
| Google | /provider google | gemini-2.0-flash | GOOGLE_API_KEY |
| Mistral | /provider mistral | mistral-large-latest | MISTRAL_API_KEY |
| Groq | /provider groq | llama-3.3-70b-versatile | GROQ_API_KEY |
| Together | /provider together | meta-llama/Llama-3.3-70B-Instruct-Turbo | TOGETHER_API_KEY |
| Fireworks | /provider fireworks | accounts/fireworks/models/llama-v3p3-70b-instruct | FIREWORKS_API_KEY |
| AI21 | /provider ai21 | jamba-1.5-large | AI21_API_KEY |
| HuggingFace | /provider huggingface | meta-llama/Llama-3.3-70B-Instruct | HUGGINGFACE_API_KEY |
| DeepSeek | /provider deepseek | deepseek-chat | DEEPSEEK_API_KEY |
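Calliope reads API keys from the environment at startup, so export a key for each provider you want available before launching. A minimal sketch (key values are placeholders; the `calliope` launch command is assumed from the session prompts shown below):

```bash
# Make two cloud providers available in the same session
export ANTHROPIC_API_KEY=sk-ant-...
export OPENAI_API_KEY=sk-...

# Start Calliope; only providers with valid keys become selectable
calliope
```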
### Gateway Providers
| Provider | Command | Description | API Key Variable |
|---|---|---|---|
| OpenRouter | /provider openrouter | Access any model via unified API | OPENROUTER_API_KEY |
| LiteLLM | /provider litellm | Self-hosted proxy to multiple providers | LITELLM_BASE_URL, LITELLM_API_KEY |
### Local Providers
| Provider | Command | Description | Configuration |
|---|---|---|---|
| Ollama | /provider ollama | Run models locally | OLLAMA_BASE_URL (default: localhost:11434) |
## Switching Providers

### During a Session
Switch providers at any time:
```
calliope> /provider google
Provider set to: google
calliope> /provider anthropic
Provider set to: anthropic
```

### View Available Providers
See which providers are configured:
```
calliope> /provider
Current: anthropic
Available: anthropic, google, openai, mistral
```

Only providers with valid API keys are shown.
### Auto Selection

Set the provider to `auto` to let Calliope choose the best available:

```
calliope> /provider auto
```

Priority order: Anthropic > OpenAI > Google > Mistral > OpenRouter > Together > Groq > Ollama > LiteLLM
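With multiple keys configured, `auto` resolves to the highest-priority provider that has a valid key. A hypothetical session, assuming both ANTHROPIC_API_KEY and GROQ_API_KEY are set:

```
calliope> /provider auto
Provider set to: anthropic
```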
## Configuring Models

### Set a Specific Model
Override the default model for your current provider:
```
calliope> /model claude-opus-4-20250514
Model set to: claude-opus-4-20250514
```

### View Current Model
```
calliope> /model
Model: claude-sonnet-4-20250514
```

Model availability varies by provider; see the per-provider lists under Provider Details below.
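Model settings apply to the active provider, so pinning a non-default model is a two-step sequence (a hypothetical session):

```
calliope> /provider openai
Provider set to: openai
calliope> /model gpt-4o-mini
Model set to: gpt-4o-mini
```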
## Provider Details

### Anthropic (Claude)
Best-in-class for coding tasks and tool use.
```bash
export ANTHROPIC_API_KEY=sk-ant-...
```

Available models:

- `claude-sonnet-4-20250514` (default) - Best balance of speed and capability
- `claude-opus-4-20250514` - Most capable
- `claude-3-5-haiku-20241022` - Fastest
Get an API key: console.anthropic.com
### OpenAI (GPT)
Industry standard with excellent general capabilities.
```bash
export OPENAI_API_KEY=sk-...
```

Available models:

- `gpt-4o` (default) - Multimodal flagship
- `gpt-4-turbo` - Previous-generation flagship
- `gpt-4o-mini` - Smaller, faster
Get an API key: platform.openai.com
### Google (Gemini)
Google’s flagship AI models.
```bash
export GOOGLE_API_KEY=...
```

Available models:

- `gemini-2.0-flash` (default) - Fast and capable
- `gemini-1.5-pro` - Larger context window
- `gemini-1.5-flash` - Balanced
Get an API key: aistudio.google.com
### Mistral
European AI company with strong open-weight models.
```bash
export MISTRAL_API_KEY=...
```

Available models:

- `mistral-large-latest` (default) - Most capable
- `codestral-latest` - Optimized for code
- `mistral-medium-latest` - Balanced
Get an API key: console.mistral.ai
### Groq
Ultra-fast inference on open models.
```bash
export GROQ_API_KEY=...
```

Available models:

- `llama-3.3-70b-versatile` (default) - Llama 3.3
- `mixtral-8x7b-32768` - Mixtral
Get an API key: console.groq.com
### Together
Platform for open-source models.
```bash
export TOGETHER_API_KEY=...
```

Available models:

- `meta-llama/Llama-3.3-70B-Instruct-Turbo` (default)
- Many other open models available
Get an API key: api.together.xyz
### OpenRouter
Unified API to access any model from any provider.
```bash
export OPENROUTER_API_KEY=...
```

Model format: `provider/model-name`

- `anthropic/claude-sonnet-4` (default)
- `openai/gpt-4o`
- `google/gemini-2.0-flash`
Get an API key: openrouter.ai
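Because OpenRouter routes on the `provider/model-name` format, any model in its catalog can be selected with the usual `/model` command. A hypothetical session (exact slugs depend on OpenRouter's current catalog):

```
calliope> /provider openrouter
Provider set to: openrouter
calliope> /model google/gemini-2.0-flash
Model set to: google/gemini-2.0-flash
```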
### Ollama (Local)
Run models locally on your machine.
```bash
# Install Ollama first: https://ollama.ai
export OLLAMA_BASE_URL=http://localhost:11434
```

Pull a model:

```bash
ollama pull llama3.3
```

Available models: any model you pull with `ollama pull`

- `llama3.3` (default)
- `codellama`
- `mistral`
- Many more at ollama.ai/library
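If Calliope can't reach Ollama, verify the local server first. A quick sketch (`ollama serve` and the `/api/tags` endpoint are standard Ollama tooling, not Calliope-specific):

```bash
# Start the server if it isn't already running
ollama serve &

# List locally pulled models; a JSON response confirms the server is up
curl http://localhost:11434/api/tags
```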
### DeepSeek
Chinese AI company known for strong coding and reasoning capabilities.
```bash
export DEEPSEEK_API_KEY=...
```

Available models:

- `deepseek-chat` (default) - General chat model
- `deepseek-coder` - Optimized for coding tasks
- `deepseek-reasoner` - Enhanced reasoning
Get an API key: platform.deepseek.com
### LiteLLM (Proxy)
Self-hosted proxy that provides a unified interface to multiple providers.
```bash
export LITELLM_BASE_URL=http://localhost:4000
export LITELLM_API_KEY=...  # Optional
```

Setup: see the LiteLLM documentation.
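For a quick local test, LiteLLM's single-model quick start is enough to verify the connection. A minimal sketch based on LiteLLM's quick-start docs (see their documentation for a production config file):

```bash
# Install the proxy extras and serve one model on the port Calliope expects
pip install 'litellm[proxy]'
litellm --model gpt-4o --port 4000

# In another shell, point Calliope at the proxy
export LITELLM_BASE_URL=http://localhost:4000
```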
## Provider Priority

When using the `auto` provider, or when your preferred provider is unavailable, Calliope falls back in this order:
1. Anthropic - Preferred for coding tasks
2. OpenAI - Strong general capabilities
3. Google - Good multimodal support
4. Mistral - European alternative
5. OpenRouter - Access to many models
6. Together - Open-source models
7. Groq - Fast inference
8. Ollama - Local fallback
9. LiteLLM - Proxy fallback
## Tips

### Use Different Providers for Different Tasks
```
# Complex reasoning
calliope> /provider anthropic
calliope> Help me design a microservices architecture

# Quick questions
calliope> /provider groq
calliope> What's the syntax for a Python list comprehension?

# Local/offline work
calliope> /provider ollama
calliope> Review this code
```

### Cost Optimization
- Use Groq or Together for simple tasks (often cheaper)
- Use Ollama for development/testing (free)
- Reserve Anthropic/OpenAI for complex tasks
### Fallback Strategy
Configure multiple providers so you have fallbacks:
```bash
export ANTHROPIC_API_KEY=...  # Primary
export OPENAI_API_KEY=...     # Backup
export OLLAMA_BASE_URL=...    # Offline fallback
```
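With several keys exported, `/provider` confirms which fallbacks are actually live. Hypothetical output for the setup above:

```
calliope> /provider
Current: anthropic
Available: anthropic, openai, ollama
```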