Providers

Calliope CLI supports more than a dozen AI providers across cloud, gateway, and local backends. Switch between them instantly without changing your workflow.

Supported Providers

Cloud Providers

| Provider | Command | Default Model | API Key Variable |
| --- | --- | --- | --- |
| Anthropic | /provider anthropic | claude-sonnet-4-20250514 | ANTHROPIC_API_KEY |
| OpenAI | /provider openai | gpt-4o | OPENAI_API_KEY |
| Google | /provider google | gemini-2.0-flash | GOOGLE_API_KEY |
| Mistral | /provider mistral | mistral-large-latest | MISTRAL_API_KEY |
| Groq | /provider groq | llama-3.3-70b-versatile | GROQ_API_KEY |
| Together | /provider together | meta-llama/Llama-3.3-70B-Instruct-Turbo | TOGETHER_API_KEY |
| Fireworks | /provider fireworks | accounts/fireworks/models/llama-v3p3-70b-instruct | FIREWORKS_API_KEY |
| AI21 | /provider ai21 | jamba-1.5-large | AI21_API_KEY |
| HuggingFace | /provider huggingface | meta-llama/Llama-3.3-70B-Instruct | HUGGINGFACE_API_KEY |
| DeepSeek | /provider deepseek | deepseek-chat | DEEPSEEK_API_KEY |

Gateway Providers

| Provider | Command | Description | API Key Variable |
| --- | --- | --- | --- |
| OpenRouter | /provider openrouter | Access any model via a unified API | OPENROUTER_API_KEY |
| LiteLLM | /provider litellm | Self-hosted proxy to multiple providers | LITELLM_BASE_URL, LITELLM_API_KEY |

Local Providers

| Provider | Command | Description | Configuration |
| --- | --- | --- | --- |
| Ollama | /provider ollama | Run models locally | OLLAMA_BASE_URL (default: localhost:11434) |

Switching Providers

During a Session

Switch providers at any time:

calliope> /provider google
Provider set to: google

calliope> /provider anthropic
Provider set to: anthropic

View Available Providers

See which providers are configured:

calliope> /provider
Current: anthropic
Available: anthropic, google, openai, mistral

Only providers with valid API keys are shown.
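Availability is driven by the environment variables in the tables above. As an illustration of that check (the function name list_configured_providers is hypothetical, not part of Calliope), a shell sketch might look like:

```shell
#!/bin/sh
# Print one line per provider whose API key variable is set.
# Variable names match the tables above; the function itself is
# an illustrative sketch, not Calliope's actual implementation.
list_configured_providers() {
  [ -n "$ANTHROPIC_API_KEY" ] && echo anthropic
  [ -n "$OPENAI_API_KEY" ]    && echo openai
  [ -n "$GOOGLE_API_KEY" ]    && echo google
  [ -n "$MISTRAL_API_KEY" ]   && echo mistral
  [ -n "$GROQ_API_KEY" ]      && echo groq
  return 0
}
```

Exporting a key before launching Calliope is all it takes for that provider to appear in the Available list.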

Auto Selection

Set the provider to auto to let Calliope choose the best available one:

calliope> /provider auto

Priority order: Anthropic > OpenAI > Google > Mistral > OpenRouter > Together > Groq > Ollama > LiteLLM
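The priority walk above amounts to "first configured provider wins". A minimal shell sketch of that logic (auto_select_provider is an illustrative name, not a Calliope command):

```shell
#!/bin/sh
# Return the first provider in priority order whose credentials are set.
# Illustrative sketch of the auto-selection logic described above.
auto_select_provider() {
  [ -n "$ANTHROPIC_API_KEY" ]  && { echo anthropic;  return; }
  [ -n "$OPENAI_API_KEY" ]     && { echo openai;     return; }
  [ -n "$GOOGLE_API_KEY" ]     && { echo google;     return; }
  [ -n "$MISTRAL_API_KEY" ]    && { echo mistral;    return; }
  [ -n "$OPENROUTER_API_KEY" ] && { echo openrouter; return; }
  [ -n "$TOGETHER_API_KEY" ]   && { echo together;   return; }
  [ -n "$GROQ_API_KEY" ]       && { echo groq;       return; }
  [ -n "$OLLAMA_BASE_URL" ]    && { echo ollama;     return; }
  [ -n "$LITELLM_BASE_URL" ]   && { echo litellm;    return; }
  echo none
  return 1
}
```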

Configuring Models

Set a Specific Model

Override the default model for your current provider:

calliope> /model claude-opus-4-20250514
Model set to: claude-opus-4-20250514

View Current Model

calliope> /model
Model: claude-sonnet-4-20250514

Model Availability by Provider

Model names vary by provider. Consult each provider’s documentation for available models.

Provider Details

Anthropic (Claude)

Best-in-class for coding tasks and tool use.

export ANTHROPIC_API_KEY=sk-ant-...

Available models:

  • claude-sonnet-4-20250514 (default) - Best balance of speed and capability
  • claude-opus-4-20250514 - Most capable
  • claude-3-5-haiku-20241022 - Fastest

Get an API key: console.anthropic.com

OpenAI (GPT)

Industry standard with excellent general capabilities.

export OPENAI_API_KEY=sk-...

Available models:

  • gpt-4o (default) - Multimodal flagship
  • gpt-4-turbo - Previous generation flagship
  • gpt-4o-mini - Smaller, faster

Get an API key: platform.openai.com

Google (Gemini)

Google’s flagship AI models.

export GOOGLE_API_KEY=...

Available models:

  • gemini-2.0-flash (default) - Fast and capable
  • gemini-1.5-pro - Larger context window
  • gemini-1.5-flash - Balanced

Get an API key: aistudio.google.com

Mistral

European AI company with strong open-weight models.

export MISTRAL_API_KEY=...

Available models:

  • mistral-large-latest (default) - Most capable
  • codestral-latest - Optimized for code
  • mistral-medium-latest - Balanced

Get an API key: console.mistral.ai

Groq

Ultra-fast inference on open models.

export GROQ_API_KEY=...

Available models:

  • llama-3.3-70b-versatile (default) - Llama 3.3
  • mixtral-8x7b-32768 - Mixtral

Get an API key: console.groq.com

Together

Platform for open-source models.

export TOGETHER_API_KEY=...

Available models:

  • meta-llama/Llama-3.3-70B-Instruct-Turbo (default)
  • Many open models available

Get an API key: api.together.xyz

OpenRouter

Unified API to access any model from any provider.

export OPENROUTER_API_KEY=...

Model format: provider/model-name

  • anthropic/claude-sonnet-4 (default)
  • openai/gpt-4o
  • google/gemini-2.0-flash

Get an API key: openrouter.ai
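Because every OpenRouter model string follows the provider/model-name shape, the two parts can be split with plain POSIX parameter expansion, useful in wrapper scripts. A small sketch:

```shell
#!/bin/sh
# Split an OpenRouter model string ("provider/model-name") into its
# two components using POSIX parameter expansion. Illustrative only.
model="anthropic/claude-sonnet-4"
upstream="${model%%/*}"   # everything before the first slash
name="${model#*/}"        # everything after the first slash
echo "$upstream $name"    # prints: anthropic claude-sonnet-4
```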

Ollama (Local)

Run models locally on your machine.

# Install Ollama first: https://ollama.ai
export OLLAMA_BASE_URL=http://localhost:11434

Pull a model:

ollama pull llama3.3

Available models: Any model you pull with ollama pull

DeepSeek

Chinese AI company known for strong coding and reasoning capabilities.

export DEEPSEEK_API_KEY=...

Available models:

  • deepseek-chat (default) - General chat model
  • deepseek-coder - Optimized for coding tasks
  • deepseek-reasoner - Enhanced reasoning

Get an API key: platform.deepseek.com

LiteLLM (Proxy)

Self-hosted proxy that provides a unified interface to multiple providers.

export LITELLM_BASE_URL=http://localhost:4000
export LITELLM_API_KEY=...  # Optional

Setup: See LiteLLM documentation

Provider Priority

When using the auto provider, or when your preferred provider is unavailable, Calliope falls back in this order:

  1. Anthropic - Preferred for coding tasks
  2. OpenAI - Strong general capabilities
  3. Google - Good multimodal support
  4. Mistral - European alternative
  5. OpenRouter - Access to many models
  6. Together - Open-source models
  7. Groq - Fast inference
  8. Ollama - Local fallback
  9. LiteLLM - Proxy fallback

Tips

Use Different Providers for Different Tasks

# Complex reasoning
calliope> /provider anthropic
calliope> Help me design a microservices architecture

# Quick questions
calliope> /provider groq
calliope> What's the syntax for a Python list comprehension?

# Local/offline work
calliope> /provider ollama
calliope> Review this code

Cost Optimization

  • Use Groq or Together for simple tasks (often cheaper)
  • Use Ollama for development/testing (free)
  • Reserve Anthropic/OpenAI for complex tasks

Fallback Strategy

Configure multiple providers so you have fallbacks:

export ANTHROPIC_API_KEY=...  # Primary
export OPENAI_API_KEY=...     # Backup
export OLLAMA_BASE_URL=...    # Offline fallback
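A quick way to confirm the fallback chain is actually configured before starting a session is to check the three variables above; check_fallbacks here is a hypothetical helper, not a Calliope command:

```shell
#!/bin/sh
# Report which of the fallback variables from the exports above are
# missing from the environment. Illustrative sketch only.
check_fallbacks() {
  missing=""
  [ -z "$ANTHROPIC_API_KEY" ] && missing="$missing anthropic"
  [ -z "$OPENAI_API_KEY" ]    && missing="$missing openai"
  [ -z "$OLLAMA_BASE_URL" ]   && missing="$missing ollama"
  if [ -n "$missing" ]; then
    echo "unconfigured:$missing"
  else
    echo "all fallbacks configured"
  fi
}
```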