# Choosing the Right Model
OpenClaw supports multiple LLM providers, giving you flexibility in cost, speed, and capability. This guide helps you choose the right model for your needs.
## Model Comparison
| Model | Best For | Cost (per 1M input tokens) | Speed |
|---|---|---|---|
| Claude 3.5 Sonnet | Coding, complex reasoning | $3.00 | Fast |
| Claude 3.5 Haiku | Quick tasks, simple queries | $0.25 | Very Fast |
| GPT-4o | Vision, general purpose | $2.50 | Fast |
| GPT-4o-mini | High volume, simple tasks | $0.15 | Very Fast |
| Llama 3 (local) | Privacy, offline use | Free | Varies |
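To see what these prices mean in practice, here is a small sketch that turns the per-1M-input-token prices from the table above into a monthly estimate. The model keys and the `monthly_cost` helper are illustrative, not part of OpenClaw, and output-token pricing (which differs per model) is ignored.

```python
# Rough monthly-cost estimate from the per-1M-input-token prices in the
# table above. Output-token pricing is deliberately ignored for simplicity.
PRICE_PER_M_INPUT = {
    "claude-3-5-sonnet": 3.00,
    "claude-3-5-haiku": 0.25,
    "gpt-4o": 2.50,
    "gpt-4o-mini": 0.15,
    "llama-3-local": 0.00,
}

def monthly_cost(model: str, tokens_per_day: int, days: int = 30) -> float:
    """Estimated USD cost for `tokens_per_day` input tokens over `days` days."""
    return PRICE_PER_M_INPUT[model] * tokens_per_day * days / 1_000_000

# 500k input tokens/day for a month:
print(f"{monthly_cost('claude-3-5-sonnet', 500_000):.2f}")  # 45.00
print(f"{monthly_cost('claude-3-5-haiku', 500_000):.2f}")   # 3.75
```

At half a million input tokens a day, the Sonnet-vs-Haiku gap is roughly $45 vs $4 per month, which is the intuition behind the cost tip below.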
## When to Use Each Model

### Claude 3.5 Sonnet
Use for complex tasks requiring deep reasoning:
- Code review and debugging
- Technical writing and documentation
- Complex problem solving
- Long-form content creation
### Claude 3.5 Haiku
Perfect for everyday tasks where speed matters:
- Quick questions and clarifications
- Simple code suggestions
- Summarization
- High-volume chat
> **Cost Tip:** Haiku costs roughly 12x less than Sonnet per input token. Default to Haiku for simple tasks and upgrade to Sonnet only when a task genuinely needs deeper reasoning.
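One way to act on this tip is a routing heuristic that escalates to Sonnet only when a prompt looks complex. The `pick_model` helper and its keyword list below are hypothetical, not an OpenClaw feature; the model IDs are the ones used elsewhere in this guide.

```python
# Hypothetical routing heuristic: send short, simple prompts to Haiku and
# escalate to Sonnet when the prompt is long or mentions heavyweight work.
HAIKU = "claude-3-5-haiku-20241022"
SONNET = "claude-3-5-sonnet-20241022"

COMPLEX_HINTS = ("debug", "review", "refactor", "prove", "architecture")

def pick_model(prompt: str, max_simple_words: int = 100) -> str:
    words = prompt.lower().split()
    if len(words) > max_simple_words:
        return SONNET
    if any(hint in words for hint in COMPLEX_HINTS):
        return SONNET
    return HAIKU

print(pick_model("summarize this paragraph"))             # Haiku
print(pick_model("review this diff and find the bug"))    # Sonnet
```

Real routers usually go further (classifying with a cheap model first, or retrying failures on the bigger model), but even a crude rule like this captures most of the savings.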
### GPT-4o
Best for multimodal tasks:
- Image analysis and description
- Vision-based queries
- General purpose assistance
### Local Models (Ollama)
Use when privacy is critical:
- Sensitive data processing
- Offline environments
- Zero cost requirement
## Configuring Your Model
```bash
# Set default model
openclaw config set model claude-3-5-sonnet-20241022
```

Or set it in the config file:

```yaml
# ~/.config/openclaw/config.yaml
llm:
  provider: "anthropic"
  model: "claude-3-5-haiku-20241022"
  apiKey: "${ANTHROPIC_API_KEY}"
```
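The `${ANTHROPIC_API_KEY}` placeholder suggests the config file supports environment-variable expansion; that is an assumption about OpenClaw's behavior, but the usual mechanics look like this with Python's standard library:

```python
import os

# Assumption: OpenClaw expands ${VAR} placeholders from the environment,
# much like os.path.expandvars handles $VAR / ${VAR} syntax.
os.environ["ANTHROPIC_API_KEY"] = "sk-ant-example"

raw = 'apiKey: "${ANTHROPIC_API_KEY}"'
print(os.path.expandvars(raw))  # apiKey: "sk-ant-example"
```

Keeping the key in the environment rather than in the file means the config can be committed or shared without leaking credentials.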
## Per-Agent Model Selection
Different agents can use different models:
```yaml
# config.yaml
agents:
  - id: "quick-helper"
    model: "claude-3-5-haiku-20241022"
    temperature: 0.5
  - id: "expert-coder"
    model: "claude-3-5-sonnet-20241022"
    temperature: 0.1
```
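Conceptually, per-agent selection is just a lookup with a fallback to the global default. A minimal sketch, with the agent list mirroring the YAML above as plain dicts and `model_for` as a hypothetical helper:

```python
# Mirrors the per-agent YAML above; falls back to the global default model
# when an agent is unknown or does not override the model.
DEFAULT_MODEL = "claude-3-5-sonnet-20241022"

AGENTS = [
    {"id": "quick-helper", "model": "claude-3-5-haiku-20241022", "temperature": 0.5},
    {"id": "expert-coder", "model": "claude-3-5-sonnet-20241022", "temperature": 0.1},
]

def model_for(agent_id: str) -> str:
    for agent in AGENTS:
        if agent["id"] == agent_id:
            return agent.get("model", DEFAULT_MODEL)
    return DEFAULT_MODEL

print(model_for("quick-helper"))  # claude-3-5-haiku-20241022
print(model_for("unknown"))       # claude-3-5-sonnet-20241022
```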
## Switching Models
```bash
# Switch for one session
openclaw chat --model claude-3-5-haiku-20241022

# Or use shorthand
openclaw chat --model haiku
```
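The shorthand presumably maps friendly names onto full, dated model IDs. A sketch of how such an alias table might resolve; the mapping here is illustrative (only `haiku` is shown in the commands above), and full IDs pass through unchanged:

```python
# Illustrative alias table: short names resolve to full, dated model IDs;
# anything not in the table is assumed to already be a full ID.
ALIASES = {
    "haiku": "claude-3-5-haiku-20241022",
    "sonnet": "claude-3-5-sonnet-20241022",
}

def resolve_model(name: str) -> str:
    return ALIASES.get(name, name)

print(resolve_model("haiku"))                       # claude-3-5-haiku-20241022
print(resolve_model("claude-3-5-sonnet-20241022"))  # unchanged
```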
## Cost Optimization Tips
- Start with Haiku/mini for simple queries
- Use context limits to avoid unnecessary processing
- Cache frequently used information
- Consider local models for privacy-sensitive tasks