# Tutorial

**Performance matters:** a well-tuned OpenClaw instance responds faster, uses fewer resources, and costs less in API fees.
## Performance Factors
OpenClaw's performance depends on:
- **Model response time:** faster models return answers sooner
- **Context size:** a larger context means more processing time
- **Memory usage:** each active session consumes RAM
- **Network latency:** distance to the API servers adds round-trip delay
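These factors can be combined into a back-of-envelope latency estimate. All throughput and latency numbers below are illustrative assumptions for the sketch, not measured OpenClaw figures:

```python
# Back-of-envelope estimate of response latency from the factors above.
# The default rates are illustrative assumptions, not OpenClaw measurements.

def estimate_latency_s(context_tokens: int,
                       output_tokens: int,
                       network_rtt_s: float = 0.1,
                       prefill_tok_per_s: float = 10_000.0,
                       decode_tok_per_s: float = 50.0) -> float:
    """Rough model: network round trip + prompt processing + token generation."""
    prefill = context_tokens / prefill_tok_per_s
    decode = output_tokens / decode_tok_per_s
    return network_rtt_s + prefill + decode

# At these assumed rates, a 100k-token context alone adds ~10s of prompt
# processing, which is why the context-trimming settings below pay off.
print(round(estimate_latency_s(100_000, 500), 1))  # → 20.1
```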
## Model Selection

Choose faster models for quick interactions:

```shell
openclaw config set model.default="claude-3-5-haiku"
openclaw config set model.thinking="medium"  # Balance speed and quality
```

## Context Optimization
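The settings in this section cap how much conversation history is kept. As a rough illustration of what message-count capping and token-threshold pruning could look like (a sketch only, not OpenClaw's actual pruning logic; the 4-characters-per-token ratio is a crude assumption):

```python
# Illustrative context pruning: drop the oldest messages when limits
# are exceeded. A sketch of the idea, not OpenClaw's implementation.

MAX_MESSAGES = 50          # mirrors context.max_messages=50
PRUNE_THRESHOLD = 100_000  # mirrors context.prune_threshold (tokens)

def rough_tokens(text: str) -> int:
    # Crude assumption: ~4 characters per token.
    return max(1, len(text) // 4)

def prune(messages: list[str]) -> list[str]:
    # Enforce the message cap first, dropping the oldest messages.
    messages = messages[-MAX_MESSAGES:]
    # Then keep dropping the oldest until the token budget is met.
    while len(messages) > 1 and sum(map(rough_tokens, messages)) > PRUNE_THRESHOLD:
        messages = messages[1:]
    return messages

history = [f"message {i}: " + "x" * 100 for i in range(200)]
print(len(prune(history)))  # capped at 50 messages
```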
### Limit Active Messages

```shell
openclaw config set context.max_messages=50
```

### Enable Context Pruning
```shell
openclaw config set context.auto_prune=true
openclaw config set context.prune_threshold=100000
```

## Resource Limits
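The settings below cap memory and CPU usage. As a rough illustration of checking a process against such a memory cap (assuming `resources.max_memory` is in MB; verify the unit against the OpenClaw configuration reference):

```python
# Sketch: checking the current process against a memory budget.
# Assumes resources.max_memory is expressed in MB.
import resource
import sys

MAX_MEMORY_MB = 2048  # mirrors resources.max_memory=2048

def rss_mb() -> float:
    usage = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    # ru_maxrss is reported in KiB on Linux but in bytes on macOS.
    divisor = 1024 ** 2 if sys.platform == "darwin" else 1024
    return usage / divisor

over_budget = rss_mb() > MAX_MEMORY_MB
print(over_budget)  # False unless the process already exceeds the cap
```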
```shell
openclaw config set resources.max_memory=2048
openclaw config set resources.max_cpu=80
```

## Monitoring Performance
Check the running instance with:

```shell
openclaw status --verbose
```
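Beyond one-off status checks, you may want to track response latency over time. A self-contained sketch (the `check` function is a hypothetical stub; in practice you would time a real call such as `openclaw status`):

```python
# Sketch: sampling health-check latency and summarizing it.
# `check` is a stand-in stub, not part of the OpenClaw CLI.
import statistics
import time

def check() -> None:
    time.sleep(0.01)  # stub standing in for a real status call

samples = []
for _ in range(20):
    start = time.perf_counter()
    check()
    samples.append(time.perf_counter() - start)

# quantiles(n=20) yields 19 cut points; index 18 is the 95th percentile.
print(f"median={statistics.median(samples)*1000:.1f}ms "
      f"p95={statistics.quantiles(samples, n=20)[18]*1000:.1f}ms")
```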