Browse every available LLM — local Ollama models and cloud providers — and set the default for ɳClaw and the ai plugin.
```bash
# List all available models (local + cloud)
nself model list

# Get details on a specific model
nself model info llama3.2

# Set the default model for the ai plugin and ɳClaw
nself model set llama3.2

# Interactively pick a model from a menu
nself model swap
```

```bash
nself model <SUBCOMMAND> [MODEL] [FLAGS]
```

The ai plugin supports two model sources: Ollama (running locally alongside your stack) and cloud providers (OpenAI, Anthropic, Google, Groq, and others). nself model gives you a single interface to browse what is available, read model metadata, and choose the default model that ɳClaw and your custom AI features use at runtime.
The active model is stored in NSELF_AI_MODEL in your .env file. Changing it via nself model set takes effect immediately for new requests — no rebuild required.
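For scripts that need the same behavior outside the CLI, here is a minimal sketch of the kind of idempotent edit nself model set performs on a flat KEY=value env file (set_model is a hypothetical helper, not part of nself):

```shell
# set_model FILE MODEL — hypothetical helper: replace or append the
# NSELF_AI_MODEL line in a flat KEY=value env file, mimicking what
# `nself model set` does conceptually.
set_model() {
  file=$1
  model=$2
  if grep -q '^NSELF_AI_MODEL=' "$file"; then
    # Use | as the sed delimiter so cloud names like provider/name are safe
    sed -i.bak "s|^NSELF_AI_MODEL=.*|NSELF_AI_MODEL=$model|" "$file"
  else
    echo "NSELF_AI_MODEL=$model" >> "$file"
  fi
}
```

Because the value is rewritten in place, repeated calls leave exactly one NSELF_AI_MODEL line in the file.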
Cloud models require an API key in .env.secrets (e.g. OPENAI_API_KEY, ANTHROPIC_API_KEY). Local Ollama models require the Ollama service to be running, which ɳSelf manages automatically when ENABLE_OLLAMA=true.
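Scripts can tell the two sources apart from the model name alone, since cloud models are namespaced as provider/name while local Ollama models are bare names. A small sketch (classify_model is illustrative, not an nself command):

```shell
# classify_model NAME — print "cloud" for provider/name identifiers,
# "local" for bare Ollama model names.
classify_model() {
  case "$1" in
    */*) echo cloud ;;
    *)   echo local ;;
  esac
}
```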
Show all models available to the current stack, grouped by source. Marks the active model with an asterisk.
| Flag | Type | Default | Description |
|---|---|---|---|
| --local | bool | false | Show only Ollama local models |
| --cloud | bool | false | Show only cloud provider models |
| --json | bool | false | Output as JSON |
```bash
nself model list
# LOCAL (Ollama)
#   llama3.2           3.8 GB   7B params    ← active (*)
#   llama3.2:70b       40 GB    70B params
#   mistral            3.8 GB   7B params
#   nomic-embed-text   274 MB   embedding
#
# CLOUD
#   openai/gpt-4o                 requires: OPENAI_API_KEY
#   openai/gpt-4o-mini            requires: OPENAI_API_KEY
#   anthropic/claude-sonnet-4-6   requires: ANTHROPIC_API_KEY
#   anthropic/claude-haiku-4-5    requires: ANTHROPIC_API_KEY
#   google/gemini-1-5-pro         requires: GOOGLE_AI_KEY
#   groq/llama3-70b-8192          requires: GROQ_API_KEY
```

Print detailed metadata for a model: parameter count, context window, supported capabilities, disk size (local), and pricing (cloud).
```bash
nself model info llama3.2
# Name:         llama3.2
# Source:       Ollama (local)
# Parameters:   7 billion
# Context:      128,000 tokens
# Capabilities: chat, code, tool-use
# Disk:         3.8 GB
# Quantization: Q4_K_M
# License:      Meta Llama 3.2
```
```bash
nself model info anthropic/claude-sonnet-4-6
# Name:         claude-sonnet-4-6
# Provider:     Anthropic
# Context:      200,000 tokens
# Capabilities: chat, code, tool-use, vision
# Input price:  $3.00 / 1M tokens
# Output price: $15.00 / 1M tokens
# Key required: ANTHROPIC_API_KEY
```

Set the default model. Writes NSELF_AI_MODEL=<model> to your active env file. The change takes effect immediately for new AI requests from ɳClaw and the ai plugin — no restart needed.
| Flag | Type | Default | Description |
|---|---|---|---|
| --env | string | current | Set the model for a specific environment: dev, staging, prod |
```bash
nself model set llama3.2
# ✓ NSELF_AI_MODEL=llama3.2 written to .env.dev

nself model set anthropic/claude-sonnet-4-6
# ✓ NSELF_AI_MODEL=anthropic/claude-sonnet-4-6 written to .env.dev
# ✓ ANTHROPIC_API_KEY detected in .env.secrets
```

Open an interactive menu to pick a model with arrow keys. Useful when you want to compare options before committing.
```bash
nself model swap
# ? Select the default AI model:
# ❯ llama3.2         (local, 3.8 GB)
#   llama3.2:70b     (local, 40 GB)
#   mistral          (local, 3.8 GB)
#   ─────────────────
#   openai/gpt-4o    (cloud, $5.00/1M in)
#   anthropic/...    (cloud, $3.00/1M in)
```

To use a model that is not yet downloaded, pull it with Ollama first, then switch to it:
```bash
# Pull a new model (Ollama must be running)
docker exec $(nself ps --name ollama) ollama pull phi3

# Verify it appears
nself model list --local

# Set it as default
nself model set phi3
```

| Provider prefix | Env var required |
|---|---|
| openai/ | OPENAI_API_KEY |
| anthropic/ | ANTHROPIC_API_KEY |
| google/ | GOOGLE_AI_KEY |
| groq/ | GROQ_API_KEY |
| mistral/ | MISTRAL_API_KEY |
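If your own tooling needs the same mapping, the table translates directly into a lookup. A sketch (required_key is a hypothetical helper that simply mirrors the table; it prints nothing for local models):

```shell
# required_key MODEL — print the env var a cloud model needs, or nothing
# for a local Ollama model (no provider/ prefix).
required_key() {
  case "$1" in
    openai/*)    echo OPENAI_API_KEY ;;
    anthropic/*) echo ANTHROPIC_API_KEY ;;
    google/*)    echo GOOGLE_AI_KEY ;;
    groq/*)      echo GROQ_API_KEY ;;
    mistral/*)   echo MISTRAL_API_KEY ;;
    *)           : ;;  # local model: no key required
  esac
}
```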
Place API keys in .env.secrets (gitignored). Never put them in .env.dev or .env.prod.
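That rule is easy to enforce mechanically. A sketch of a guard you could run in CI (find_leaked_keys is a hypothetical helper; the key names are the ones from the table above):

```shell
# find_leaked_keys FILE... — print any of the given env files that define
# a provider API key; such keys belong in .env.secrets only.
find_leaked_keys() {
  grep -lE '^(OPENAI_API_KEY|ANTHROPIC_API_KEY|GOOGLE_AI_KEY|GROQ_API_KEY|MISTRAL_API_KEY)=' "$@" 2>/dev/null
}
```

Note that grep -l exits 0 when it finds a match (the bad case), so a CI step would invert the status, e.g. `! find_leaked_keys .env.dev .env.prod`.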
Environment variables:
- NSELF_AI_MODEL — the active default model (set by nself model set)
- ENABLE_OLLAMA — set to true to start the Ollama service in the stack
- OLLAMA_HOST — Ollama API endpoint (default: http://ollama:11434)

Exit codes:
- 0 — success
- 1 — model not found or API key missing
- 2 — invalid arguments
- 3 — ai plugin not installed (required for cloud models)

See also: ai plugin
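When wrapping nself model in scripts, the exit codes translate into actionable messages. A sketch (explain_exit is a hypothetical helper; the codes are the ones documented above):

```shell
# explain_exit CODE — map an `nself model` exit code to its meaning.
explain_exit() {
  case "$1" in
    0) echo "success" ;;
    1) echo "model not found or API key missing" ;;
    2) echo "invalid arguments" ;;
    3) echo "ai plugin not installed (required for cloud models)" ;;
    *) echo "unknown exit code: $1" ;;
  esac
}

# Typical use in a script:
#   nself model set "$MODEL" || explain_exit $? >&2
```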