ɳClaw exposes an OpenAI-compatible API gateway. Any application that already integrates with OpenAI can point to your server URL and a gateway key — no SDK changes required.
The gateway routes each request through the same multi-tier system used by the ɳClaw chat interface: local Ollama first, then free Gemini accounts, then a paid API key when needed. You pay only for what falls through to paid providers.
The gateway is mounted at /v1 on your ɳClaw server:

```
https://your-server.example.com/v1
```

In a default local setup the gateway is at `http://localhost:3713/v1`.
```python
from openai import OpenAI

client = OpenAI(
    base_url="https://your-server.example.com/v1",
    api_key="sk-nself-your-key-here",
)

response = client.chat.completions.create(
    model="auto",
    messages=[{"role": "user", "content": "Explain nSelf plugins"}],
)
print(response.choices[0].message.content)
```

```typescript
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://your-server.example.com/v1",
  apiKey: "sk-nself-your-key-here",
});

const response = await client.chat.completions.create({
  model: "auto",
  messages: [{ role: "user", content: "Explain nSelf plugins" }],
});
console.log(response.choices[0].message.content);
```

```shell
curl https://your-server.example.com/v1/chat/completions \
  -H "Authorization: Bearer sk-nself-your-key-here" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "auto",
    "messages": [{"role": "user", "content": "Explain nSelf plugins"}]
  }'
```

Add `"stream": true` to any request for SSE streaming. The response format matches the standard OpenAI chunk format.
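With the Python SDK, consuming the stream might look like the sketch below. The helper name `stream_chat` is ours, not part of ɳClaw; `client` is an `openai.OpenAI` client pointed at the gateway, configured as in the first example.

```python
def stream_chat(client, prompt):
    """Stream a chat completion from the gateway and return the full text.

    `client` is an openai.OpenAI client with base_url set to the gateway's
    /v1 endpoint and api_key set to a gateway key, as shown above.
    """
    stream = client.chat.completions.create(
        model="auto",
        stream=True,  # switches the gateway to SSE chunked responses
        messages=[{"role": "user", "content": prompt}],
    )
    parts = []
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:  # some chunks carry no content (role/finish chunks)
            print(delta, end="", flush=True)
            parts.append(delta)
    return "".join(parts)
```

Because the gateway emits standard OpenAI chunks, the SDK's streaming iterator works unchanged.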
```shell
curl https://your-server.example.com/v1/chat/completions \
  -H "Authorization: Bearer sk-nself-your-key-here" \
  -H "Content-Type: application/json" \
  -d '{"model":"auto","stream":true,"messages":[{"role":"user","content":"Hello"}]}'
```

Use `GET /v1/models` to list the models currently available on your server. The following model names are always accepted:
| Model name | Notes |
|---|---|
| `auto` | Automatic routing to the best available model given current tier capacity |
| `local` | Force the local Ollama model (no API cost) |
| `gpt-4o` | OpenAI GPT-4o (requires `OPENAI_API_KEY` on the server) |
| `gemini-1.5-flash` | Google Gemini Flash (free tier available via Gemini accounts) |
| `claude-3-5-sonnet` | Anthropic Claude (requires `ANTHROPIC_API_KEY` on the server) |
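Listing models is a plain authenticated GET, for example (same placeholder URL and key as above):

```shell
curl https://your-server.example.com/v1/models \
  -H "Authorization: Bearer sk-nself-your-key-here"
```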
The standard OpenAI embeddings endpoint is also available. ɳClaw uses Ollama's `nomic-embed-text` model locally and falls back to `text-embedding-3-small` if an OpenAI key is configured.
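Through the SDK, an embeddings call mirrors the chat call. A sketch with a helper name of our choosing; `client` is the `openai.OpenAI` client configured earlier:

```python
def embed_texts(client, texts):
    """Return one embedding vector per input string.

    `client` is an openai.OpenAI client pointed at the gateway;
    "nomic-embed-text" is served locally by Ollama per the note above.
    """
    resp = client.embeddings.create(model="nomic-embed-text", input=texts)
    return [item.embedding for item in resp.data]
```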
```shell
curl https://your-server.example.com/v1/embeddings \
  -H "Authorization: Bearer sk-nself-your-key-here" \
  -H "Content-Type: application/json" \
  -d '{"model":"nomic-embed-text","input":"nSelf plugin system"}'
```

These optional headers activate ɳClaw-specific features:
| Header | Value | Effect |
|---|---|---|
| `x-nself-app` | `true` | Inject the nSelf expert system prompt |
| `x-nself-admin` | `true` | Enable admin mode (key must have `admin_allowed: true`) |
| `x-nself-session-id` | `<uuid>` | Continue an existing chat session |
| `x-nself-knowledge` | `true` | Inject relevant nSelf knowledge base results into context |
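With the openai Python SDK, these headers can be attached per request via the SDK's `extra_headers` argument; a sketch using two of the headers from the table:

```python
# ɳClaw-specific headers from the table above.
nself_headers = {
    "x-nself-app": "true",        # inject the nSelf expert system prompt
    "x-nself-knowledge": "true",  # pull in knowledge base context
}

# With the client configured earlier, pass them on a single request:
# response = client.chat.completions.create(
#     model="auto",
#     messages=[{"role": "user", "content": "Explain nSelf plugins"}],
#     extra_headers=nself_headers,
# )
```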
Gateway API keys are separate from your nSelf internal credentials. Create and manage them with the ɳClaw CLI or the web UI.
```shell
# List keys
nself claw api keys list

# Create a key (60 req/min by default)
nself claw api keys create --name myapp

# Create an admin-capable key with a higher rate limit
nself claw api keys create --name admin-tool --admin --rpm 120

# Revoke a key
nself claw api keys revoke <id>
```

Each key has a per-minute rate limit (`rpm_limit`). Requests over the limit receive a `429 Too Many Requests` response with a `Retry-After` header.
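A caller can honor the `Retry-After` header with a small wrapper. This is a generic sketch assuming a requests-style response object (`.status_code`, `.headers`); nothing here is ɳClaw-specific:

```python
import time

def send_with_retry(send, max_attempts=3):
    """Call send() until it returns a non-429 response.

    `send` is any zero-argument callable returning a requests-style
    response. On 429, sleep for the number of seconds advertised in
    Retry-After (default 1) and retry, up to max_attempts calls.
    """
    for _ in range(max_attempts):
        resp = send()
        if resp.status_code != 429:
            return resp
        time.sleep(float(resp.headers.get("Retry-After", "1")))
    return resp  # still rate-limited after max_attempts
```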
Every request through the gateway is logged with the key ID, model, token counts, and timestamp. View usage with:
```shell
# All keys
nself claw api usage

# One key
nself claw api usage --key <id>

# JSON output for scripting
nself claw api usage --json
```

To expose the gateway publicly, add a server block to your `nginx/conf.d/` directory. The ɳClaw plugin binds internally on port 3713.
```nginx
# nginx/conf.d/claw-gateway.conf
server {
    listen 443 ssl;
    server_name api.yourdomain.com;

    # SSL managed by nself build
    ssl_certificate /etc/ssl/certs/fullchain.pem;
    ssl_certificate_key /etc/ssl/certs/privkey.pem;

    location /v1/ {
        proxy_pass http://127.0.0.1:3713/v1/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_http_version 1.1;

        # Required for SSE streaming
        proxy_buffering off;
        proxy_cache off;
        chunked_transfer_encoding on;
    }
}
```

After adding the file, run `nself restart nginx` to apply the change. SSL certificates are provisioned automatically by `nself build` when the domain is listed in your `.env`.
```shell
# Check that the endpoint is reachable
nself claw api test

# Test with a specific key
nself claw api test --key sk-nself-your-key

# Show available models
nself claw api test --verbose
```