ɳClaw API Gateway

ɳClaw exposes an OpenAI-compatible API gateway. Any application that already integrates with OpenAI can point to your server URL and a gateway key — no SDK changes required.

The gateway routes each request through the same multi-tier system used by the ɳClaw chat interface: local Ollama first, then free Gemini accounts, then a paid API key when needed. You pay only for what falls through to paid providers.

Base URL

The gateway is mounted at /v1 on your ɳClaw server:

https://your-server.example.com/v1

In a default local setup the gateway is at http://localhost:3713/v1.

Quick start

Python

from openai import OpenAI

client = OpenAI(
    base_url="https://your-server.example.com/v1",
    api_key="sk-nself-your-key-here",
)

response = client.chat.completions.create(
    model="auto",
    messages=[{"role": "user", "content": "Explain nSelf plugins"}],
)
print(response.choices[0].message.content)

JavaScript / TypeScript

import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://your-server.example.com/v1",
  apiKey: "sk-nself-your-key-here",
});

const response = await client.chat.completions.create({
  model: "auto",
  messages: [{ role: "user", content: "Explain nSelf plugins" }],
});
console.log(response.choices[0].message.content);

cURL

curl https://your-server.example.com/v1/chat/completions \
  -H "Authorization: Bearer sk-nself-your-key-here" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "auto",
    "messages": [{"role": "user", "content": "Explain nSelf plugins"}]
  }'

Streaming

Add "stream": true to any request for SSE streaming. The response format matches the standard OpenAI chunk format.

curl https://your-server.example.com/v1/chat/completions \
  -H "Authorization: Bearer sk-nself-your-key-here" \
  -H "Content-Type: application/json" \
  -d '{"model":"auto","stream":true,"messages":[{"role":"user","content":"Hello"}]}'
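When consuming the stream programmatically, the text arrives split across the chunks' delta fields. A minimal sketch of reassembling it client-side, assuming the standard OpenAI chunk shape (choices[].delta.content) and using a hypothetical join_stream_text helper:

```python
def join_stream_text(chunks):
    """Concatenate delta text from streamed chat-completion chunks.

    Assumes each chunk follows the standard OpenAI SSE chunk shape:
    {"choices": [{"delta": {"content": "..."}}]} — any of those fields
    may be absent (e.g. the role-only first chunk or the final chunk).
    """
    parts = []
    for chunk in chunks:
        for choice in chunk.get("choices", []):
            text = choice.get("delta", {}).get("content")
            if text:
                parts.append(text)
    return "".join(parts)
```

With the official SDKs you would instead iterate the returned stream object directly and print each chunk's delta content as it arrives.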

Available models

Use GET /v1/models to list models currently available on your server. The following model names are always accepted:

Model name           Notes
auto                 Automatic routing: best available model given current tier capacity
local                Force local Ollama model; no API cost
gpt-4o               OpenAI GPT-4o (requires OPENAI_API_KEY on server)
gemini-1.5-flash     Google Gemini Flash (free tier available via Gemini accounts)
claude-3-5-sonnet    Anthropic Claude (requires ANTHROPIC_API_KEY on server)
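If a client should prefer a specific paid model when the server has the matching key configured, but degrade gracefully otherwise, it can match a preference list against the result of GET /v1/models. A sketch using a hypothetical pick_model helper; with the OpenAI Python SDK the available names would come from client.models.list():

```python
def pick_model(available, preferred):
    """Return the first preferred model the server reports as available,
    falling back to "auto" (always accepted) when none are listed."""
    available = set(available)
    for name in preferred:
        if name in available:
            return name
    return "auto"
```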

Embeddings

The standard OpenAI embeddings endpoint is also available. ɳClaw uses the Ollama nomic-embed-text model locally and falls back to text-embedding-3-small if an OpenAI key is configured.

curl https://your-server.example.com/v1/embeddings \
  -H "Authorization: Bearer sk-nself-your-key-here" \
  -H "Content-Type: application/json" \
  -d '{"model":"nomic-embed-text","input":"nSelf plugin system"}'

Custom headers

These optional headers activate ɳClaw-specific features:

Header               Value    Effect
x-nself-app          true     Inject nSelf expert system prompt
x-nself-admin        true     Enable admin mode (key must have admin_allowed: true)
x-nself-session-id   <uuid>   Continue an existing chat session
x-nself-knowledge    true     Inject relevant nSelf knowledge base results into context
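One way to assemble these headers client-side, sketched as a hypothetical nself_headers helper; with the OpenAI Python SDK the resulting dict can be passed as extra_headers on a single request or default_headers when constructing the client:

```python
def nself_headers(session_id=None, admin=False, knowledge=False, app=True):
    """Build the optional ɳClaw headers; features left off send no header."""
    headers = {}
    if app:
        headers["x-nself-app"] = "true"
    if admin:
        headers["x-nself-admin"] = "true"
    if session_id:
        headers["x-nself-session-id"] = session_id
    if knowledge:
        headers["x-nself-knowledge"] = "true"
    return headers
```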

API keys

Gateway API keys are separate from your nSelf internal credentials. Create and manage them with the ɳClaw CLI or the web UI.

# List keys
nself claw api keys list

# Create a key (60 req/min by default)
nself claw api keys create --name myapp

# Create an admin-capable key with a higher rate limit
nself claw api keys create --name admin-tool --admin --rpm 120

# Revoke a key
nself claw api keys revoke <id>

Each key has a per-minute rate limit (rpm_limit). Requests over the limit receive a 429 Too Many Requests response with a Retry-After header.
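Clients should honor Retry-After rather than immediately retrying against the gateway. A minimal retry sketch; RateLimited here is a hypothetical stand-in for your HTTP client's 429 error (the openai SDK, for example, raises openai.RateLimitError):

```python
import time

class RateLimited(Exception):
    """Hypothetical 429 error carrying the server's Retry-After value."""
    def __init__(self, retry_after):
        self.retry_after = retry_after

def call_with_retry(fn, max_attempts=3):
    """Call fn, sleeping for the advertised Retry-After between 429s."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except RateLimited as err:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the 429 to the caller
            time.sleep(err.retry_after)
```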

Usage tracking

Every request through the gateway is logged with the key ID, model, token counts, and timestamp. View usage with:

# All keys
nself claw api usage

# One key
nself claw api usage --key <id>

# JSON output for scripting
nself claw api usage --json

Nginx configuration

To expose the gateway publicly, add a server block to your nginx/conf.d/ directory. The ɳClaw plugin binds internally on port 3713.

# nginx/conf.d/claw-gateway.conf
server {
    listen 443 ssl;
    server_name api.yourdomain.com;

    # SSL managed by nself build
    ssl_certificate     /etc/ssl/certs/fullchain.pem;
    ssl_certificate_key /etc/ssl/certs/privkey.pem;

    location /v1/ {
        proxy_pass http://127.0.0.1:3713/v1/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_http_version 1.1;
        # Required for SSE streaming
        proxy_buffering off;
        proxy_cache off;
        chunked_transfer_encoding on;
    }
}

After adding the file, run nself restart nginx to apply the change. SSL certificates are provisioned automatically by nself build when the domain is listed in your .env.

Test the gateway

# Check that the endpoint is reachable
nself claw api test

# Test with a specific key
nself claw api test --key sk-nself-your-key

# Show available models
nself claw api test --verbose