Migrating from Supabase

Complete guide for migrating from Supabase to ɳSelf

Migration Overview

  • Difficulty: Medium-High
  • Estimated Time: 8-16 hours
  • Compatibility: 85% (same PostgreSQL core, different API layer)

Why Migrate to ɳSelf?

Supabase and ɳSelf both use PostgreSQL as their foundation, but differ in their approach and architecture:

  • Full infrastructure control - Self-hosted, no vendor lock-in
  • More powerful GraphQL - Hasura vs. pg_graphql extension (schema stitching, remote schemas, actions)
  • Advanced features - Multi-tenancy, billing integration, plugin system
  • Better real-time - Full GraphQL subscriptions vs. Supabase Realtime channels
  • Comprehensive CLI - 56 commands for every operation
  • Cost control - Predictable self-hosting costs

Key Differences

Aspect          Supabase                 ɳSelf
API Layer       PostgREST (REST-first)   Hasura GraphQL (GraphQL-first)
Authentication  GoTrue                   Nhost Auth
Storage         Supabase Storage         MinIO (S3-compatible)
Functions       Deno Edge Functions      Node.js/Deno Functions
Realtime        Custom Realtime server   GraphQL subscriptions

Prerequisites

Before you start, ensure you have:

  • Access to Supabase project (owner/admin)
  • PostgreSQL dump capability (Supabase Dashboard or CLI)
  • List of all RLS policies
  • Storage bucket inventory and policies
  • Edge Functions inventory
  • List of Auth providers configured
  • ɳSelf installed on target server
  • Full backup of Supabase project

Required Tools

# Install nself
curl -sSL https://install.nself.org | bash

# Install Supabase CLI
npm install -g supabase

# Install PostgreSQL client tools
brew install postgresql  # macOS
sudo apt-get install postgresql-client  # Ubuntu

# Install jq for JSON processing
brew install jq  # macOS
sudo apt-get install jq  # Ubuntu

Phase 1: Setup ɳSelf Project

Estimated time: 30 minutes

Initialize Project

mkdir supabase-migration && cd supabase-migration
nself init --wizard

Configure .env to Match Supabase

PROJECT_NAME=my-supabase-migration
ENV=dev
BASE_DOMAIN=localhost

# Database
POSTGRES_DB=myapp_db
POSTGRES_USER=postgres
POSTGRES_PASSWORD=your-secure-password

# Hasura
HASURA_GRAPHQL_ADMIN_SECRET=your-admin-secret
HASURA_GRAPHQL_JWT_SECRET={"type":"HS256","key":"your-jwt-secret-min-32-chars"}

# Auth
AUTH_SERVER_URL=http://auth.localhost
AUTH_CLIENT_URL=http://localhost:3000

# Storage
MINIO_ENABLED=true
MINIO_ROOT_USER=minioadmin
MINIO_ROOT_PASSWORD=minioadmin

# Optional services
REDIS_ENABLED=true
FUNCTIONS_ENABLED=true
MAILPIT_ENABLED=true

Build and Start

nself build
nself start
nself doctor  # Verify all services are running

Phase 2: Database Schema Migration

Estimated time: 2-3 hours

Export from Supabase

# Method 1: Supabase CLI (recommended)
supabase db dump --project-id your-project-id > supabase-dump.sql

# Method 2: pg_dump directly
pg_dump "postgresql://postgres:[password]@db.[project-ref].supabase.co:5432/postgres" > supabase-dump.sql

Clean Supabase-Specific Objects

grep -v -e "supabase_functions" \
        -e "supabase_migrations" \
        -e "pg_graphql" supabase-dump.sql | \
  sed 's/supabase_admin/postgres/g' > cleaned-dump.sql

Important: Preserve public and auth schemas. Remove supabase_functions and realtime schemas.
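
The grep pipeline above works line by line, so a multi-line statement that references a Supabase-only schema can slip through partially. A statement-aware sketch in Python (the schema list and the naive split on `;` at end of line are assumptions; inspect your dump and adjust):

```python
import re

# Schemas that exist only on Supabase and should not be imported.
# This list is an assumption -- inspect your own dump and adjust it.
SUPABASE_SCHEMAS = ("supabase_functions", "supabase_migrations",
                    "pg_graphql", "realtime")

def clean_dump(sql: str) -> str:
    """Drop whole statements that mention a Supabase-only schema,
    then rename the supabase_admin role to postgres."""
    kept = []
    # Naive statement split: ';' at end of line (typical of pg_dump output).
    # Function bodies containing ';' may still need manual review.
    for stmt in re.split(r";\s*\n", sql):
        if any(schema in stmt for schema in SUPABASE_SCHEMAS):
            continue
        kept.append(stmt)
    return ";\n".join(kept).replace("supabase_admin", "postgres")
```

Wrap this in a small script that reads supabase-dump.sql and writes cleaned-dump.sql, then spot-check the result before importing.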

Import to ɳSelf

nself db import cleaned-dump.sql

# Verify import
nself db shell
-- In psql shell
\dt public.*  -- List public tables
\dt auth.*    -- List auth tables
SELECT COUNT(*) FROM auth.users;  -- Verify data
\q
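
Beyond spot checks, it helps to compare per-table row counts between the two databases. A Python sketch (assumes you have saved the output of `SELECT relname, n_live_tup FROM pg_stat_user_tables;` from each side; note that n_live_tup is an estimate, so use COUNT(*) where an exact check matters):

```python
def parse_counts(psql_output: str) -> dict:
    """Parse 'table|count' rows, e.g. psql output in unaligned (-A) mode."""
    counts = {}
    for line in psql_output.strip().splitlines():
        if "|" not in line:
            continue  # skip headers/footers
        table, count = line.split("|", 1)
        counts[table.strip()] = int(count.strip())
    return counts

def diff_counts(source: dict, target: dict) -> list:
    """Return (table, source_count, target_count) tuples that disagree."""
    mismatches = []
    for table, n in sorted(source.items()):
        if target.get(table) != n:
            mismatches.append((table, n, target.get(table)))
    return mismatches
```

An empty diff means every table made it across with the same (estimated) row count.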

Phase 3: Authentication Migration

Estimated time: 1-2 hours

User Data

If you imported the database dump, the auth.users table is already migrated. Verify:

nself db shell
SELECT id, email, created_at FROM auth.users LIMIT 10;
SELECT COUNT(*) FROM auth.users;

Password Reset

Critical: Password hashes may not be compatible. Users must reset passwords for security.

# Send password reset emails to all users
cat > reset-passwords.sh << 'EOF'
#!/bin/bash
AUTH_URL="http://auth.localhost/v1"
EMAILS=$(nself db shell -c "SELECT email FROM auth.users;" | tail -n +3 | head -n -2)  # strip psql header/footer (head -n -2 is GNU-only)

for EMAIL in $EMAILS; do
  echo "Sending reset to: $EMAIL"
  curl -X POST "$AUTH_URL/user/password-reset" \
    -H "Content-Type: application/json" \
    -d "{\"email\": \"$EMAIL\"}"
done
EOF

chmod +x reset-passwords.sh
./reset-passwords.sh

Configure OAuth Providers

# In .env:
AUTH_PROVIDER_GITHUB_ENABLED=true
AUTH_PROVIDER_GITHUB_CLIENT_ID=your-github-client-id
AUTH_PROVIDER_GITHUB_CLIENT_SECRET=your-github-secret

AUTH_PROVIDER_GOOGLE_ENABLED=true
AUTH_PROVIDER_GOOGLE_CLIENT_ID=your-google-client-id
AUTH_PROVIDER_GOOGLE_CLIENT_SECRET=your-google-secret

# Restart auth service
nself restart --service=auth

Update OAuth redirect URIs in provider dashboards:

  • Old: https://[project-ref].supabase.co/auth/v1/callback
  • New: http://auth.localhost/v1/auth/callback (dev) or https://auth.yourdomain.com/v1/auth/callback (prod)

Phase 4: Row Level Security Migration

Estimated time: 2-4 hours

Supabase and ɳSelf both support PostgreSQL RLS, but ɳSelf primarily uses Hasura permissions (recommended).

Option A: Convert to Hasura Permissions (Recommended)

Supabase RLS policy example:

CREATE POLICY "Users can view their own posts"
  ON posts FOR SELECT
  USING (auth.uid() = user_id);

Hasura permission equivalent (in Hasura Console → Data → posts → Permissions):

table: posts
role: user
permissions:
  select:
    filter:
      user_id: { _eq: X-Hasura-User-Id }
    columns: [id, title, content, user_id, created_at]

Advantages: Faster (no RLS overhead), easier to manage (GUI + GraphQL), better for complex permissions.
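
For simple ownership policies, the conversion above is mechanical: `auth.uid() = <column>` becomes `{ <column>: { _eq: X-Hasura-User-Id } }`. A Python sketch of that pattern for bulk-reviewing your policy list (an illustrative helper, not part of nself; anything beyond a plain equality check needs manual conversion):

```python
import re

def rls_to_hasura_filter(using_clause: str) -> dict:
    """Translate 'auth.uid() = <col>' (or the reversed form) into a
    Hasura boolean-expression filter. Only handles simple ownership checks."""
    m = re.fullmatch(
        r"\s*(?:auth\.uid\(\)\s*=\s*(\w+)|(\w+)\s*=\s*auth\.uid\(\))\s*",
        using_clause,
    )
    if not m:
        raise ValueError(f"manual conversion needed: {using_clause!r}")
    column = m.group(1) or m.group(2)
    return {column: {"_eq": "X-Hasura-User-Id"}}
```

Run it over each policy's USING clause; anything that raises goes on your manual-conversion list.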

Option B: Keep PostgreSQL RLS

If you prefer SQL policies, modify auth.uid() to work with ɳSelf:

-- Supabase:
USING (auth.uid() = user_id)

-- ɳSelf:
USING (current_setting('hasura.user.id')::uuid = user_id)

Phase 5: Storage Migration

Estimated time: 2-3 hours

Download from Supabase

#!/bin/bash
SUPABASE_URL="https://[project-ref].supabase.co"
SUPABASE_KEY="your-anon-key"  # listing private buckets requires the service_role key
BUCKET="default"
OUTPUT_DIR="./storage-backup/$BUCKET"

mkdir -p "$OUTPUT_DIR"

# The list endpoint is a POST that takes a JSON body
curl -X POST "$SUPABASE_URL/storage/v1/object/list/$BUCKET" \
  -H "apikey: $SUPABASE_KEY" \
  -H "Content-Type: application/json" \
  -d '{"prefix": "", "limit": 1000}' | \
  jq -r '.[].name' | \
  while read -r FILE; do
    echo "Downloading $FILE..."
    mkdir -p "$(dirname "$OUTPUT_DIR/$FILE")"  # objects can have nested paths
    curl "$SUPABASE_URL/storage/v1/object/public/$BUCKET/$FILE" \
      -o "$OUTPUT_DIR/$FILE"
  done

Create MinIO Buckets

# Access MinIO Console: http://minio.localhost
# OR use mc CLI
docker exec -it $(docker ps -qf "name=minio") mc alias set local http://localhost:9000 minioadmin minioadmin

# Create buckets
docker exec -it $(docker ps -qf "name=minio") mc mb local/default
docker exec -it $(docker ps -qf "name=minio") mc mb local/avatars

# Set public read policy (if needed); newer mc releases use `mc anonymous set`
docker exec -it $(docker ps -qf "name=minio") mc policy set download local/default

Upload to MinIO

# The backup lives on the host, so copy it into the container before mirroring
docker cp ./storage-backup/default "$(docker ps -qf 'name=minio')":/tmp/default
docker exec -it $(docker ps -qf "name=minio") mc mirror /tmp/default local/default

Phase 6: Edge Functions Migration

Estimated time: 1-2 hours

Supabase Edge Function

// supabase/functions/hello/index.ts
import { serve } from "https://deno.land/std@0.168.0/http/server.ts"

serve(async (req) => {
  const { name } = await req.json()
  return new Response(
    JSON.stringify({ message: `Hello ${name}!` }),
    { headers: { "Content-Type": "application/json" } }
  )
})

ɳSelf Function (Node.js)

// functions/src/hello.ts
import { Request, Response } from 'express'

export default async (req: Request, res: Response) => {
  const { name } = req.body
  res.json({ message: `Hello ${name}!` })
}

Deploy Functions

cd functions
npm install
nself restart --service=functions

# Test
curl http://functions.localhost/hello \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{"name": "World"}'

Frontend Code Changes

REST API → GraphQL

Before (Supabase)

import { createClient } from '@supabase/supabase-js'

const supabase = createClient(URL, KEY)

// Fetch posts
const { data: posts } = await supabase
  .from('posts')
  .select('id, title, author(name)')
  .eq('published', true)

// Create post
const { data } = await supabase
  .from('posts')
  .insert({ title: 'New', content: 'Content' })

After (ɳSelf)

import { GraphQLClient, gql } from 'graphql-request'

// Requests to Hasura typically also need auth, e.g. an
// Authorization: Bearer <jwt> header (or x-hasura-admin-secret
// for trusted server-side code)
const client = new GraphQLClient(API_URL)

// Fetch posts
const GET_POSTS = gql`
  query {
    posts(where: { published: { _eq: true } }) {
      id title author { name }
    }
  }
`
const { posts } = await client.request(GET_POSTS)

// Create post
const CREATE = gql`
  mutation($title: String!, $content: String!) {
    insert_posts_one(object: { title: $title, content: $content }) { id }
  }
`
await client.request(CREATE, { title: 'New', content: 'Content' })
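
Most PostgREST filter operators used with `.eq()`, `.gt()`, and friends have direct Hasura counterparts (`_eq`, `_gt`, ...). A lookup sketch for porting filters (the mapping is illustrative and not exhaustive; full-text and range operators need case-by-case porting):

```python
# Common PostgREST filter operators and their Hasura counterparts.
# Not exhaustive -- full-text search and range operators are out of scope.
POSTGREST_TO_HASURA = {
    "eq": "_eq", "neq": "_neq",
    "gt": "_gt", "gte": "_gte",
    "lt": "_lt", "lte": "_lte",
    "like": "_like", "ilike": "_ilike",
    "is": "_is_null",   # .is('col', null) -> { col: { _is_null: true } }
    "in": "_in",
}

def port_filter(column: str, op: str, value):
    """Build a Hasura where-clause fragment from a PostgREST-style filter."""
    try:
        hasura_op = POSTGREST_TO_HASURA[op]
    except KeyError:
        raise ValueError(f"no direct Hasura equivalent for {op!r}") from None
    if hasura_op == "_is_null":
        value = value is None  # Hasura expresses IS NULL as a boolean
    return {column: {hasura_op: value}}
```

For example, `.eq('published', true)` becomes the `where: { published: { _eq: true } }` argument seen in the query above.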

Realtime Subscriptions

Before (Supabase)

const channel = supabase
  .channel('posts')
  .on('postgres_changes',
    { event: 'INSERT', table: 'posts' },
    (payload) => console.log(payload)
  )
  .subscribe()

After (ɳSelf)

const SUBSCRIPTION = gql`
  subscription {
    posts(order_by: { created_at: desc }, limit: 1) {
      id title content
    }
  }
`

// e.g. with Apollo Client's useSubscription hook over a WebSocket link
const { data } = useSubscription(SUBSCRIPTION)

Common Pitfalls

1. RLS auth.uid() Not Working

Symptom: RLS policies fail with "function auth.uid() does not exist"

Solution:

-- Replace auth.uid() with Hasura session variable
-- OLD: USING (auth.uid() = user_id)
-- NEW: USING (current_setting('hasura.user.id')::uuid = user_id)

2. Storage URLs Different

Solution: Update storage URLs in database

UPDATE posts SET image_url = REPLACE(
  image_url,
  'https://[project-ref].supabase.co/storage/v1/object/public/',
  'http://minio.localhost/'
);
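
If object URLs also live in other tables or inside JSON columns, it can be easier to rewrite them in application code during export/import. A Python sketch using the same prefixes as the UPDATE above (the `[project-ref]` placeholder stays yours to fill in; point NEW_PREFIX at your production MinIO URL when migrating prod):

```python
OLD_PREFIX = "https://[project-ref].supabase.co/storage/v1/object/public/"
NEW_PREFIX = "http://minio.localhost/"  # use your production MinIO URL in prod

def rewrite_storage_url(url: str) -> str:
    """Rewrite a Supabase public-object URL to its MinIO equivalent.
    URLs that don't match the Supabase prefix are returned unchanged."""
    if url.startswith(OLD_PREFIX):
        return NEW_PREFIX + url[len(OLD_PREFIX):]
    return url
```

Because non-matching URLs pass through untouched, it is safe to run over mixed-content columns.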

3. PostgREST Query Syntax

Solution: Rewrite queries in GraphQL syntax (see Frontend Code Changes above)


Rollback Procedure

If migration fails:

  1. Keep Supabase active - Don't delete project until fully tested
  2. DNS rollback - Change DNS back to Supabase IP (5-60 minute propagation)
  3. Frontend rollback - Revert environment variables and redeploy
# Revert environment variables
NEXT_PUBLIC_SUPABASE_URL=https://[project-ref].supabase.co
NEXT_PUBLIC_SUPABASE_ANON_KEY=your-anon-key

# Rebuild and deploy
npm run build
vercel deploy

Performance Tuning

After migration, optimize for production:

# Database optimization
nself db analyze

# In .env: enable connection pooling
PGBOUNCER_ENABLED=true

# In .env: enable Redis caching
REDIS_ENABLED=true
AUTH_REDIS_ENABLED=true

Conclusion

Migrating from Supabase to ɳSelf requires:

  • Database export/import (straightforward)
  • RLS → Hasura permissions conversion (medium effort)
  • REST → GraphQL code changes (high effort)
  • Storage migration (medium effort)
  • Functions rewrite (medium effort)

Total Time: 8-16 hours

Recommended Approach:

  1. Migrate to staging first
  2. Test thoroughly (2-4 weeks)
  3. Migrate production during low-traffic period
  4. Keep Supabase running for 2 weeks as fallback

Need help? Check our support channels or join our Discord community for migration assistance.