# Complete Guide for Migrating from Supabase to nself
Supabase and nself both use PostgreSQL as their foundation, but they differ in approach and architecture:
| Aspect | Supabase | nself |
|---|---|---|
| API Layer | PostgREST (REST-first) | Hasura GraphQL (GraphQL-first) |
| Authentication | GoTrue | nHost Auth |
| Storage | Supabase Storage | MinIO (S3-compatible) |
| Functions | Deno Edge Functions | Node.js/Deno Functions |
| Realtime | Custom Realtime server | GraphQL subscriptions |
## Prerequisites

Before you start, ensure you have:

```bash
# Install nself
curl -sSL https://install.nself.org | bash

# Install Supabase CLI
npm install -g supabase

# Install PostgreSQL client tools
brew install postgresql                 # macOS
sudo apt-get install postgresql-client  # Ubuntu

# Install jq for JSON processing
brew install jq          # macOS
sudo apt-get install jq  # Ubuntu
```

Estimated time: 30 minutes
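Before moving on, it can help to confirm that each tool actually landed on your `PATH`. A minimal sketch (the tool list mirrors the installs above; extend it as needed):

```shell
#!/bin/sh
# Report which prerequisite tools are installed
for tool in nself supabase psql jq; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```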
## Project Setup

```bash
mkdir supabase-migration && cd supabase-migration
nself init --wizard
```

Configure your `.env`:

```bash
PROJECT_NAME=my-supabase-migration
ENV=dev
BASE_DOMAIN=localhost

# Database
POSTGRES_DB=myapp_db
POSTGRES_USER=postgres
POSTGRES_PASSWORD=your-secure-password

# Hasura
HASURA_GRAPHQL_ADMIN_SECRET=your-admin-secret
HASURA_GRAPHQL_JWT_SECRET={"type":"HS256","key":"your-jwt-secret-min-32-chars"}

# Auth
AUTH_SERVER_URL=http://auth.localhost
AUTH_CLIENT_URL=http://localhost:3000

# Storage
MINIO_ENABLED=true
MINIO_ROOT_USER=minioadmin
MINIO_ROOT_PASSWORD=minioadmin

# Optional services
REDIS_ENABLED=true
FUNCTIONS_ENABLED=true
MAILPIT_ENABLED=true
```

Then build and start the stack:

```bash
nself build
nself start
nself doctor  # Verify all services are running
```

Estimated time: 2-3 hours
## Database Migration

### Export from Supabase

```bash
# Method 1: Supabase CLI (recommended)
supabase db dump --project-id your-project-id > supabase-dump.sql

# Method 2: pg_dump directly
pg_dump "postgresql://postgres:[password]@db.[project-ref].supabase.co:5432/postgres" > supabase-dump.sql
```

Strip Supabase-internal objects from the dump:

```bash
cat supabase-dump.sql | \
  grep -v "supabase_functions" | \
  grep -v "supabase_migrations" | \
  grep -v "pg_graphql" | \
  sed 's/supabase_admin/postgres/g' > cleaned-dump.sql
```

Important: Preserve the `public` and `auth` schemas. Remove the `supabase_functions` and `realtime` schemas.
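A quick sanity check that the filtering worked: scan the cleaned dump for any remaining references to the schemas you removed. A sketch, assuming the `cleaned-dump.sql` file name from above:

```shell
#!/bin/sh
# Warn if the cleaned dump still mentions any Supabase-internal schema
for schema in supabase_functions supabase_migrations pg_graphql; do
  if grep -q "$schema" cleaned-dump.sql; then
    echo "WARNING: $schema is still referenced in cleaned-dump.sql"
  fi
done
```

If this prints any warnings, re-run the `grep -v` filtering before importing.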
### Import into nself

```bash
nself db import cleaned-dump.sql

# Verify import
nself db shell
```

Inside the psql shell:

```sql
\dt public.*                     -- List public tables
\dt auth.*                       -- List auth tables
SELECT COUNT(*) FROM auth.users; -- Verify data
\q
```

Estimated time: 1-2 hours
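For a broader check than counting a single table, PostgreSQL's statistics views report approximate row counts for every imported table (a sketch; the estimates are refreshed by `ANALYZE`, so run `nself db analyze` first for accurate numbers):

```sql
-- Approximate live row counts per table, largest first
SELECT schemaname, relname, n_live_tup
FROM pg_stat_user_tables
ORDER BY n_live_tup DESC;
```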
## Auth Migration

If you imported the database dump, the `auth.users` table is already migrated. Verify:

```bash
nself db shell
```

```sql
SELECT id, email, created_at FROM auth.users LIMIT 10;
SELECT COUNT(*) FROM auth.users;
```

Critical: Password hashes may not be compatible. Users must reset their passwords for security.
```bash
# Send password reset emails to all users
cat > reset-passwords.sh << 'EOF'
#!/bin/bash
AUTH_URL="http://auth.localhost/v1"
EMAILS=$(nself db shell -c "SELECT email FROM auth.users;" | tail -n +3 | head -n -2)
for EMAIL in $EMAILS; do
  echo "Sending reset to: $EMAIL"
  curl -X POST "$AUTH_URL/user/password-reset" \
    -H "Content-Type: application/json" \
    -d "{\"email\": \"$EMAIL\"}"
done
EOF
chmod +x reset-passwords.sh
./reset-passwords.sh
```

### OAuth Providers

In `.env`:
```bash
AUTH_PROVIDER_GITHUB_ENABLED=true
AUTH_PROVIDER_GITHUB_CLIENT_ID=your-github-client-id
AUTH_PROVIDER_GITHUB_CLIENT_SECRET=your-github-secret

AUTH_PROVIDER_GOOGLE_ENABLED=true
AUTH_PROVIDER_GOOGLE_CLIENT_ID=your-google-client-id
AUTH_PROVIDER_GOOGLE_CLIENT_SECRET=your-google-secret
```

```bash
# Restart auth service
nself restart --service=auth
```

Update OAuth redirect URIs in each provider dashboard:

- Old: `https://[project-ref].supabase.co/auth/v1/callback`
- New: `http://auth.localhost/v1/auth/callback` (dev) or `https://auth.yourdomain.com/v1/auth/callback` (prod)

Estimated time: 2-4 hours
## Permissions (RLS → Hasura)

Supabase and nself both support PostgreSQL RLS, but nself primarily uses Hasura permissions (recommended).

Supabase RLS policy example:

```sql
CREATE POLICY "Users can view their own posts"
ON posts FOR SELECT
USING (auth.uid() = user_id);
```

Hasura permission equivalent (in Hasura Console → Data → posts → Permissions):

```yaml
table: posts
role: user
permissions:
  select:
    filter:
      user_id: { _eq: X-Hasura-User-Id }
    columns: [id, title, content, user_id, created_at]
```

Advantages: faster (no RLS overhead), easier to manage (GUI + GraphQL), better for complex permissions.

If you prefer SQL policies, adapt `auth.uid()` to work with nself:

```sql
-- Supabase:
USING (auth.uid() = user_id)

-- nself:
USING (current_setting('hasura.user.id')::uuid = user_id)
```

Estimated time: 2-3 hours
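The `X-Hasura-User-Id` session variable used in the permission filter comes from the JWT issued by the auth service: Hasura reads its session variables from a namespaced claims object. A sketch of the standard Hasura claim layout (the user id is a placeholder):

```json
{
  "sub": "8f1c2a4e-0000-0000-0000-000000000000",
  "https://hasura.io/jwt/claims": {
    "x-hasura-default-role": "user",
    "x-hasura-allowed-roles": ["user"],
    "x-hasura-user-id": "8f1c2a4e-0000-0000-0000-000000000000"
  }
}
```

These claims must be signed with the key configured in `HASURA_GRAPHQL_JWT_SECRET`.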
## Storage Migration

Download files from Supabase Storage:

```bash
#!/bin/bash
SUPABASE_URL="https://[project-ref].supabase.co"
SUPABASE_KEY="your-anon-key"
BUCKET="default"
OUTPUT_DIR="./storage-backup/$BUCKET"

mkdir -p "$OUTPUT_DIR"

curl "$SUPABASE_URL/storage/v1/object/list/$BUCKET" \
  -H "apikey: $SUPABASE_KEY" | \
  jq -r '.[].name' | \
  while read FILE; do
    echo "Downloading $FILE..."
    curl "$SUPABASE_URL/storage/v1/object/public/$BUCKET/$FILE" \
      -o "$OUTPUT_DIR/$FILE"
  done
```

Set up MinIO buckets:

```bash
# Access MinIO Console: http://minio.localhost
# OR use the mc CLI
docker exec -it $(docker ps -qf "name=minio") mc alias set local http://localhost:9000 minioadmin minioadmin

# Create buckets
docker exec -it $(docker ps -qf "name=minio") mc mb local/default
docker exec -it $(docker ps -qf "name=minio") mc mb local/avatars

# Set public policy (if needed)
docker exec -it $(docker ps -qf "name=minio") mc policy set download local/default
```

Upload the backup into MinIO:

```bash
# Note: ./storage-backup must be visible inside the container
# (mount it as a volume, or copy it in with `docker cp` first)
docker exec -i $(docker ps -qf "name=minio") mc mirror ./storage-backup/default local/default
```

Estimated time: 1-2 hours
## Functions Migration

Supabase Edge Function (Deno):

```typescript
// supabase/functions/hello/index.ts
import { serve } from "https://deno.land/std@0.168.0/http/server.ts"

serve(async (req) => {
  const { name } = await req.json()
  return new Response(
    JSON.stringify({ message: `Hello ${name}!` }),
    { headers: { "Content-Type": "application/json" } }
  )
})
```

nself function (Express-style handler):

```typescript
// functions/src/hello.ts
import { Request, Response } from 'express'

export default async (req: Request, res: Response) => {
  const { name } = req.body
  res.json({ message: `Hello ${name}!` })
}
```

Install dependencies, restart, and test:

```bash
cd functions
npm install
nself restart --service=functions

# Test
curl http://functions.localhost/hello \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{"name": "World"}'
```

## Frontend Code Changes

Before (supabase-js):

```typescript
import { createClient } from '@supabase/supabase-js'

const supabase = createClient(URL, KEY)

// Fetch posts
const { data: posts } = await supabase
  .from('posts')
  .select('id, title, author(name)')
  .eq('published', true)

// Create post
const { data } = await supabase
  .from('posts')
  .insert({ title: 'New', content: 'Content' })
```

After (graphql-request):

```typescript
import { GraphQLClient, gql } from 'graphql-request'

const client = new GraphQLClient(API_URL)

// Fetch posts
const GET_POSTS = gql`
  query {
    posts(where: { published: { _eq: true } }) {
      id title author { name }
    }
  }
`
const { posts } = await client.request(GET_POSTS)

// Create post
const CREATE = gql`
  mutation($title: String!, $content: String!) {
    insert_posts_one(object: { title: $title, content: $content }) { id }
  }
`
await client.request(CREATE, { title: 'New', content: 'Content' })
```

### Realtime → GraphQL Subscriptions

Before (Supabase Realtime):

```typescript
const channel = supabase
  .channel('posts')
  .on('postgres_changes',
    { event: 'INSERT', table: 'posts' },
    (payload) => console.log(payload)
  )
  .subscribe()
```

After (GraphQL subscription):

```typescript
// useSubscription comes from your GraphQL client library (e.g. Apollo or urql)
const SUBSCRIPTION = gql`
  subscription {
    posts(order_by: { created_at: desc }, limit: 1) {
      id title content
    }
  }
`
const { data } = useSubscription(SUBSCRIPTION)
```

## Troubleshooting

### `auth.uid()` does not exist

Symptom: RLS policies fail with "column does not exist: auth.uid()"
Solution:

```sql
-- Replace auth.uid() with the Hasura session variable
-- OLD: USING (auth.uid() = user_id)
-- NEW: USING (current_setting('hasura.user.id')::uuid = user_id)
```

### Storage URLs still point at Supabase

Solution: Update storage URLs in the database:
```sql
UPDATE posts SET image_url = REPLACE(
  image_url,
  'https://[project-ref].supabase.co/storage/v1/object/public/',
  'http://minio.localhost/'
);
```

### PostgREST queries fail

Solution: Rewrite queries in GraphQL syntax (see Frontend Code Changes above)
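After the URL rewrite, it is worth confirming nothing still references the old host. A sketch using the `posts.image_url` column from the example above; repeat for any other tables that store file URLs:

```sql
-- Should return 0 once all URLs have been rewritten
SELECT COUNT(*) AS remaining
FROM posts
WHERE image_url LIKE '%supabase.co%';
```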
## Rollback Plan

If migration fails:

```bash
# Revert environment variables
NEXT_PUBLIC_SUPABASE_URL=https://[project-ref].supabase.co
NEXT_PUBLIC_SUPABASE_ANON_KEY=your-anon-key

# Rebuild and deploy
npm run build
vercel deploy
```

## Production Optimization

After migration, optimize for production:
```bash
# Database optimization
nself db analyze
```

In `.env`:

```bash
# Enable connection pooling
PGBOUNCER_ENABLED=true

# Enable Redis caching
REDIS_ENABLED=true
AUTH_REDIS_ENABLED=true
```

## Summary

Migrating from Supabase to nself requires moving the database, auth, permissions, storage, functions, and frontend code, in that order.
Total time: 8-16 hours

Recommended approach: work through the steps in order, verify each before moving on, and keep your Supabase project available until the nself stack is fully validated (see Rollback Plan).

Need help? Check our support channels or join our Discord community for migration assistance.