
Application

Out-of-the-box config templates for launching applications that use PostgreSQL as the core database.

PostgreSQL is the most popular database in the world, with countless classic applications built on top of it. Pigsty provides ready-to-use provisioning templates for them.

Pigsty’s templates let applications use external PostgreSQL, MinIO, and Etcd services that are created and managed by Pigsty, complete with full backup and recovery, high availability, monitoring and alerting, Infrastructure as Code (IaC), connection pooling, load balancing, and more. They also address “last mile” issues like infrastructure provisioning, Nginx forwarding, and certificate issuance. Compared to using Docker to launch an entire software stack, database included, in “toy mode,” this provides the capabilities needed for “enterprise-grade” deployments.


1 - Self-Hosting Dify

How to use Pigsty to build Dify, an AI workflow / LLMOps platform, with external PostgreSQL, PGVector, and Redis as storage.

Dify is a generative AI application innovation engine and open-source LLM application development platform. It provides capabilities from Agent construction to AI workflow orchestration, RAG retrieval, and model management, helping users easily build and operate generative AI-native applications.

Pigsty provides support for self-hosting Dify, allowing you to deploy Dify with a single command while storing critical state in externally managed PostgreSQL. You can use pgvector in the same PostgreSQL instance as a vector database, further simplifying deployment.

Pigsty v3.4 currently supports Dify version v1.1.3.


Quick Start

On a fresh Linux x86 / ARM server running a compatible distribution, execute:

curl -fsSL https://repo.pigsty.cc/get | bash; cd ~/pigsty 
./bootstrap                # Install Pigsty dependencies
./configure -c app/dify    # Use Dify configuration template 
vi pigsty.yml              # edit passwords, domain, keys, etc.

./install.yml              # Install Pigsty
./docker.yml               # Install Docker & Compose
./app.yml                  # Install Dify

Dify listens on port 5001 by default. You can access it via browser at http://<ip>:5001 and set up your initial user credentials to log in.

After Dify starts, you can install various extensions, configure system models, and begin using it!


Why Self-Host

There are many reasons to self-host Dify, but the primary motivation is data security. The Docker Compose template provided by Dify uses basic default database images, lacking enterprise-grade features like high availability, disaster recovery, monitoring, IaC, and PITR capabilities.

Pigsty elegantly solves these issues for Dify, deploying all components with a single command driven by configuration files, and using mirrors to resolve China region access challenges. This makes Dify deployment and delivery incredibly smooth. It handles the PostgreSQL primary database, PGVector vector database, MinIO object storage, Redis, Prometheus monitoring, Grafana visualization, Nginx reverse proxy, and free HTTPS certificates in one go.

Pigsty ensures all Dify state is stored in externally managed services, including metadata in PostgreSQL and other data in the filesystem. Therefore, the Dify instance launched via Docker Compose becomes a stateless application that can be destroyed and rebuilt at any time, greatly simplifying operations.
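
For example, since all state lives outside the containers, you can destroy and recreate the Dify containers at will. A minimal sketch, assuming the app is rendered under /opt/dify per Pigsty’s app convention:

cd /opt/dify && docker compose down      # tear down the stateless Dify containers
cd ~/pigsty && ./app.yml -t app_launch   # recreate them from the rendered config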


Installation

Let’s start with a single-node Dify deployment. We’ll cover production high-availability deployment later.

First, use Pigsty’s standard installation process to install the PostgreSQL instance required by Dify:

curl -fsSL https://repo.pigsty.cc/get | bash; cd ~/pigsty
./bootstrap               # Prepare Pigsty dependencies
./configure -c app/dify   # Use Dify application template
vi pigsty.yml             # Edit config file, modify domain and passwords
./install.yml             # Install Pigsty and various databases

When you use the ./configure -c app/dify command, Pigsty automatically generates the configuration file based on the conf/app/dify.yml template and your current environment. You should modify passwords, domain, and other relevant parameters in the generated pigsty.yml configuration file according to your actual needs, then use ./install.yml to execute the standard installation process.
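
For instance, a sketch that pins the node’s internal primary IP explicitly rather than answering the interactive prompt:

./configure -c app/dify -i 10.10.10.10   # use the dify template with the given primary IP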

Next, run docker.yml to install Docker and Docker Compose, then use app.yml to complete Dify deployment:

./docker.yml              # Install Docker and Docker Compose
./app.yml                 # Deploy Dify stateless components using Docker

You can access the Dify Web management interface at http://<your_ip_address>:5001 on your local network.

You will be prompted to set up the initial username, email, and password on first login.

You can also use the locally resolved placeholder domain dify.pigsty, or follow the configuration below to use a real domain with HTTPS certificates.
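
For local resolution of the placeholder domain, a minimal sketch for your client machine (replace the IP with your server’s address):

echo '10.10.10.10 dify.pigsty' | sudo tee -a /etc/hosts   # add a static hosts entry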


Configuration

When you use the ./configure -c app/dify command for configuration, Pigsty automatically generates the configuration file based on the conf/app/dify.yml template and your current environment. Here’s a detailed explanation of the default configuration:

all:
  children:

    # the dify application
    dify:
      hosts: { 10.10.10.10: {} }
      vars:
        app: dify   # specify app name to be installed (in the apps)
        apps:       # define all applications
          dify:     # app name, should have corresponding ~/pigsty/app/dify folder
            file:   # data directory to be created
              - { path: /data/dify ,state: directory ,mode: 0755 }
            conf:   # override /opt/dify/.env config file

              # change domain, mirror, proxy, secret key
              NGINX_SERVER_NAME: dify.pigsty
              # A secret key for signing and encryption, gen with `openssl rand -base64 42` (CHANGE PASSWORD!)
              SECRET_KEY: sk-9f73s3ljTXVcMT3Blb3ljTqtsKiGHXVcMT3BlbkFJLK7U
              # expose DIFY nginx service with port 5001 by default
              DIFY_PORT: 5001
              # where to store dify files? the default is ./volume, we'll use another volume created above
              DIFY_DATA: /data/dify

              # proxy and mirror settings
              #PIP_MIRROR_URL: https://pypi.tuna.tsinghua.edu.cn/simple
              #SANDBOX_HTTP_PROXY: http://10.10.10.10:12345
              #SANDBOX_HTTPS_PROXY: http://10.10.10.10:12345

              # database credentials
              DB_USERNAME: dify
              DB_PASSWORD: difyai123456
              DB_HOST: 10.10.10.10
              DB_PORT: 5432
              DB_DATABASE: dify
              VECTOR_STORE: pgvector
              PGVECTOR_HOST: 10.10.10.10
              PGVECTOR_PORT: 5432
              PGVECTOR_USER: dify
              PGVECTOR_PASSWORD: difyai123456
              PGVECTOR_DATABASE: dify
              PGVECTOR_MIN_CONNECTION: 2
              PGVECTOR_MAX_CONNECTION: 10

    pg-meta:
      hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }
      vars:
        pg_cluster: pg-meta
        pg_users:
          - { name: dify ,password: difyai123456 ,pgbouncer: true ,roles: [ dbrole_admin ] ,superuser: true ,comment: dify superuser }
        pg_databases:
          - { name: dify ,owner: dify ,revokeconn: true ,comment: dify main database  }
        pg_hba_rules:
          - { user: dify ,db: all ,addr: 172.17.0.0/16  ,auth: pwd ,title: 'allow dify access from local docker network' }
        node_crontab: [ '00 01 * * * postgres /pg/bin/pg-backup full' ] # make a full backup every 1am

    infra: { hosts: { 10.10.10.10: { infra_seq: 1 } } }
    etcd:  { hosts: { 10.10.10.10: { etcd_seq: 1 } }, vars: { etcd_cluster: etcd } }
    #minio: { hosts: { 10.10.10.10: { minio_seq: 1 } }, vars: { minio_cluster: minio } }

  vars:                               # global variables
    version: v3.4.1                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default|china|europe
    node_tune: oltp                   # node tuning specs: oltp,olap,tiny,crit
    pg_conf: oltp.yml                 # pgsql tuning specs: {oltp,olap,tiny,crit}.yml

    docker_enabled: true              # enable docker on app group
    #docker_registry_mirrors: ["https://docker.1ms.run"] # use mirror in mainland china

    proxy_env:                        # global proxy env when downloading packages & pull docker images
      no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.*,*.tsinghua.edu.cn"
      #http_proxy:  127.0.0.1:12345 # add your proxy env here for downloading packages or pull images
      #https_proxy: 127.0.0.1:12345 # usually the proxy is format as http://user:pass@proxy.xxx.com
      #all_proxy:   127.0.0.1:12345

    infra_portal: # domain names and upstream servers
      home         : { domain: h.pigsty }
      grafana      : { domain: g.pigsty ,endpoint: "${admin_ip}:3000" , websocket: true }
      prometheus   : { domain: p.pigsty ,endpoint: "${admin_ip}:9090" }
      alertmanager : { domain: a.pigsty ,endpoint: "${admin_ip}:9093" }
      blackbox     : { endpoint: "${admin_ip}:9115" }
      loki         : { endpoint: "${admin_ip}:3100" }
      #minio        : { domain: m.pigsty    ,endpoint: "${admin_ip}:9001" ,scheme: https ,websocket: true }
      dify:                            # nginx server config for dify
        domain: dify.pigsty            # REPLACE WITH YOUR OWN DOMAIN!
        endpoint: "10.10.10.10:5001"   # dify service endpoint: IP:PORT
        websocket: true                # add websocket support
        certbot: dify.pigsty           # certbot cert name, apply with `make cert`

    #----------------------------------#
    # Credential: CHANGE THESE PASSWORDS
    #----------------------------------#
    #grafana_admin_username: admin
    grafana_admin_password: pigsty
    #pg_admin_username: dbuser_dba
    pg_admin_password: DBUser.DBA
    #pg_monitor_username: dbuser_monitor
    pg_monitor_password: DBUser.Monitor
    #pg_replication_username: replicator
    pg_replication_password: DBUser.Replicator
    #patroni_username: postgres
    patroni_password: Patroni.API
    #haproxy_admin_username: admin
    haproxy_admin_password: pigsty
    #minio_access_key: minioadmin
    minio_secret_key: minioadmin      # minio root secret key, `minioadmin` by default

    repo_extra_packages: [ pg17-main ]
    pg_version: 17

Checklist

Here’s a checklist of configuration items you need to focus on:

  • Hardware/Software: prepare the required machine resources: a Linux x86_64/arm64 server with a fresh installation of a mainstream Linux distribution
  • Network/Permissions: passwordless SSH login, and passwordless sudo privileges for the admin user
  • Ensure the machine has a static IPv4 address on the internal network and can access the internet
  • If accessing via the public network, ensure you have an available domain name pointing to the current node’s public IP address
  • Ensure you are using the app/dify configuration template and modify parameters as needed
    • Run configure -c app/dify and enter the node’s internal primary IP address, or specify it via the -i <primary_ip> command line parameter
    • Have you modified all password-related configuration parameters? [Optional]
    • Have you modified the PostgreSQL cluster business user passwords and the app configurations that use these passwords?
      • The default username dify and password difyai123456 are generated by Pigsty for Dify; please modify them according to your actual situation
      • In Dify’s configuration block, please modify DB_USERNAME, DB_PASSWORD, PGVECTOR_USER, PGVECTOR_PASSWORD, and other parameters accordingly
    • Have you modified Dify’s default encryption key?
      • You can use openssl rand -base64 42 to randomly generate a secret string and fill it into the SECRET_KEY parameter
    • Have you modified the domain used by Dify?
      • Replace the placeholder domain dify.pigsty with your actual domain, e.g. dify.pigsty.cc
      • You can use sed -ie 's/dify.pigsty/dify.pigsty.cc/g' pigsty.yml to modify Dify’s domain (see the sketch after this list)
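
For example, a sketch that applies the secret key and domain changes in one pass (assuming your real domain is dify.pigsty.cc):

SECRET_KEY="sk-$(openssl rand -base64 42)"                        # generate a random secret key
sed -ie "s|SECRET_KEY: .*|SECRET_KEY: ${SECRET_KEY}|" pigsty.yml  # replace the default secret
sed -ie 's/dify.pigsty/dify.pigsty.cc/g' pigsty.yml               # replace the placeholder domain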

Domain & SSL

If you want to use a real domain with HTTPS certificates, you need to modify the following in the pigsty.yml configuration file:

  • The dify domain under the infra_portal parameter
  • Preferably an email address in certbot_email for receiving certificate expiration notifications
  • Dify’s NGINX_SERVER_NAME parameter, set to your actual domain
all:
  children:                            # Cluster definition
    dify:                              # Dify group
      vars:                            # Dify group variables
        apps:                          # Application configuration
          dify:                        # Dify application definition
            conf:                      # Dify application configuration
              NGINX_SERVER_NAME: dify.pigsty

  vars:                                # Global parameters
    #certbot_sign: true                # Use Certbot to apply for free HTTPS certificate
    certbot_email: your@email.com      # Email for certificate application, used for expiration notifications, optional
    infra_portal:                      # Configure Nginx server
      dify:                            # Dify server definition
        domain: dify.pigsty            # Please replace with your own domain here!
        endpoint: "10.10.10.10:5001"   # Please specify Dify's IP and port here (default auto-configured)
        websocket: true                # Dify needs websocket enabled 
        certbot: dify.pigsty           # Specify Certbot certificate name

Use the following command to apply for Nginx certificates:

# Apply for certificates, can also manually execute /etc/nginx/sign-cert script
make cert

# The above Makefile shortcut command actually executes the following playbook tasks:
./infra.yml -t nginx_certbot,nginx_reload -e certbot_sign=true

Execute the app.yml playbook to redeploy the Dify service so the NGINX_SERVER_NAME configuration takes effect:

./app.yml

File Backup

You can use restic to back up Dify’s filesystem. Dify’s data files are in the /data/dify directory. Initialize a backup repository with the following commands:

export RESTIC_REPOSITORY=/data/backups/dify   # Specify dify backup directory
export RESTIC_PASSWORD=some-strong-password   # Specify backup encryption password
mkdir -p ${RESTIC_REPOSITORY}                 # Create dify backup directory
restic init

After creating the Restic backup repository, you can back up Dify with the following commands:

export RESTIC_REPOSITORY=/data/backups/dify   # Specify dify backup directory
export RESTIC_PASSWORD=some-strong-password   # Specify backup encryption password

restic backup /data/dify                      # Back up the /data/dify directory to the repository
restic snapshots                              # List backup snapshots
restic restore -t /data/dify 0b11f778         # Restore snapshot 0b11f778 (use your own snapshot ID)
restic check                                  # Periodically check repository integrity

Another, more reliable way is to use JuiceFS to mount MinIO object storage at the /data/dify directory, so you can use MinIO/S3 to store file state.

If you want to store all data in PostgreSQL, consider using JuiceFS to store filesystem data in PostgreSQL.

For example, you can create another dify_fs database and use it as JuiceFS’s metadata storage. A minimal sketch for creating it, assuming the dify superuser defined in the config above:
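
psql postgres://dify:difyai123456@10.10.10.10:5432/dify -c 'CREATE DATABASE dify_fs OWNER dify'

Then format and mount the filesystem: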

METAURL=postgres://dify:difyai123456@10.10.10.10:5432/dify_fs
OPTIONS=(
  --storage postgres
  --bucket 10.10.10.10:5432/dify_fs
  --access-key dify
  --secret-key difyai123456
  ${METAURL}
  jfs                                  # name of the JuiceFS filesystem
)
juicefs format "${OPTIONS[@]}"         # Create a PG filesystem
juicefs mount ${METAURL} /data/dify -d # Mount to /data/dify directory in background
juicefs bench /data/dify               # Test performance
juicefs umount /data/dify              # Stop mounting


2 - Self-Hosting Odoo

How to self-host the open source ERP: Odoo

Odoo is an open-source enterprise resource planning (ERP) software that provides a full suite of business applications, including CRM, sales, purchasing, inventory, production, accounting, and other management functions. Odoo is a typical web application that uses PostgreSQL as the underlying database.

As Odoo puts it: “All your business on one platform. Simple, efficient, yet affordable.”


Get Started

curl -fsSL https://repo.pigsty.io/get | bash; cd ~/pigsty 
./bootstrap                # install ansible
./configure -c app/odoo    # use odoo config (please CHANGE CREDENTIALS in pigsty.yml)
./install.yml              # install pigsty
./docker.yml               # install docker compose
./app.yml                  # launch odoo stateless part with docker

Config Template

The conf/app/odoo.yml config template defines the resources required for a single-node Odoo instance.

all:
  children:

    # the odoo application (default username & password: admin/admin)
    odoo:
      hosts: { 10.10.10.10: {} }
      vars:
        app: odoo   # specify app name to be installed (in the apps)
        apps:       # define all applications
          odoo:     # app name, should have corresponding ~/app/odoo folder
            file:   # optional directory to be created
              - { path: /data/odoo         ,state: directory, owner: 100, group: 101 }
              - { path: /data/odoo/webdata ,state: directory, owner: 100, group: 101 }
              - { path: /data/odoo/addons  ,state: directory, owner: 100, group: 101 }
            conf:   # override /opt/<app>/.env config file
              PG_HOST: 10.10.10.10            # postgres host
              PG_PORT: 5432                   # postgres port
              PG_USERNAME: odoo               # postgres user
              PG_PASSWORD: DBUser.Odoo        # postgres password
              ODOO_PORT: 8069                 # odoo app port
              ODOO_DATA: /data/odoo/webdata   # odoo webdata
              ODOO_ADDONS: /data/odoo/addons  # odoo plugins
              ODOO_DBNAME: odoo               # odoo database name
              ODOO_VERSION: 18.0              # odoo image version

    # the odoo database
    pg-odoo:
      hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }
      vars:
        pg_cluster: pg-odoo
        pg_users:
          - { name: odoo    ,password: DBUser.Odoo ,pgbouncer: true ,roles: [ dbrole_admin ] ,createdb: true ,comment: admin user for odoo service }
          - { name: odoo_ro ,password: DBUser.Odoo ,pgbouncer: true ,roles: [ dbrole_readonly ]  ,comment: read only user for odoo service  }
          - { name: odoo_rw ,password: DBUser.Odoo ,pgbouncer: true ,roles: [ dbrole_readwrite ] ,comment: read write user for odoo service }
        pg_databases:
          - { name: odoo ,owner: odoo ,revokeconn: true ,comment: odoo main database  }
        pg_hba_rules:
          - { user: all ,db: all ,addr: 172.17.0.0/16  ,auth: pwd ,title: 'allow access from local docker network' }
          - { user: dbuser_view , db: all ,addr: infra ,auth: pwd ,title: 'allow grafana dashboard access cmdb from infra nodes' }

    infra: { hosts: { 10.10.10.10: { infra_seq: 1 } } }
    etcd:  { hosts: { 10.10.10.10: { etcd_seq: 1 } }, vars: { etcd_cluster: etcd } }
    #minio: { hosts: { 10.10.10.10: { minio_seq: 1 } }, vars: { minio_cluster: minio } }

  vars:                               # global variables
    version: v3.3.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default|china|europe
    node_tune: oltp                   # node tuning specs: oltp,olap,tiny,crit
    pg_conf: oltp.yml                 # pgsql tuning specs: {oltp,olap,tiny,crit}.yml

    docker_enabled: true              # enable docker on app group
    #docker_registry_mirrors: ["https://docker.m.daocloud.io"] # use dao cloud mirror in mainland china
    proxy_env:                        # global proxy env when downloading packages & pull docker images
      no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.*,*.tsinghua.edu.cn"
      #http_proxy:  127.0.0.1:12345 # add your proxy env here for downloading packages or pull images
      #https_proxy: 127.0.0.1:12345 # usually the proxy is format as http://user:pass@proxy.xxx.com
      #all_proxy:   127.0.0.1:12345

    infra_portal: # domain names and upstream servers
      home         : { domain: h.pigsty }
      grafana      : { domain: g.pigsty ,endpoint: "${admin_ip}:3000" , websocket: true }
      prometheus   : { domain: p.pigsty ,endpoint: "${admin_ip}:9090" }
      alertmanager : { domain: a.pigsty ,endpoint: "${admin_ip}:9093" }
      blackbox     : { endpoint: "${admin_ip}:9115" }
      loki         : { endpoint: "${admin_ip}:3100" }
      minio        : { domain: m.pigsty    ,endpoint: "${admin_ip}:9001" ,scheme: https ,websocket: true }
      odoo         : { domain: odoo.pigsty, endpoint: "127.0.0.1:8069"   ,websocket: true }  #cert: /path/to/crt ,key: /path/to/key
      # setup your own domain name here ^^^, or use default domain name, or ip + 8069 port direct access
      # certbot --nginx --agree-tos --email your@email.com -n -d odoo.your.domain    # replace with your email & odoo domain

    #----------------------------------#
    # Credential: CHANGE THESE PASSWORDS
    #----------------------------------#
    #grafana_admin_username: admin
    grafana_admin_password: pigsty
    #pg_admin_username: dbuser_dba
    pg_admin_password: DBUser.DBA
    #pg_monitor_username: dbuser_monitor
    pg_monitor_password: DBUser.Monitor
    #pg_replication_username: replicator
    pg_replication_password: DBUser.Replicator
    #patroni_username: postgres
    patroni_password: Patroni.API
    #haproxy_admin_username: admin
    haproxy_admin_password: pigsty

    repo_modules: infra,node,pgsql,docker
    repo_packages: [ node-bootstrap, infra-package, infra-addons, node-package1, node-package2, pgsql-utility, docker ]
    repo_extra_packages: [ pg17-main ]
    pg_version: 17

Get Started

Check the .env file for configurable environment variables:

# https://hub.docker.com/_/odoo#
PG_HOST=10.10.10.10
PG_PORT=5432
PG_USER=dbuser_odoo
PG_PASS=DBUser.Odoo
ODOO_PORT=8069

Then launch odoo with:

make up  # docker compose up

Visit http://odoo.pigsty or http://10.10.10.10:8069

Makefile

make up         # pull up odoo with docker compose in minimal mode
make run        # launch odoo with docker , local data dir and external PostgreSQL
make view       # print odoo access point
make log        # tail -f odoo logs
make info       # introspect odoo with jq
make stop       # stop odoo container
make clean      # remove odoo container
make pull       # pull latest odoo image
make rmi        # remove odoo image
make save       # save odoo image to /tmp/docker/odoo.tgz
make load       # load odoo image from /tmp/docker/odoo.tgz

Use External PostgreSQL

You can use an external PostgreSQL instance for Odoo. Odoo will create its own database during setup, so you don’t need to create it manually:

pg_users: [ { name: dbuser_odoo ,password: DBUser.Odoo ,pgbouncer: true ,roles: [ dbrole_admin ]    ,comment: admin user for odoo database } ]
pg_databases: [ { name: odoo ,owner: dbuser_odoo ,revokeconn: true ,comment: odoo primary database } ]

And create business user & database with:

bin/pgsql-user  pg-meta  dbuser_odoo
#bin/pgsql-db    pg-meta  odoo     # odoo will create the database during setup

Check connectivity:

psql postgres://dbuser_odoo:DBUser.Odoo@10.10.10.10:5432/odoo

Expose Odoo Service

Expose the odoo web service via the nginx portal:

    infra_portal:                     # domain names and upstream servers
      home         : { domain: h.pigsty }
      grafana      : { domain: g.pigsty    ,endpoint: "${admin_ip}:3000" , websocket: true }
      prometheus   : { domain: p.pigsty    ,endpoint: "${admin_ip}:9090" }
      alertmanager : { domain: a.pigsty    ,endpoint: "${admin_ip}:9093" }
      blackbox     : { endpoint: "${admin_ip}:9115" }
      loki         : { endpoint: "${admin_ip}:3100" }
      odoo         : { domain: odoo.pigsty, endpoint: "127.0.0.1:8069", websocket: true }  # <------ add this line

./infra.yml -t nginx   # setup nginx infra portal

Odoo Addons

There are lots of Odoo modules available in the community; you can install them by downloading them and placing them in the addons folder.

volumes:
  - ./addons:/mnt/extra-addons

You can mount the ./addons dir to /mnt/extra-addons in the container, then download and unzip addon modules into that folder, as sketched below.
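
A minimal sketch, assuming a module archive has been downloaded to /tmp (the file name is hypothetical; uid/gid 100:101 match the file ownership in the config above):

unzip /tmp/accounting_kit.zip -d /data/odoo/addons   # unzip the addon into the mounted host dir
chown -R 100:101 /data/odoo/addons                   # restore ownership for the odoo container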

To enable an addon module, first enter Developer mode:

Settings -> Generic Settings -> Developer Tools -> Activate the developer Mode

Then go to Apps -> Update Apps List; you can then find the extra addons and install them from the panel.

Frequently used free addons: Accounting Kit


Demo

Check public demo: http://odoo.pigsty.io, username: test@pigsty.io, password: pigsty

If you want to access Odoo through SSL, you have to trust files/pki/ca/ca.crt in your browser (or use the dirty hack of typing thisisunsafe in Chrome).

3 - Self-Hosting Supabase on PostgreSQL

A tutorial for self-hosting production-grade Supabase on local/cloud VMs and bare metal.

Supabase is great; owning your own Supabase is even better. Here’s a comprehensive tutorial for self-hosting production-grade Supabase on local/cloud VMs and bare metal.

curl -fsSL https://repo.pigsty.io/get | bash; cd ~/pigsty 
./bootstrap                # install ansible
./configure -c app/supa    # use supabase config (please CHANGE CREDENTIALS in pigsty.yml)
vi pigsty.yml              # edit domain name, password, keys,... 
./install.yml              # install pigsty
./docker.yml               # install docker compose
./app.yml                  # launch supabase stateless part with docker

What is Supabase?

Supabase is an open-source Firebase alternative, a Backend as a Service (BaaS).

Supabase wraps the PostgreSQL kernel and vector extensions, along with authentication, realtime subscriptions, edge functions, object storage, and instant REST and GraphQL APIs generated from your postgres schema. It lets you skip most backend work, requiring only database design and frontend skills to ship quickly.

Currently, Supabase may be the most popular open-source project in the PostgreSQL ecosystem, boasting over 74,000 stars on GitHub. It has become quite popular among developers and startups, since it offers a generous free plan, just like Cloudflare and Neon.


Why Self-Host?

Supabase’s slogan is: “Build in a weekend, Scale to millions”. It indeed has great cost-effectiveness at small scale (4c8g). But there is no doubt that when you really grow to millions of users, some will choose to self-host their own Supabase, for functionality, performance, cost, and other reasons.

That’s where Pigsty comes in. Pigsty provides a complete one-click self-hosting solution for Supabase. Self-hosted Supabase can enjoy full PostgreSQL monitoring, IaC, PITR, and high availability capabilities.

You can run the latest PostgreSQL 17 (or 16, 15) kernels (Supabase currently uses 15), along with 404 PostgreSQL extensions out of the box, on mainstream Linux distros, with production-grade HA PostgreSQL, MinIO, the Prometheus & Grafana stack for observability, and Nginx as a reverse proxy.

Since most of the supabase maintained extensions are not available in the official PGDG repo, we have compiled all the RPM/DEBs for these extensions and put them in the Pigsty repo:

  • pg_graphql: GraphQL support in PostgreSQL (Rust extension via PIGSTY)
  • pg_jsonschema: JSON Schema validation (Rust extension via PIGSTY)
  • wrappers: Supabase’s foreign data source wrappers bundle (Rust extension via PIGSTY)
  • index_advisor: Query index advisor (SQL extension via PIGSTY)
  • pg_net: Async non-blocking HTTP/HTTPS requests in SQL (C extension via PIGSTY)
  • vault: Store encrypted credentials in Vault (C extension via PIGSTY)
  • pgjwt: PostgreSQL implementation of JSON Web Token API (SQL extension via PIGSTY)
  • pgsodium: Table data encryption (TDE) storage (C extension via PIGSTY)
  • supautils: Secure database clusters in cloud environments (C extension via PIGSTY)
  • pg_plan_filter: Block specific queries using execution plan cost filtering (C extension via PIGSTY)

Everything is under your control, you have the ability and freedom to scale PGSQL, MinIO, and Supabase itself. And take full advantage of the performance and cost advantages of modern hardware like Gen5 NVMe SSD.

All you need to do is prepare a VM, run several commands, and wait for about 10 minutes…


Get Started

First, download & install pigsty as usual, with the supa config template:

curl -fsSL https://repo.pigsty.io/get | bash; cd ~/pigsty 
./bootstrap                # install ansible
./configure -c app/supa    # use supabase config (please CHANGE CREDENTIALS in pigsty.yml)
vi pigsty.yml              # edit config file
./install.yml              # install pigsty

Please change the pigsty.yml config file according to your needs before deploying Supabase (especially the credentials). For dev/test/demo purposes, we will just skip that and come back to it later.

Then, run docker.yml & app.yml to launch the stateless part of supabase.

./docker.yml               # install docker compose
./app.yml                  # launch supabase stateless part with docker

You can access the supabase API / Web UI directly through port 8000/8443.

With configured DNS, or a local /etc/hosts entry, you can also use the default supa.pigsty domain name via the 80/443 infra portal.

Credentials for Supabase Studio: supabase : pigsty


Architecture

Pigsty’s supabase is based on the Supabase Docker Compose Template, with some slight modifications to fit into Pigsty’s default ACL model.

The stateful part of this template is replaced by Pigsty’s managed PostgreSQL cluster and MinIO cluster. The container part is stateless, so you can launch / destroy / run multiple supabase containers on the same stateful PGSQL / MINIO cluster simultaneously to scale out.

The built-in supa.yml config template will create a single-node supabase, with a singleton PostgreSQL and SNSD MinIO server. You can use Multinode PostgreSQL Clusters and MNMD MinIO Clusters / external S3 service instead in production, we will cover that later.


Config Detail

Here is a checklist for self-hosting:

  • Hardware: necessary VM/BM resources, at least one node; 3-4 are recommended for HA.
  • Linux OS: a Linux x86_64 server with a freshly installed OS, check compatible distros
  • Network: a static IPv4 address which can be used as the node identity
  • Admin User: nopass ssh & sudo are recommended for the admin user
  • Conf Template: use the supa config template if you don’t know how to manually configure pigsty

The built-in conf/app/supa.yml config template is shown below.


The supa Config Template
all:
  children:

    # the supabase stateless (default username & password: supabase/pigsty)
    supa:
      hosts:
        10.10.10.10: {}
      vars:
        app: supabase # specify app name (supa) to be installed (in the apps)
        apps:         # define all applications
          supabase:   # the definition of supabase app
            conf:     # override /opt/supabase/.env
              # IMPORTANT: CHANGE JWT_SECRET AND REGENERATE CREDENTIALS ACCORDINGLY!!!
              # https://supabase.com/docs/guides/self-hosting/docker#securing-your-services
              JWT_SECRET: your-super-secret-jwt-token-with-at-least-32-characters-long
              ANON_KEY: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJhbm9uIiwKICAgICJpc3MiOiAic3VwYWJhc2UtZGVtbyIsCiAgICAiaWF0IjogMTY0MTc2OTIwMCwKICAgICJleHAiOiAxNzk5NTM1NjAwCn0.dc_X5iR_VP_qT0zsiyj_I_OZ2T9FtRU2BBNWN8Bu4GE
              SERVICE_ROLE_KEY: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJzZXJ2aWNlX3JvbGUiLAogICAgImlzcyI6ICJzdXBhYmFzZS1kZW1vIiwKICAgICJpYXQiOiAxNjQxNzY5MjAwLAogICAgImV4cCI6IDE3OTk1MzU2MDAKfQ.DaYlNEoUrrEn2Ig7tqibS-PHK5vgusbcbo7X36XVt4Q
              DASHBOARD_USERNAME: supabase
              DASHBOARD_PASSWORD: pigsty

              # postgres connection string (use the correct ip and port)
              POSTGRES_HOST: 10.10.10.10      # point to the local postgres node
              POSTGRES_PORT: 5436             # access via the 'default' service, which always route to the primary postgres
              POSTGRES_DB: postgres           # the supabase underlying database
              POSTGRES_PASSWORD: DBUser.Supa  # password for supabase_admin and multiple supabase users

              # expose supabase via domain name
              SITE_URL: https://supa.pigsty                # <------- Change This to your external domain name
              API_EXTERNAL_URL: https://supa.pigsty        # <------- Otherwise the storage api may not work!
              SUPABASE_PUBLIC_URL: https://supa.pigsty     # <------- DO NOT FORGET TO PUT IT IN infra_portal!

              # if using s3/minio as file storage
              S3_BUCKET: supa
              S3_ENDPOINT: https://sss.pigsty:9000
              S3_ACCESS_KEY: supabase
              S3_SECRET_KEY: S3User.Supabase
              S3_FORCE_PATH_STYLE: true
              S3_PROTOCOL: https
              S3_REGION: stub
              MINIO_DOMAIN_IP: 10.10.10.10  # sss.pigsty domain name will resolve to this ip statically

              # if using SMTP (optional)
              #SMTP_ADMIN_EMAIL: admin@example.com
              #SMTP_HOST: supabase-mail
              #SMTP_PORT: 2500
              #SMTP_USER: fake_mail_user
              #SMTP_PASS: fake_mail_password
              #SMTP_SENDER_NAME: fake_sender
              #ENABLE_ANONYMOUS_USERS: false


    # infra cluster for proxy, monitor, alert, etc..
    infra: { hosts: { 10.10.10.10: { infra_seq: 1 } } }

    # etcd cluster for ha postgres
    etcd: { hosts: { 10.10.10.10: { etcd_seq: 1 } }, vars: { etcd_cluster: etcd } }

    # minio cluster, s3 compatible object storage
    minio: { hosts: { 10.10.10.10: { minio_seq: 1 } }, vars: { minio_cluster: minio } }

    # pg-meta, the underlying postgres database for supabase
    pg-meta:
      hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }
      vars:
        pg_cluster: pg-meta
        pg_users:
          # supabase roles: anon, authenticated, dashboard_user
          - { name: anon           ,login: false }
          - { name: authenticated  ,login: false }
          - { name: dashboard_user ,login: false ,replication: true ,createdb: true ,createrole: true }
          - { name: service_role   ,login: false ,bypassrls: true }
          # supabase users: please use the same password
          - { name: supabase_admin             ,password: 'DBUser.Supa' ,pgbouncer: true ,inherit: true   ,roles: [ dbrole_admin ] ,superuser: true ,replication: true ,createdb: true ,createrole: true ,bypassrls: true }
          - { name: authenticator              ,password: 'DBUser.Supa' ,pgbouncer: true ,inherit: false  ,roles: [ dbrole_admin, authenticated ,anon ,service_role ] }
          - { name: supabase_auth_admin        ,password: 'DBUser.Supa' ,pgbouncer: true ,inherit: false  ,roles: [ dbrole_admin ] ,createrole: true }
          - { name: supabase_storage_admin     ,password: 'DBUser.Supa' ,pgbouncer: true ,inherit: false  ,roles: [ dbrole_admin, authenticated ,anon ,service_role ] ,createrole: true }
          - { name: supabase_functions_admin   ,password: 'DBUser.Supa' ,pgbouncer: true ,inherit: false  ,roles: [ dbrole_admin ] ,createrole: true }
          - { name: supabase_replication_admin ,password: 'DBUser.Supa' ,replication: true ,roles: [ dbrole_admin ]}
          - { name: supabase_read_only_user    ,password: 'DBUser.Supa' ,bypassrls: true ,roles: [ dbrole_readonly, pg_read_all_data ] }
        pg_databases:
          - name: postgres
            baseline: supabase.sql
            owner: supabase_admin
            comment: supabase postgres database
            schemas: [ extensions ,auth ,realtime ,storage ,graphql_public ,supabase_functions ,_analytics ,_realtime ]
            extensions:
              - { name: pgcrypto  ,schema: extensions } # cryptographic functions
              - { name: pg_net    ,schema: extensions } # async HTTP
              - { name: pgjwt     ,schema: extensions } # json web token API for postgres
              - { name: uuid-ossp ,schema: extensions } # generate universally unique identifiers (UUIDs)
              - { name: pgsodium        }               # pgsodium is a modern cryptography library for Postgres.
              - { name: supabase_vault  }               # Supabase Vault Extension
              - { name: pg_graphql      }               # pg_graphql: GraphQL support
              - { name: pg_jsonschema   }               # pg_jsonschema: Validate json schema
              - { name: wrappers        }               # wrappers: FDW collections
              - { name: http            }               # http: allows web page retrieval inside the database.
              - { name: pg_cron         }               # pg_cron: Job scheduler for PostgreSQL
              - { name: timescaledb     }               # timescaledb: Enables scalable inserts and complex queries for time-series data
              - { name: pg_tle          }               # pg_tle: Trusted Language Extensions for PostgreSQL
              - { name: vector          }               # pgvector: the vector similarity search
              - { name: pgmq            }               # pgmq: A lightweight message queue like AWS SQS and RSMQ
        # supabase required extensions
        pg_libs: 'timescaledb, plpgsql, plpgsql_check, pg_cron, pg_net, pg_stat_statements, auto_explain, pg_tle, plan_filter'
        pg_parameters:
          cron.database_name: postgres
          pgsodium.enable_event_trigger: off
        pg_hba_rules: # supabase hba rules, require access from docker network
          - { user: all ,db: postgres  ,addr: intra         ,auth: pwd ,title: 'allow supabase access from intranet'    }
          - { user: all ,db: postgres  ,addr: 172.17.0.0/16 ,auth: pwd ,title: 'allow access from local docker network' }
        node_crontab: [ '00 01 * * * postgres /pg/bin/pg-backup full' ] # make a full backup every 1am


  #==============================================================#
  # Global Parameters
  #==============================================================#
  vars:
    version: v3.4.1                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default|china|europe
    node_tune: oltp                   # node tuning specs: oltp,olap,tiny,crit
    pg_conf: oltp.yml                 # pgsql tuning specs: {oltp,olap,tiny,crit}.yml

    docker_enabled: true              # enable docker on app group
    #docker_registry_mirrors: ["https://docker.1ms.run"] # use mirror in mainland china

    proxy_env:                        # global proxy env when downloading packages & pull docker images
      no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.*,*.tsinghua.edu.cn"
      #http_proxy:  127.0.0.1:12345 # add your proxy env here for downloading packages or pull images
      #https_proxy: 127.0.0.1:12345 # usually the proxy is format as http://user:pass@proxy.xxx.com
      #all_proxy:   127.0.0.1:12345

    certbot_email: your@email.com     # your email address for applying free let's encrypt ssl certs
    infra_portal:                     # domain names and upstream servers
      home         : { domain: h.pigsty }
      grafana      : { domain: g.pigsty ,endpoint: "${admin_ip}:3000" , websocket: true }
      prometheus   : { domain: p.pigsty ,endpoint: "${admin_ip}:9090" }
      alertmanager : { domain: a.pigsty ,endpoint: "${admin_ip}:9093" }
      minio        : { domain: m.pigsty ,endpoint: "10.10.10.10:9001", https: true, websocket: true }
      blackbox     : { endpoint: "${admin_ip}:9115" }
      loki         : { endpoint: "${admin_ip}:3100" }  # expose supa studio UI and API via nginx
      supa :                          # nginx server config for supabase
        domain: supa.pigsty           # REPLACE WITH YOUR OWN DOMAIN!
        endpoint: "10.10.10.10:8000"  # supabase service endpoint: IP:PORT
        websocket: true               # add websocket support
        certbot: supa.pigsty          # certbot cert name, apply with `make cert`

    #----------------------------------#
    # Credential: CHANGE THESE PASSWORDS
    #----------------------------------#
    #grafana_admin_username: admin
    grafana_admin_password: pigsty
    #pg_admin_username: dbuser_dba
    pg_admin_password: DBUser.DBA
    #pg_monitor_username: dbuser_monitor
    pg_monitor_password: DBUser.Monitor
    #pg_replication_username: replicator
    pg_replication_password: DBUser.Replicator
    #patroni_username: postgres
    patroni_password: Patroni.API
    #haproxy_admin_username: admin
    haproxy_admin_password: pigsty
    #minio_access_key: minioadmin
    minio_secret_key: minioadmin      # minio root secret key, `minioadmin` by default, also change pgbackrest_repo.minio.s3_key_secret

    # use minio as supabase file storage, single node single driver mode for demonstration purpose
    minio_buckets: [ { name: pgsql }, { name: supa } ]
    minio_users:
      - { access_key: dba , secret_key: S3User.DBA, policy: consoleAdmin }
      - { access_key: pgbackrest , secret_key: S3User.Backup,   policy: readwrite }
      - { access_key: supabase   , secret_key: S3User.Supabase, policy: readwrite }
    minio_endpoint: https://sss.pigsty:9000    # explicit overwrite minio endpoint with haproxy port
    node_etc_hosts: ["10.10.10.10 sss.pigsty"] # domain name to access minio from all nodes (required)

    # use minio as default backup repo for PostgreSQL
    pgbackrest_method: minio          # pgbackrest repo method: local,minio,[user-defined...]
    pgbackrest_repo:                  # pgbackrest repo: https://pgbackrest.org/configuration.html#section-repository
      local:                          # default pgbackrest repo with local posix fs
        path: /pg/backup              # local backup directory, `/pg/backup` by default
        retention_full_type: count    # retention full backups by count
        retention_full: 2             # keep 2, at most 3 full backup when using local fs repo
      minio:                          # optional minio repo for pgbackrest
        type: s3                      # minio is s3-compatible, so s3 is used
        s3_endpoint: sss.pigsty       # minio endpoint domain name, `sss.pigsty` by default
        s3_region: us-east-1          # minio region, us-east-1 by default, useless for minio
        s3_bucket: pgsql              # minio bucket name, `pgsql` by default
        s3_key: pgbackrest            # minio user access key for pgbackrest
        s3_key_secret: S3User.Backup  # minio user secret key for pgbackrest <------------------ HEY, DID YOU CHANGE THIS?
        s3_uri_style: path            # use path style uri for minio rather than host style
        path: /pgbackrest             # minio backup path, default is `/pgbackrest`
        storage_port: 9000            # minio port, 9000 by default
        storage_ca_file: /etc/pki/ca.crt  # minio ca file path, `/etc/pki/ca.crt` by default
        block: y                      # Enable block incremental backup
        bundle: y                     # bundle small files into a single file
        bundle_limit: 20MiB           # Limit for file bundles, 20MiB for object storage
        bundle_size: 128MiB           # Target size for file bundles, 128MiB for object storage
        cipher_type: aes-256-cbc      # enable AES encryption for remote backup repo
        cipher_pass: pgBackRest       # AES encryption password, default is 'pgBackRest'  <----- HEY, DID YOU CHANGE THIS?
        retention_full_type: time     # retention full backup by time on minio repo
        retention_full: 14            # keep full backup for last 14 days

    pg_version: 17
    repo_extra_packages: [pg17-core ,pg17-time ,pg17-gis ,pg17-rag ,pg17-fts ,pg17-olap ,pg17-feat ,pg17-lang ,pg17-type ,pg17-util ,pg17-func ,pg17-admin ,pg17-stat ,pg17-sec ,pg17-fdw ,pg17-sim ,pg17-etl ]
    pg_extensions: [ pg17-time ,pg17-gis ,pg17-rag ,pg17-fts ,pg17-feat ,pg17-lang ,pg17-type ,pg17-util ,pg17-func ,pg17-admin ,pg17-stat ,pg17-sec ,pg17-fdw ,pg17-sim ,pg17-etl, pg_mooncake, pg_analytics, pg_parquet ] #,pg17-olap]

For advanced topics, we may need to modify the configuration file to fit our needs.


Security Enhancement

For security reasons, you should change the default passwords in the pigsty.yml config file.

Supabase will use PostgreSQL & MinIO as its backend, so also change the following passwords for supabase business users:

  • pg_users: passwords for supabase business users in postgres
  • minio_users: passwords for MinIO business users

pgBackRest ships backups and WAL archives to MinIO, so also change the corresponding repository passwords (pgbackrest_repo.minio.s3_key_secret and cipher_pass in the config above).

PLEASE check Supabase Self-Hosting: Generate API Keys to generate the supabase credentials:

  • jwt_secret: a secret key with at least 40 characters
  • anon_key: a JWT token generated for anonymous users, based on jwt_secret
  • service_role_key: a JWT token generated for elevated service roles, based on jwt_secret
  • dashboard_username: supabase studio web portal username, supabase by default
  • dashboard_password: supabase studio web portal password, pigsty by default
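
The official docs provide a web-based key generator; alternatively, here is a minimal sketch that signs an HS256 JWT with plain openssl on Linux (run it once with role anon and once with role service_role; the issuer and 5-year expiry are assumptions):

b64url() { openssl base64 -A | tr '+/' '-_' | tr -d '='; }   # base64url-encode stdin
JWT_SECRET="your-super-secret-jwt-token-with-at-least-32-characters-long"
HEADER=$(printf '{"alg":"HS256","typ":"JWT"}' | b64url)
PAYLOAD=$(printf '{"role":"anon","iss":"supabase","iat":%s,"exp":%s}' "$(date +%s)" "$(date -d '+5 years' +%s)" | b64url)
SIG=$(printf '%s.%s' "$HEADER" "$PAYLOAD" | openssl dgst -sha256 -hmac "$JWT_SECRET" -binary | b64url)
echo "${HEADER}.${PAYLOAD}.${SIG}"   # use as ANON_KEY / SERVICE_ROLE_KEY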

If you have changed the default passwords for PostgreSQL and MinIO, you have to update the corresponding parameters in the Supabase app configuration as well. A sketch of the entries to keep in sync:
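
all:
  children:
    supa:
      vars:
        apps:
          supabase:
            conf:                              # keep these in sync with pg_users / minio_users
              POSTGRES_PASSWORD: DBUser.Supa   # must match the supabase user passwords in pg-meta
              S3_ACCESS_KEY: supabase          # must match the supabase access key in minio_users
              S3_SECRET_KEY: S3User.Supabase   # must match the supabase secret key in minio_users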


Domain Name and HTTPS

If you’re using Supabase on your local machine or within a LAN, you can directly connect to Supabase via IP:Port through Kong’s exposed HTTP port 8000.

You can use a locally resolved domain name, but for serious production deployments, we recommend using a real domain name + HTTPS to access Supabase. In this case, you typically need to prepare the following:

  • Your server should have a public IP address
  • Purchase a domain name, use the DNS resolution services provided by cloud/DNS/CDN providers to point it to your installation node’s public IP (lower alternative: local /etc/hosts)
  • Apply for a certificate, using free HTTPS certificates issued by certificate authorities like Let’s Encrypt for encrypted communication (lower alternative: default self-signed certificates, manually trusted)

You can refer to the certbot tutorial to apply for a free HTTPS certificate. Here we assume your custom domain name is supa.pigsty.cc; you should then modify the supa domain in infra_portal like this:

all:
  vars:
    infra_portal:
      supa :
        domain: supa.pigsty.cc        # replace with your own domain!
        endpoint: "10.10.10.10:8000"
        websocket: true
        certbot: supa.pigsty.cc       # certificate name, usually the same as the domain name

If the domain name has been resolved to your server’s public IP, you can automatically apply for and install the certificate by executing the following command in the Pigsty home directory:

make cert

In addition to the Pigsty component passwords, you also need to modify Supabase’s domain-related configurations: SITE_URL, API_EXTERNAL_URL, and SUPABASE_PUBLIC_URL.

Set them to your custom domain name, for example supa.pigsty.cc, then apply the configuration again:

./app.yml -t app_config,app_launch

As a lower alternative, you can use a local domain name to access Supabase. In that case, configure the resolution of supa.pigsty in your client’s /etc/hosts or LAN DNS, pointing it to the public IP address of the installation node. Nginx on the Pigsty admin node will issue a self-signed certificate for this domain (the browser will show “Not Secure”) and forward the request to port 8000 of Kong, where it is processed by Supabase.


MinIO or External S3

Pigsty’s self-hosted Supabase uses a local SNSD MinIO server by default, which Supabase itself uses for object storage and PostgreSQL uses for backups. For production use, you should consider using an HA MNMD MinIO cluster or an external S3-compatible service instead.

We recommend using an external S3 when:

  • You have just one single server available: external S3 then gives you a minimal disaster recovery guarantee, with RTO in hours and RPO in MBs.
  • You are operating in the cloud: using S3 directly is recommended rather than expensively wrapping EBS volumes with MinIO.

The terraform/spec/aliyun-meta-s3.tf provides an example of how to provision a single node along with an S3 bucket.

To use an external S3 compatible service, you’ll have to update two related references in the pigsty.yml config.

First, let’s update the S3 related configurations in all.children.supa.vars.apps.[supabase].conf, pointing it to the external S3 compatible service:

# if using s3/minio as file storage
S3_BUCKET: supa
S3_ENDPOINT: https://sss.pigsty:9000
S3_ACCESS_KEY: supabase
S3_SECRET_KEY: S3User.Supabase
S3_FORCE_PATH_STYLE: true
S3_PROTOCOL: https
S3_REGION: stub
MINIO_DOMAIN_IP: 10.10.10.10  # sss.pigsty domain name will resolve to this ip statically

Then, reload the supabase service with the following command:

./app.yml -t app_config,app_launch

You can also use S3 as the backup repository for PostgreSQL: add a new aliyun backup repository definition in all.vars.pgbackrest_repo:

all:
  vars:
    pgbackrest_method: aliyun          # pgbackrest backup method: local,minio,[other user-defined repositories...]
    pgbackrest_repo:                   # pgbackrest backup repository: https://pgbackrest.org/configuration.html#section-repository
      aliyun:                          # define a new backup repository aliyun
        type: s3                       # aliyun oss is s3-compatible object storage
        s3_endpoint: oss-cn-beijing-internal.aliyuncs.com
        s3_region: oss-cn-beijing 
        s3_bucket: pigsty-oss
        s3_key: xxxxxxxxxxxxxx
        s3_key_secret: xxxxxxxx
        s3_uri_style: host
        
        path: /pgbackrest
        bundle: y
        cipher_type: aes-256-cbc
        cipher_pass: PG.${pg_cluster}   # set a cipher password bound to the cluster name
        retention_full_type: time 
        retention_full: 14

Then, specify the aliyun backup repository in all.vars.pgbackrest_method, and reset the pgBackRest backup:

./pgsql.yml -t pgbackrest

Pigsty will switch the backup repository to the external object storage.


Sending Mail with SMTP

Some Supabase features require email. For production use, we recommend an external SMTP service, since self-hosted SMTP servers often result in rejected or spam-flagged emails.

To do this, modify the Supabase configuration and add SMTP credentials:

all:
  children:
    supa:            # supa group
      vars:          # supa group vars
        apps:        # supa group app list
          supabase:  # the supabase app
            conf:    # the supabase app conf entries
              SMTP_HOST: smtpdm.aliyun.com:80
              SMTP_PORT: 80
              SMTP_USER: no_reply@mail.your.domain.com
              SMTP_PASS: your_email_user_password
              SMTP_SENDER_NAME: MySupabase
              SMTP_ADMIN_EMAIL: adminxxx@mail.your.domain.com
              ENABLE_ANONYMOUS_USERS: false

And don’t forget to reload the supabase service with ./app.yml -t app_config,app_launch.


Backup Strategy

In the supabase template, Pigsty has already configured a daily full backup at 01:00 AM; you can refer to backup/restore to modify the backup strategy.

all:
  children:
    pg-meta:
      hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }
      vars:
        pg_cluster: pg-meta  # daily full backup at 01:00 AM
        node_crontab: [ '00 01 * * * postgres /pg/bin/pg-backup full' ]

Then, apply the Crontab configuration to the nodes with the following command:

./node.yml -t node_crontab
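
For example, a sketch of a weekly full backup plus daily incremental backups (assuming pg-backup accepts the same full/diff/incr arguments as pgBackRest):

all:
  children:
    pg-meta:
      vars:   # full backup on Monday 1AM, incremental backups on other days
        node_crontab:
          - '00 01 * * 1 postgres /pg/bin/pg-backup full'
          - '00 01 * * 0,2-6 postgres /pg/bin/pg-backup incr'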

For more about backup strategies, please refer to Backup Strategy.


True High Availability

The default single-node deployment (with external S3) provides a minimal disaster recovery guarantee, with RTO in hours and RPO in MBs.

To achieve RTO < 30s and zero data loss, you need a multi-node high availability cluster with at least 3 nodes.

This involves high availability for these components:

  • ETCD: DCS requires at least three nodes to tolerate one node failure.
  • PGSQL: PGSQL synchronous commit mode recommends at least three nodes.
  • INFRA: It’s good to have two or three copies of the observability stack.
  • Supabase itself can also have multiple replicas to achieve high availability.

We recommend referring to the trio and safe config templates to upgrade your cluster to three nodes or more.

In this case, you also need to modify the access points for PostgreSQL and MinIO to use DNS / L2 VIP / HAProxy HA access points, as sketched below.
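
For example, a three-node PostgreSQL cluster with an L2 VIP might look like this (a sketch; the IPs, interface name, and VIP address are placeholders):

pg-meta:
  hosts:
    10.10.10.10: { pg_seq: 1, pg_role: primary }
    10.10.10.11: { pg_seq: 2, pg_role: replica }
    10.10.10.12: { pg_seq: 3, pg_role: replica }
  vars:
    pg_cluster: pg-meta
    pg_vip_enabled: true            # bind an L2 VIP to the cluster primary
    pg_vip_address: 10.10.10.2/24   # VIP used as a stable write access point
    pg_vip_interface: eth0          # network interface carrying the VIP

Point Supabase’s POSTGRES_HOST (and the MinIO endpoint) at the VIP or HAProxy service port instead of a fixed node IP.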