PIGSTY

Infra as Code

Declarative infrastructure and database management with YAML-driven configuration

Pigsty provides a declarative interface: describe everything in a config file, and Pigsty's idempotent playbooks bring it to the desired state. It works like Kubernetes CRDs & Operators, but for databases and infrastructure on any nodes: bare metal or virtual machines.

Infra as Code, Database as Code: a declarative API and idempotent playbooks make GitOps work like a charm.


Declare Module

You can declare modules on a single node:

# infra cluster for proxy, monitor, alert, etc...
infra: { hosts: { 10.10.10.10: { infra_seq: 1 } } }

# minio cluster, s3 compatible object storage
minio: { hosts: { 10.10.10.10: { minio_seq: 1 } }, vars: { minio_cluster: minio } }

# etcd cluster, DCS for HA postgres
etcd: { hosts: { 10.10.10.10: { etcd_seq: 1 } }, vars: { etcd_cluster: etcd } }

# postgres example cluster: pg-meta
pg-meta: { hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }, vars: { pg_cluster: pg-meta } }

And apply with playbooks:

./infra.yml -l infra    # init infra module on node 10.10.10.10
./etcd.yml  -l etcd     # init etcd  module on node 10.10.10.10
./minio.yml -l minio    # init minio module on node 10.10.10.10
./pgsql.yml -l pg-meta  # init pgsql module on node 10.10.10.10

Declare Cluster

To create a three-node HA postgres cluster with streaming replication:

pg-test:
  hosts:
    10.10.10.11: { pg_seq: 1, pg_role: primary }
    10.10.10.12: { pg_seq: 2, pg_role: replica }
    10.10.10.13: { pg_seq: 3, pg_role: replica }
  vars:
    pg_cluster: pg-test

And apply with:

./pgsql.yml -l pg-test  # init pg-test cluster
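
Because the playbooks are idempotent, scaling out is just another declaration: add a new host entry to the cluster definition (10.10.10.14 below is a hypothetical address) and re-run the playbook limited to that host:

pg-test:
  hosts:
    10.10.10.11: { pg_seq: 1, pg_role: primary }
    10.10.10.12: { pg_seq: 2, pg_role: replica }
    10.10.10.13: { pg_seq: 3, pg_role: replica }
    10.10.10.14: { pg_seq: 4, pg_role: replica }  # hypothetical new replica
  vars:
    pg_cluster: pg-test

./pgsql.yml -l 10.10.10.14  # init only the new replica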

Declare Cluster Internals

You can deeply customize a database cluster:

pg-meta:
  hosts:
    10.10.10.10: { pg_seq: 1, pg_role: primary }
  vars:
    pg_cluster: pg-meta
    pg_databases:
      - name: meta
        baseline: cmdb.sql
        comment: pigsty meta database
        schemas: [pigsty]
        extensions:
          - { name: adminpack, schema: pg_catalog }
          - { name: postgis, schema: public }
          - { name: timescaledb, schema: public }
    pg_users:
      - { name: dbuser_meta, password: DBUser.Meta, pgbouncer: true, roles: [dbrole_admin], comment: pigsty admin user }
      - { name: dbuser_view, password: DBUser.Viewer, pgbouncer: true, roles: [dbrole_readonly], comment: pigsty read-only user }
    pg_services:
      - { name: primary, port: 5433, dest: default }
      - { name: replica, port: 5434, dest: default, selector: "[]" }
      - { name: default, port: 5436, dest: postgres }
      - { name: offline, port: 5438, dest: postgres, selector: "[]" }
    pg_hba_rules:
      - { user: dbuser_view, db: all, addr: infra, auth: pwd, title: 'allow view user from infra nodes' }
    pgb_hba_rules:
      - { user: dbuser_view, db: all, addr: infra, auth: pwd, title: 'allow view user from infra nodes' }
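
Once applied, the declared services are exposed on the cluster nodes via haproxy on the ports above, so connecting through them would look roughly like this sketch, using the users, passwords, and database declared in this cluster:

psql postgres://dbuser_meta:DBUser.Meta@10.10.10.10:5433/meta    # primary (read-write) service
psql postgres://dbuser_view:DBUser.Viewer@10.10.10.10:5434/meta  # replica (read-only) service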

Declare Access Control

Define advanced access control rules:

pg_hba_rules:
  - { user: '${dbsu}', db: all, addr: local, auth: ident, title: 'dbsu access via local os user ident' }
  - { user: '${dbsu}', db: replication, addr: local, auth: ident, title: 'dbsu replication from local os ident' }
  - { user: '${repl}', db: replication, addr: '${ip}/32', auth: pwd, title: 'replicator replication from ${ip}' }
  - { user: '${repl}', db: postgres, addr: '${ip}/32', auth: pwd, title: 'replicator postgres db from ${ip}' }
  - { user: '${monitor}', db: all, addr: '${ip}/32', auth: pwd, title: 'monitor from ${ip}' }
  - { user: '${monitor}', db: all, addr: infra, auth: pwd, title: 'monitor from infra nodes' }
  - { user: '${admin}', db: all, addr: infra, auth: ssl, title: 'admin @ infra nodes with pwd & ssl' }
  - { user: '+dbrole_readonly', db: all, addr: '${vip}/32', auth: ssl, title: 'allow readonly role from ${vip} with ssl' }
  - { user: '+dbrole_offline', db: all, addr: '${vip}/32', auth: ssl, title: 'allow offline role from ${vip} with ssl' }
  - { user: dbuser_meta, db: meta, addr: '10.0.0.0/8', auth: ssl, title: 'allow meta user from 10.0.0.0/8 with ssl' }
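
As a rough sketch of how these rules render into pg_hba.conf: auth: pwd maps to the configured password method (scram-sha-256 by default), while auth: ssl additionally requires TLS, so the last rule above would come out approximately as:

# allow meta user from 10.0.0.0/8 with ssl
hostssl   meta   dbuser_meta   10.0.0.0/8   scram-sha-256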

Citus Distributed Cluster

Declare a horizontally distributed Citus cluster:

pg-citus0: # coordinator
  hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }
  vars:
    pg_cluster: pg-citus0
    pg_mode: citus
    pg_shard: pg-citus
    pg_primary_db: meta
    pg_users: [ { name: dbuser_meta, password: DBUser.Meta, pgbouncer: true, roles: [ dbrole_admin ] } ]
    pg_databases: [ { name: meta, extensions: [ { name: citus }, { name: postgis }, { name: timescaledb } ] } ]
    pg_hba_rules:
      - { user: 'all', db: all, addr: '10.10.10.0/24', auth: trust }
pg-citus1: # worker1
  hosts: { 10.10.10.11: { pg_seq: 1, pg_role: primary } }
  vars: { pg_cluster: pg-citus1, pg_mode: citus, pg_shard: pg-citus }
pg-citus2: # worker2
  hosts: { 10.10.10.12: { pg_seq: 1, pg_role: primary } }
  vars: { pg_cluster: pg-citus2, pg_mode: citus, pg_shard: pg-citus }
pg-citus3: # worker3
  hosts: { 10.10.10.13: { pg_seq: 1, pg_role: primary } }
  vars: { pg_cluster: pg-citus3, pg_mode: citus, pg_shard: pg-citus }
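
And apply the whole shard, coordinator and workers alike, in one run (a plain Ansible limit over the four groups):

./pgsql.yml -l pg-citus0,pg-citus1,pg-citus2,pg-citus3  # init coordinator & workers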

Redis Clusters

Declare different types of Redis clusters:

redis-ms: # redis classic primary-replica
  hosts: { 10.10.10.10: { redis_node: 1, redis_instances: { 6379: { }, 6380: { replica_of: '10.10.10.10 6379' } } } }
  vars: { redis_cluster: redis-ms, redis_password: 'redis.ms' }
redis-sentinel: # redis sentinel x3
  hosts:
    10.10.10.10: { redis_node: 1, redis_instances: { 26379: { sentinel_monitor: redis-src } } }
    10.10.10.11: { redis_node: 2, redis_instances: { 26379: { sentinel_monitor: redis-src } } }
    10.10.10.12: { redis_node: 3, redis_instances: { 26379: { sentinel_monitor: redis-src } } }
  vars: { redis_cluster: redis-sentinel, redis_password: 'redis.sentinel' }
redis-cluster: # native redis cluster: 3m x 3s
  hosts:
    10.10.10.10: { redis_node: 1, redis_instances: { 6379: { }, 6380: { } } }
    10.10.10.11: { redis_node: 2, redis_instances: { 6379: { }, 6380: { } } }
    10.10.10.12: { redis_node: 3, redis_instances: { 6379: { }, 6380: { } } }
  vars: { redis_cluster: redis-cluster, redis_password: 'redis.cluster', redis_mode: cluster, redis_max_memory: 64MB }
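
And apply with the redis playbook shipped alongside the playbooks above, one cluster at a time or all at once:

./redis.yml -l redis-ms        # init the classic primary-replica cluster
./redis.yml -l redis-sentinel  # init the sentinel cluster
./redis.yml -l redis-cluster   # init the native redis cluster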

Etcd Cluster

Declare a 3-node etcd consensus cluster:

etcd:
  hosts:
    10.10.10.10: { etcd_seq: 1 }
    10.10.10.11: { etcd_seq: 2 }
    10.10.10.12: { etcd_seq: 3 }
  vars:
    etcd_cluster: etcd
    etcd_safeguard: false  # safeguard disabled: allow purging a running etcd cluster
    etcd_clean: true       # purge existing etcd instances during init
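
And apply with the same etcd playbook used above:

./etcd.yml -l etcd  # init the three-node etcd cluster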

MinIO Cluster

Declare a 3-node MinIO object storage cluster:

minio:
  hosts:
    10.10.10.10: { minio_seq: 1 }
    10.10.10.11: { minio_seq: 2 }
    10.10.10.12: { minio_seq: 3 }
  vars:
    minio_cluster: minio
    minio_data: '/data/minio'
    minio_domain: sss.pigsty
    minio_buckets: [ { name: pgsql }, { name: infra }, { name: redis } ]
    minio_users:
      - { access_key: dba, secret_key: S3User.DBA, policy: consoleAdmin }
      - { access_key: pgbackrest, secret_key: S3User.Backup, policy: readwrite }
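
And apply with the same minio playbook used above:

./minio.yml -l minio  # init the three-node minio cluster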

Pigsty lets you describe your entire infrastructure declaratively and manage it as code, bringing consistency, repeatability, and scalability to your database and infrastructure operations.