PIGSTY

Multi-Node

How to install Pigsty on multiple nodes

There is a configuration Tutorial for expanding Pigsty from one node to multiple nodes, but the easiest way is always to plan everything up front and provision it all in one pass.


1-node Setup

We already illustrated the 1-node installation in the Quick Start section, which may be the simplest setup.

| ID | IP Address  | NODE | PGSQL     | INFRA   | ETCD   |
|----|-------------|------|-----------|---------|--------|
| 1  | 10.10.10.10 | meta | pg-meta-1 | infra-1 | etcd-1 |

It is not advised to put all your eggs in one basket, but even this one-node setup can be used in production, as long as an external MinIO / S3 / NFS... service is configured as the remote backup repository for PG.

There are lots of 1-node config templates for your reference.
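
For orientation, the skeleton of such a 1-node inventory looks roughly like the following. This is a simplified sketch rather than a copy of any particular template: the real templates also define users, databases, repos, and many other parameters.

```yaml
all:
  children:
    infra:   { hosts: { 10.10.10.10: { infra_seq: 1 } } }   # nginx / prometheus / grafana / ...
    etcd:    { hosts: { 10.10.10.10: { etcd_seq: 1 } }, vars: { etcd_cluster: etcd } }
    pg-meta:                                                 # single-instance postgres cluster
      hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }
      vars:  { pg_cluster: pg-meta }
  vars:
    admin_ip: 10.10.10.10
```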


2-node Setup

The semi-HA setup

A two-node setup enables database replication and semi-HA capabilities:

| ID | IP Address  | NODE   | PGSQL     | INFRA   | ETCD   |
|----|-------------|--------|-----------|---------|--------|
| 1  | 10.10.10.10 | meta   | pg-meta-1 | infra-1 | etcd-1 |
| 2  | 10.10.10.11 | node-1 | pg-meta-2 |         |        |

While more robust than a single-node setup, this arrangement has limitations:

  • No automatic failover if the meta node (which runs infra and etcd) fails: the surviving instance on node-1 must be promoted manually
  • Automatic failover works if node-1 fails: the instance on the meta node takes over automatically

This "semi-HA" setup can only auto-recover from specific node failures.

You can use the dual.yml config template, and provision the required VMs for this environment with Vagrant using dual.rb.
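
Compared with the 1-node inventory, the essential change is that the pg-meta cluster now spans both nodes. A simplified sketch (the role assignment below is illustrative; check dual.yml for the actual primary/replica placement):

```yaml
all:
  children:
    infra:   { hosts: { 10.10.10.10: { infra_seq: 1 } } }
    etcd:    { hosts: { 10.10.10.10: { etcd_seq: 1 } }, vars: { etcd_cluster: etcd } }
    pg-meta:
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: primary }   # illustrative: dual.yml may place roles differently
        10.10.10.11: { pg_seq: 2, pg_role: replica }
      vars: { pg_cluster: pg-meta }
  vars:
    admin_ip: 10.10.10.10
```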


3-node Setup

The true HA setup

A true HA setup that can automatically recover from any single node failure:

| ID | IP Address  | NODE   | PGSQL     | INFRA   | ETCD   |
|----|-------------|--------|-----------|---------|--------|
| 1  | 10.10.10.10 | node-1 | pg-meta-1 | infra-1 | etcd-1 |
| 2  | 10.10.10.11 | node-2 | pg-meta-2 | infra-2 | etcd-2 |
| 3  | 10.10.10.12 | node-3 | pg-meta-3 | infra-3 | etcd-3 |

You can use the trio.yml config template, and provision the required VMs for this environment with Vagrant using trio.rb.
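
Here infra, etcd, and pg-meta all span the three nodes, so each component can survive one node failure. A simplified sketch (see trio.yml for the authoritative definition):

```yaml
all:
  children:
    infra:
      hosts:
        10.10.10.10: { infra_seq: 1 }
        10.10.10.11: { infra_seq: 2 }
        10.10.10.12: { infra_seq: 3 }
    etcd:                                  # 3-member etcd tolerates one node failure
      hosts:
        10.10.10.10: { etcd_seq: 1 }
        10.10.10.11: { etcd_seq: 2 }
        10.10.10.12: { etcd_seq: 3 }
      vars: { etcd_cluster: etcd }
    pg-meta:                               # 1 primary + 2 replicas
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: primary }
        10.10.10.11: { pg_seq: 2, pg_role: replica }
        10.10.10.12: { pg_seq: 3, pg_role: replica }
      vars: { pg_cluster: pg-meta }
  vars:
    admin_ip: 10.10.10.10
```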


4-node Sandbox

This is a sandbox demo environment used in Pigsty, which has one infra node and three extra data nodes:

| ID | IP Address  | NODE   | PGSQL     | INFRA   | ETCD   | MINIO   |
|----|-------------|--------|-----------|---------|--------|---------|
| 1  | 10.10.10.10 | meta   | pg-meta-1 | infra-1 | etcd-1 | minio-1 |
| 2  | 10.10.10.11 | node-1 | pg-test-1 |         |        |         |
| 3  | 10.10.10.12 | node-2 | pg-test-2 |         |        |         |
| 4  | 10.10.10.13 | node-3 | pg-test-3 |         |        |         |

You can use the full.yml config template, and provision the required VMs for this environment with Vagrant using full.rb or Terraform using full.tf.
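
The main addition over the previous setups is the 3-instance pg-test cluster on the data nodes. A simplified sketch of just that cluster (full.yml also defines minio, users, databases, and services, all omitted here; the role assignment is illustrative):

```yaml
pg-test:
  hosts:
    10.10.10.11: { pg_seq: 1, pg_role: primary }
    10.10.10.12: { pg_seq: 2, pg_role: replica }
    10.10.10.13: { pg_seq: 3, pg_role: replica }   # illustrative: full.yml may mark one instance for offline queries
  vars: { pg_cluster: pg-test }
```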


5-node Building

The pro.yml config template defines a 5-node building environment that covers the supported Linux distros:

IDIP AddressNODEPGSQLINFRAETCD
110.10.10.8el8el8-1infra-1
210.10.10.9el9el9-1infra-2etcd-2
310.10.10.12u12u12-1infra-3
410.10.10.22u22u22-1infra-4
510.10.10.24u24u24-1infra-5

You can provision this environment with Vagrant using pro.rb or Terraform using pro.tf.
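
Reading the table above as inventory, each distro node carries one infra member and its own single-instance PG cluster named after the distro. A rough sketch of that shape (not a copy of pro.yml; only two of the five PG clusters are shown):

```yaml
all:
  children:
    infra:
      hosts:
        10.10.10.8:  { infra_seq: 1 }
        10.10.10.9:  { infra_seq: 2 }
        10.10.10.12: { infra_seq: 3 }
        10.10.10.22: { infra_seq: 4 }
        10.10.10.24: { infra_seq: 5 }
    el8: { hosts: { 10.10.10.8: { pg_seq: 1, pg_role: primary } }, vars: { pg_cluster: el8 } }
    el9: { hosts: { 10.10.10.9: { pg_seq: 1, pg_role: primary } }, vars: { pg_cluster: el9 } }
    # ... d12 / u22 / u24 follow the same pattern
```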


36-node Simulation

A production simulation environment (simu.yml) with 36 nodes, covering all Pigsty components:

| IP Address  | SPEC  | NODE   | PGSQL       | INFRA   | ETCD   | MINIO   | REDIS        |
|-------------|-------|--------|-------------|---------|--------|---------|--------------|
| 10.10.10.10 | 8C32G | meta1  | pg-meta-1   | infra-1 |        |         |              |
| 10.10.10.11 | 8C32G | meta2  | pg-meta-2   | infra-2 |        |         |              |
| 10.10.10.12 | 2C4G  | pg12   | pg-v12-1    |         |        |         |              |
| 10.10.10.13 | 2C4G  | pg13   | pg-v13-1    |         |        |         |              |
| 10.10.10.14 | 2C4G  | pg14   | pg-v14-1    |         |        |         |              |
| 10.10.10.15 | 2C4G  | pg15   | pg-v15-1    |         |        |         |              |
| 10.10.10.16 | 2C4G  | pg16   | pg-v16-1    |         |        |         |              |
| 10.10.10.17 | 2C4G  | pg17   | pg-v17-1    |         |        |         |              |
| 10.10.10.18 | 2C4G  | proxy1 |             |         |        |         |              |
| 10.10.10.19 | 2C4G  | proxy2 |             |         |        |         |              |
| 10.10.10.21 | 2C4G  | minio1 |             |         | etcd-1 | minio-1 | redis-meta-1 |
| 10.10.10.22 | 2C4G  | minio2 |             |         | etcd-2 | minio-2 | redis-meta-2 |
| 10.10.10.23 | 2C4G  | minio3 |             |         | etcd-3 | minio-3 | redis-meta-3 |
| 10.10.10.24 | 2C4G  | minio4 |             |         | etcd-4 | minio-4 | redis-meta-4 |
| 10.10.10.25 | 2C4G  | minio5 |             |         | etcd-5 | minio-5 | redis-meta-5 |
| 10.10.10.40 | 1C2G  | node40 | pg-pitr-1   |         |        |         |              |
| 10.10.10.41 | 1C2G  | node41 | pg-test-1   |         |        |         | redis-test-1 |
| 10.10.10.42 | 1C2G  | node42 | pg-test-2   |         |        |         | redis-test-2 |
| 10.10.10.43 | 1C2G  | node43 | pg-test-3   |         |        |         | redis-test-3 |
| 10.10.10.44 | 1C2G  | node44 | pg-test-4   |         |        |         | redis-test-4 |
| 10.10.10.45 | 1C2G  | node45 | pg-src-1    |         |        |         | redis-src-1  |
| 10.10.10.46 | 1C2G  | node46 | pg-src-2    |         |        |         | redis-src-2  |
| 10.10.10.47 | 1C2G  | node47 | pg-src-3    |         |        |         | redis-src-3  |
| 10.10.10.48 | 1C2G  | node48 | pg-dst-1    |         |        |         | redis-dst-1  |
| 10.10.10.49 | 1C2G  | node49 | pg-dst-2    |         |        |         | redis-dst-2  |
| 10.10.10.50 | 1C2G  | node50 | pg-citus0-1 |         |        |         |              |
| 10.10.10.51 | 1C2G  | node51 | pg-citus0-2 |         |        |         |              |
| 10.10.10.52 | 1C2G  | node52 | pg-citus1-1 |         |        |         |              |
| 10.10.10.53 | 1C2G  | node53 | pg-citus1-2 |         |        |         |              |
| 10.10.10.54 | 1C2G  | node54 | pg-citus2-1 |         |        |         |              |
| 10.10.10.55 | 1C2G  | node55 | pg-citus2-2 |         |        |         |              |
| 10.10.10.56 | 1C2G  | node56 | pg-citus3-1 |         |        |         |              |
| 10.10.10.57 | 1C2G  | node57 | pg-citus3-2 |         |        |         |              |
| 10.10.10.58 | 1C2G  | node58 | pg-citus4-1 |         |        |         |              |
| 10.10.10.59 | 1C2G  | node59 | pg-citus4-2 |         |        |         |              |
| 10.10.10.88 | 4C8G  | test   |             |         |        |         |              |

You can provision this environment with Vagrant using simu.rb. The entire simulation can run on a single physical server (72C / 256G) using libvirt as the Vagrant VM provider. The environment includes:

  • 2 infra nodes, monitoring each other
  • 2 dedicated proxy nodes running haproxy
  • 5-node etcd cluster which tolerates 2 node failures, and 5-node redis sentinel cluster
  • 5-node minio cluster with 4 disks on each node
  • 10 postgres clusters: pg-v12 - pg-v17, pg-src, pg-dst, pg-pitr, pg-test
  • a 10-node citus cluster with 5 shards (sketched below)
  • standalone redis clusters redis-src and redis-dst, and a native redis cluster redis-test
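
As a rough illustration of how two of the more exotic pieces are declared, here is a simplified sketch of one citus shard group and the native redis cluster. Most parameters are omitted, and the port numbers and role assignments below are illustrative; see simu.yml for the real definitions.

```yaml
pg-citus0:                       # one of five citus shard groups (pg-citus0 .. pg-citus4)
  hosts:
    10.10.10.50: { pg_seq: 1, pg_role: primary }
    10.10.10.51: { pg_seq: 2, pg_role: replica }
  vars:
    pg_cluster: pg-citus0
    pg_mode: citus               # run this cluster in citus mode
    pg_shard: pg-citus           # shard name shared by all groups
    pg_group: 0                  # shard group number: 0..4

redis-test:                      # native redis cluster on the pg-test nodes
  hosts:
    10.10.10.41: { redis_node: 1, redis_instances: { 6379: {} } }   # ports illustrative
    10.10.10.42: { redis_node: 2, redis_instances: { 6379: {} } }
    10.10.10.43: { redis_node: 3, redis_instances: { 6379: {} } }
    10.10.10.44: { redis_node: 4, redis_instances: { 6379: {} } }
  vars: { redis_cluster: redis-test, redis_mode: cluster }
```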