
Hardware

Nodes, specs, disks, network, VIP, domain ...


Node

Pigsty currently runs on nodes with a Linux kernel and the x86_64 / aarch64 architecture.

A "node" refers to a resource that is SSH accessible and offers a bare Linux OS environment. It could be a physical machine, a virtual machine, or an OS-like container equipped with systemd, sudo and sshd.

Deploying Pigsty requires at least 1 node. You can prepare more and set everything up in one pass, or add nodes later. The minimum node spec is 1C1G; at least 2C2G is recommended. Higher is better, with no upper limit: parameters are automatically tuned based on available resources.
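For example, Pigsty ships tuning templates for different node sizes and workloads; a minimal sketch using the node_tune and pg_conf parameters (the values here are just one reasonable choice, verify against your version's reference):

node_tune: oltp      # node tuning template: tiny / oltp / olap / crit
pg_conf: oltp.yml    # matching postgres tuning template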

Use multiple nodes for production deployment

A functioning HA setup requires at least 3 nodes; 2 nodes give you a semi-HA setup.


Spec

How many nodes do you need? It depends on your resources and your requirements.

Single Node Setup

The simplest setup runs everything on a single node, with four essential modules installed:

ID   NODE     PGSQL       INFRA     ETCD
1    node-1   pg-meta-1   infra-1   etcd-1

This setup can be used for production if external S3/MinIO is configured for backup/PITR.
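As a sketch, such an external repository is declared through the pgbackrest parameters; the endpoint, bucket, and credentials below are placeholders to replace with your own:

pgbackrest_method: minio           # switch the backup repo from local disk to MinIO/S3
pgbackrest_repo:
  minio:
    type: s3                       # MinIO is S3-compatible
    s3_endpoint: sss.pigsty        # placeholder: your external MinIO/S3 endpoint
    s3_region: us-east-1
    s3_bucket: pgsql               # placeholder bucket name
    s3_key: pgbackrest             # placeholder access key
    s3_key_secret: S3User.Backup   # placeholder secret key
    path: /pgbackrest
    storage_port: 9000             # MinIO default port
    bundle: y                      # bundle small files into a single backup file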

Two Node Setup

A two-node setup enables database replication and semi-HA capabilities:

ID   NODE     PGSQL                 INFRA     ETCD
1    node-1   pg-meta-2 (replica)   infra-1   etcd-1
2    node-2   pg-meta-1 (primary)

While more robust than a single-node setup, its HA capability has limitations:

  • No automatic failover if node-1 fails - manual promotion of node-2 required
  • Automatic failover works if node-2 fails - node-1 gets promoted automatically

This "semi-HA" setup can only auto-recover from specific node failures.

Three Node Setup

A true HA setup that can automatically recover from any single node failure:

ID   NODE     PGSQL       INFRA     ETCD
1    node-1   pg-meta-1   infra-1   etcd-1
2    node-2   pg-meta-2   infra-2   etcd-2
3    node-3   pg-meta-3   infra-3   etcd-3
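A minimal inventory sketch for this layout, assuming the hypothetical addresses 10.10.10.10 - 10.10.10.12:

all:
  children:
    infra:
      hosts:
        10.10.10.10: { infra_seq: 1 }
        10.10.10.11: { infra_seq: 2 }
        10.10.10.12: { infra_seq: 3 }
    etcd:
      hosts:
        10.10.10.10: { etcd_seq: 1 }
        10.10.10.11: { etcd_seq: 2 }
        10.10.10.12: { etcd_seq: 3 }
      vars: { etcd_cluster: etcd }
    pg-meta:
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: primary }
        10.10.10.11: { pg_seq: 2, pg_role: replica }
        10.10.10.12: { pg_seq: 3, pg_role: replica }
      vars: { pg_cluster: pg-meta }

Any single node can fail: etcd keeps quorum with 2 of 3 members, and a surviving replica can be promoted automatically.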

Four Node Setup

The standard demonstration environment used by Pigsty's sandbox:

ID   NODE     PGSQL       INFRA     ETCD
1    node-1   pg-meta-1   infra-1   etcd-1
2    node-2   pg-test-1             etcd-2
3    node-3   pg-test-2             etcd-3
4    node-4   pg-test-3

Disk

Pigsty uses /data as the default data directory. If you have a dedicated main data disk, it is recommended to mount it there, and use /data1, /data2, ..., /dataN for extra disk drives.

Mount disks elsewhere?

If you mount disks elsewhere, you'll have to change the following parameters accordingly:

Name              Description                      Default
node_data         node main data directory         /data
pg_fs_main        postgres main data directory     /data
pg_fs_bkup        postgres backup data directory   /data/backups
etcd_data         etcd data directory              /data/etcd
prometheus_data   prometheus data directory        /data/prometheus
loki_data         loki data directory              /data/loki
minio_data        minio data directory             /data/minio
redis_data        redis data directory             /data/redis
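For example, if your main data disk is mounted at /data1 instead (a hypothetical layout), the overrides would look like:

node_data: /data1                    # node main data directory
pg_fs_main: /data1                   # postgres main data directory
pg_fs_bkup: /data1/backups           # postgres backup data directory
etcd_data: /data1/etcd               # etcd data directory
prometheus_data: /data1/prometheus   # prometheus data directory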

Network

Pigsty requires a static network to work: you should explicitly assign a fixed IPv4 address to each node.

Don't have a fixed IP?

127.0.0.1 can be used as a workaround in a single-node installation without a fixed IP address.

The IP address serves as the node's unique identifier; it should be the primary IP address bound to the primary network interface used for internal network communication.
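In the inventory, this identity is simply the host key; a sketch with a hypothetical intranet address:

nodes:
  hosts:
    10.10.10.11: { nodename: node-1 }   # fixed primary private IPv4 as the identifier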

Never use Public IP as identifier

Using public IP addresses as node identifiers can cause security and connectivity issues.

L2 VIPs require L2 networking

To use the optional node VIP and PG VIP features, ensure all nodes are located within the same L2 network.

Internet access is required for the standard (online) installation, but Pigsty can also be installed from an offline package, which requires no Internet access.


VIP

Pigsty supports optional L2 VIPs for NODE clusters (via keepalived) and PGSQL clusters (via vip-manager).

To use the L2 VIP features, you have to explicitly assign an L2 VIP address for each of them. This is straightforward on your own hardware, but may be an issue in a public cloud environment, where L2 networking is usually unavailable.
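A sketch of the relevant parameters (names follow Pigsty's NODE and PGSQL VIP settings; all addresses here are placeholders):

# node cluster L2 VIP, managed by keepalived
vip_enabled: true
vip_vrid: 128                   # virtual router id, must be unique within the L2 network
vip_address: 10.10.10.3         # placeholder node VIP
vip_interface: eth0             # network interface carrying the VIP

# postgres cluster L2 VIP, managed by vip-manager
pg_vip_enabled: true
pg_vip_address: 10.10.10.2/24   # placeholder PG VIP in ip/cidr notation
pg_vip_interface: eth0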


Domain

Pigsty uses local static domain names for the following services with a web UI. You can assign custom domain names to these services, or use real domain names: just change them in infra_portal (see the sketch below the table).

Domain     Name           Port     Component      Description
h.pigsty   home           80/443   Nginx          Default server, local repo
g.pigsty   grafana        3000     Grafana        Monitoring & visualization
p.pigsty   prometheus     9090     Prometheus     Time series DB
a.pigsty   alertmanager   9093     AlertManager   Alert aggregation & routing
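A sketch of the corresponding infra_portal definition (the ${admin_ip} placeholder resolves to your infra node's address, e.g. 10.10.10.10):

infra_portal:
  home:         { domain: h.pigsty }
  grafana:      { domain: g.pigsty, endpoint: "${admin_ip}:3000", websocket: true }
  prometheus:   { domain: p.pigsty, endpoint: "${admin_ip}:9090" }
  alertmanager: { domain: a.pigsty, endpoint: "${admin_ip}:9093" }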

Domain names are optional. To use them, it is your responsibility to add the following records to your /etc/hosts file (local static resolution), or to your internal DNS server / public DNS vendor.

10.10.10.10 h.pigsty g.pigsty p.pigsty a.pigsty