# Hardware

Nodes, specs, disks, network, VIP, domains ...
## Node

Pigsty currently runs on nodes with a Linux kernel and `x86_64` / `aarch64` architecture.

A "node" refers to a resource that is accessible via SSH and offers a bare Linux OS environment.
It could be a physical machine, a virtual machine, or an OS-like container equipped with `systemd`, `sudo`, and `sshd`.
Deploying Pigsty requires at least one node; you can prepare more and set everything up in one pass, or add nodes later.
The minimum node spec is `1C1G`; at least `2C2G` is recommended.
Higher is better, with no upper limit: parameters are tuned automatically based on available resources.
> **Use multiple nodes for production deployment**
> A functioning HA setup requires at least 3 nodes; 2 nodes give you a semi-HA setup.
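For reference, here is a minimal sketch of what a three-node HA inventory could look like, following Pigsty's usual `pigsty.yml` conventions (IPs and cluster names are placeholders; Pigsty ships configuration templates for this topology, so treat this as an illustration rather than the canonical layout):

```yaml
all:
  children:
    infra:                    # admin node running the INFRA module
      hosts: { 10.10.10.10: { infra_seq: 1 } }
    etcd:                     # 3-member etcd cluster for HA consensus
      hosts:
        10.10.10.10: { etcd_seq: 1 }
        10.10.10.11: { etcd_seq: 2 }
        10.10.10.12: { etcd_seq: 3 }
      vars: { etcd_cluster: etcd }
    pg-meta:                  # 3-node postgres cluster: 1 primary + 2 replicas
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: primary }
        10.10.10.11: { pg_seq: 2, pg_role: replica }
        10.10.10.12: { pg_seq: 3, pg_role: replica }
      vars: { pg_cluster: pg-meta }
```

With three etcd members, consensus survives the loss of any single node, which is what makes fully automatic failover possible.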
## Spec

How many nodes do you need? It depends on your resources and your requirements.
### Two-Node Setup

A two-node setup enables database replication and semi-HA capability.
While more robust than a single-node setup, its HA has limitations:

- No automatic failover if `node-1` fails: manual promotion of `node-2` is required.
- Automatic failover works if `node-2` fails: `node-1` gets promoted automatically.

This "semi-HA" setup can only auto-recover from specific node failures.
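A minimal sketch of such a two-node inventory, following Pigsty's usual conventions (IPs and the cluster name are placeholders, and the role placement shown here is one illustrative layout):

```yaml
pg-meta:                      # two-node postgres cluster with semi-HA
  hosts:
    10.10.10.10: { pg_seq: 1, pg_role: replica }   # node-1: also runs infra & etcd
    10.10.10.11: { pg_seq: 2, pg_role: primary }   # node-2: primary
  vars: { pg_cluster: pg-meta }
```

The asymmetry comes from consensus placement: with etcd living on `node-1`, losing `node-2` leaves consensus intact so `node-1` can be promoted automatically, while losing `node-1` takes consensus down with it and requires manual intervention.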
## Disk

Pigsty uses `/data` as the default data directory. If you have a dedicated main data disk, it is recommended to mount it there,
and use `/data1`, `/data2`, ..., `/dataN` for additional disk drives.
> **Mount disks elsewhere?**
> If you mount your disks elsewhere, change the following parameters accordingly:
| Name | Description | Default |
|------|-------------|---------|
| `node_data` | node main data directory | `/data` |
| `pg_fs_main` | postgres main data directory | `/data` |
| `pg_fs_bkup` | postgres backup data directory | `/data/backups` |
| `etcd_data` | etcd data directory | `/data/etcd` |
| `prometheus_data` | prometheus data directory | `/data/prometheus` |
| `loki_data` | loki data directory | `/data/loki` |
| `minio_data` | minio data directory | `/data/minio` |
| `redis_data` | redis data directory | `/data/redis` |
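For example, if your main data disk were mounted at `/mnt/ssd` (a hypothetical path), the overrides in the global vars of your `pigsty.yml` might look like this (showing only a subset of the parameters above):

```yaml
all:
  vars:
    node_data: /mnt/ssd                    # node main data directory
    pg_fs_main: /mnt/ssd                   # postgres main data directory
    pg_fs_bkup: /mnt/ssd/backups           # postgres backup data directory
    etcd_data: /mnt/ssd/etcd               # etcd data directory
    prometheus_data: /mnt/ssd/prometheus   # prometheus data directory
```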
## Network

Pigsty requires a static network: you should explicitly assign a fixed IPv4 address to each node.

> **Don't have a fixed IP?**
> `127.0.0.1` can be used as a workaround in a one-node installation without a fixed IP address.

The IP address is used as the node's unique identifier; it should be the primary IP address bound to the primary network interface used for internal network communication.
> **Never use a public IP as identifier**
> Using public IP addresses as node identifiers can cause security and connectivity issues.

> **L2 VIP requires L2 networking**
> To use the optional Node VIP and PG VIP features, ensure all nodes are located within the same L2 network.

Internet access is required for the standard (online) installation, but Pigsty can also be installed from an offline package, in which case no Internet access is needed.
## VIP

Pigsty supports optional L2 VIPs for NODE clusters (via `keepalived`) and PGSQL clusters (via `vip-manager`).

To use the L2 VIP features, you have to explicitly assign an L2 VIP address to each cluster. This is not a big deal when running on your own hardware, but may become an issue in a public cloud environment.
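As a sketch, such VIPs are assigned with cluster-level parameters along these lines (the addresses, VRID, and interface name are placeholders; double-check the parameter names against the reference for your version):

```yaml
proxy:                        # NODE cluster with an L2 VIP via keepalived
  hosts:
    10.10.10.10: {}
    10.10.10.11: {}
  vars:
    node_cluster: proxy
    vip_enabled: true
    vip_vrid: 128             # VRID, must be unique within the L2 network
    vip_address: 10.10.10.99  # the L2 VIP to bind
    vip_interface: eth0       # network interface holding the VIP

pg-test:                      # PGSQL cluster with an L2 VIP via vip-manager
  hosts:
    10.10.10.11: { pg_seq: 1, pg_role: primary }
    10.10.10.12: { pg_seq: 2, pg_role: replica }
  vars:
    pg_cluster: pg-test
    pg_vip_enabled: true
    pg_vip_address: 10.10.10.3/24  # VIP address with CIDR netmask
    pg_vip_interface: eth0
```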
## Domain

Pigsty uses local static domain names for the following services with a WebUI.
You can assign custom domain names to these services, or use real domain names;
just change them in `infra_portal`.
| Domain | Name | Port | Component | Description |
|--------|------|------|-----------|-------------|
| `h.pigsty` | home | 80/443 | Nginx | Default server, local repo |
| `g.pigsty` | grafana | 3000 | Grafana | Monitoring & visualization |
| `p.pigsty` | prometheus | 9090 | Prometheus | Time series DB |
| `a.pigsty` | alertmanager | 9093 | AlertManager | Alert aggregation & routing |
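For illustration, the corresponding `infra_portal` entries look roughly like the following; to use your own domains, change the `domain` values (the exact entry shape and the `${admin_ip}` placeholder follow Pigsty's default configuration and may vary across versions):

```yaml
infra_portal:                 # domain names & upstream endpoints served by Nginx
  home         : { domain: h.pigsty }
  grafana      : { domain: g.pigsty, endpoint: "${admin_ip}:3000", websocket: true }
  prometheus   : { domain: p.pigsty, endpoint: "${admin_ip}:9090" }
  alertmanager : { domain: a.pigsty, endpoint: "${admin_ip}:9093" }
```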
Domain names are optional; to use them, it is your responsibility to add the following records to your `/etc/hosts` file (local static resolution),
or to add them to your DNS server / public DNS vendor:

    10.10.10.10 h.pigsty g.pigsty p.pigsty a.pigsty