Inventory
The main config file for Pigsty
Each Pigsty deployment has a corresponding config inventory. It can be stored as a local YAML config file, or generated dynamically from a CMDB or any other ansible-compatible source.
Pigsty uses one monolithic YAML config file by default: `pigsty.yml`, located in the pigsty home directory.
The `configure` script will generate a `pigsty.yml` scaffold with good defaults according to your environment and input, but it is OPTIONAL: you can always edit the `pigsty.yml` file directly, as the tutorial shows.
Structure
The inventory consists of two parts: global vars & multiple group definitions. You can define new clusters in `all.children`, and describe the infrastructure with global vars in `all.vars`. It may look like this:
```yaml
all:                      # Top-level object: all
  vars: {...}             # Global Parameters
  children:               # Group Definitions
    infra:                # Group Definition: 'infra'
      hosts: {...}        # Group Membership: 'infra'
      vars: {...}         # Group Parameters: 'infra'
    etcd: {...}           # Group Definition: 'etcd'
    pg-meta: {...}        # Group Definition: 'pg-meta'
    pg-test: {...}        # Group Definition: 'pg-test'
    redis-test: {...}     # Group Definition: 'redis-test'
    # ...
```
There are plenty of examples under `conf/`, which can also be used as templates during `configure`.
Cluster
Each ansible group may represent a cluster, which could be a Node cluster, PostgreSQL cluster, Redis cluster, Etcd cluster, MinIO cluster, etc.
Cluster definition consists of two parts: hosts & vars.
You can define cluster members in `<cls>.hosts`, and describe the cluster with parameters in `<cls>.vars`.
Here's an example of a 3-node HA PG cluster:
```yaml
all:
  children:               # All Groups
    pg-test:              # Group Name
      hosts:              # Group Hosts (Cluster Membership)
        10.10.10.11: { pg_seq: 1, pg_role: primary } # Host1
        10.10.10.12: { pg_seq: 2, pg_role: replica } # Host2
        10.10.10.13: { pg_seq: 3, pg_role: offline } # Host3
      vars:               # Group Vars (Cluster Parameters)
        pg_cluster: pg-test
```
Vars at the cluster level override global vars, and vars at the host level override both cluster vars and global vars.
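To make the lookup order concrete, here is a sketch of the same parameter set at all three inventory levels (the values are illustrative assumptions, not recommendations; mixing major versions inside one cluster is shown only to demonstrate precedence):

```yaml
all:
  vars:
    pg_version: 17            # global var: default for every cluster
  children:
    pg-test:
      vars:
        pg_cluster: pg-test
        pg_version: 16        # group var: overrides the global value for this cluster
      hosts:
        10.10.10.11: { pg_seq: 1, pg_role: primary }
        10.10.10.12: { pg_seq: 2, pg_role: replica, pg_version: 15 }  # host var: overrides group & global for this host only
```

Here `10.10.10.12` would resolve `pg_version` to 15, the rest of `pg-test` to 16, and any other cluster to the global 17.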
Parameter
Parameters are key-value pairs that define all entities in the deployment. The key is a string name, and the value can be one of five types: boolean, string, number, array, or object.
And parameters can be set at different levels with the following precedence:
| Level | Location | Description | Precedence |
|-------|----------|-------------|------------|
| CLI Args | Command Line | via `-e` cli param arg | Highest (5) |
| Host Vars | `<group>.hosts.<host>` | Parameters specific to a single host | High (4) |
| Group Vars | `<group>.vars` | Parameters shared by hosts in a group/cluster | Medium (3) |
| Global Vars | `all.vars` | Parameters shared by all hosts | Low (2) |
| Defaults | `<roles>/default/main.yml` | Role implementation default values | Lowest (1) |
Here are some examples of parameter precedence:

- Force removal of existing databases with the playbook CLI arg `-e pg_clean=true`.
- Override a PG instance's role with the instance-level parameter `pg_role` in host vars.
- Override a PG cluster's name with the cluster-level parameter `pg_cluster` in group vars.
- Specify global NTP servers with the global parameter `node_ntp_servers` in global vars.
- If no `pg_version` is set, Pigsty will use the default value from the role implementation (17 by default).
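The non-CLI examples above map onto the inventory like this; a hedged sketch, where the NTP server entry is an assumed example value, not a Pigsty default you should rely on:

```yaml
all:
  vars:
    node_ntp_servers: [ 'pool pool.ntp.org iburst' ]  # global var: NTP servers for all nodes (example value)
  children:
    pg-test:
      vars:
        pg_cluster: pg-test                           # group var: cluster-level parameter
      hosts:
        10.10.10.13: { pg_seq: 3, pg_role: offline }  # host var: instance-level parameter
```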
Every parameter has a proper default value, except for mandatory IDENTITY PARAMETERS, which are used as identifiers and must be set explicitly, such as `pg_cluster`, `pg_role`, and `pg_seq` in the snippet above.
Available parameters vary according to the modules:
Reference
Pigsty has 280+ parameters; check module parameters for details.
| Module | Section | Description | Count |
|--------|---------|-------------|-------|
| INFRA | META | Pigsty Metadata | 4 |
| INFRA | CA | Self-Signed CA | 3 |
| INFRA | INFRA_ID | Infra Portals & Identity | 2 |
| INFRA | REPO | Local Software Repo | 9 |
| INFRA | INFRA_PACKAGE | Infra Packages | 2 |
| INFRA | NGINX | Nginx Web Server | 7 |
| INFRA | DNS | DNSMASQ Nameserver | 3 |
| INFRA | PROMETHEUS | Prometheus Stack | 18 |
| INFRA | GRAFANA | Grafana Stack | 6 |
| INFRA | LOKI | Loki Logging Service | 4 |
| NODE | NODE_ID | Node Identity Parameters | 5 |
| NODE | NODE_DNS | Node domain names & resolver | 6 |
| NODE | NODE_PACKAGE | Node Repo & Packages | 5 |
| NODE | NODE_TUNE | Node Tuning & Kernel features | 10 |
| NODE | NODE_ADMIN | Admin User & Credentials | 7 |
| NODE | NODE_TIME | Node Timezone, NTP, Crontabs | 5 |
| NODE | NODE_VIP | Node Keepalived L2 VIP | 8 |
| NODE | HAPROXY | HAProxy the load balancer | 10 |
| NODE | NODE_EXPORTER | Node Monitoring Agent | 3 |
| NODE | PROMTAIL | Promtail logging Agent | 4 |
| DOCKER | DOCKER | Docker Daemon | 4 |
| ETCD | ETCD | ETCD DCS Cluster | 10 |
| MINIO | MINIO | MINIO S3 Object Storage | 15 |
| REDIS | REDIS | Redis the key-value NoSQL cache | 20 |
| PGSQL | PG_ID | PG Identity Parameters | 11 |
| PGSQL | PG_BUSINESS | PG Business Object Definition | 12 |
| PGSQL | PG_INSTALL | Install PG Packages & Extensions | 10 |
| PGSQL | PG_BOOTSTRAP | Init HA PG Cluster with Patroni | 39 |
| PGSQL | PG_PROVISION | Create in-database objects | 9 |
| PGSQL | PG_BACKUP | Set Backup Repo with pgBackRest | 5 |
| PGSQL | PG_SERVICE | Exposing service, bind vip, dns | 9 |
| PGSQL | PG_EXPORTER | PG Monitor agent for Prometheus | 15 |