Merge pull request #2249 from EnterpriseDB/release/2022-01-24
Release: 2022-01-24
drothery-edb authored Jan 24, 2022
2 parents 7dcebf2 + 520a31c commit a270c41
Showing 3 changed files with 209 additions and 151 deletions.
advocacy_docs/kubernetes/cloud_native_postgresql/interactive_demo.mdx (177 changes: 103 additions & 74 deletions)
@@ -40,24 +40,24 @@ INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-k3s-default'
INFO[0000] Created volume 'k3d-k3s-default-images'
INFO[0000] Starting new tools node...
INFO[0000] Pulling image 'docker.io/rancher/k3d-tools:5.1.0'
INFO[0000] Pulling image 'docker.io/rancher/k3d-tools:5.2.2'
INFO[0001] Creating node 'k3d-k3s-default-server-0'
INFO[0001] Pulling image 'docker.io/rancher/k3s:v1.21.5-k3s2'
INFO[0002] Pulling image 'docker.io/rancher/k3s:v1.21.7-k3s1'
INFO[0002] Starting Node 'k3d-k3s-default-tools'
INFO[0006] Creating LoadBalancer 'k3d-k3s-default-serverlb'
INFO[0007] Pulling image 'docker.io/rancher/k3d-proxy:5.1.0'
INFO[0011] Using the k3d-tools node to gather environment information
INFO[0011] HostIP: using network gateway...
INFO[0011] Starting cluster 'k3s-default'
INFO[0011] Starting servers...
INFO[0011] Starting Node 'k3d-k3s-default-server-0'
INFO[0018] Starting agents...
INFO[0018] Starting helpers...
INFO[0018] Starting Node 'k3d-k3s-default-serverlb'
INFO[0024] Injecting '172.19.0.1 host.k3d.internal' into /etc/hosts of all nodes...
INFO[0024] Injecting records for host.k3d.internal and for 2 network members into CoreDNS configmap...
INFO[0025] Cluster 'k3s-default' created successfully!
INFO[0025] You can now use it like this:
INFO[0010] Creating LoadBalancer 'k3d-k3s-default-serverlb'
INFO[0011] Pulling image 'docker.io/rancher/k3d-proxy:5.2.2'
INFO[0015] Using the k3d-tools node to gather environment information
INFO[0015] HostIP: using network gateway 172.19.0.1 address
INFO[0015] Starting cluster 'k3s-default'
INFO[0015] Starting servers...
INFO[0015] Starting Node 'k3d-k3s-default-server-0'
INFO[0023] All agents already running.
INFO[0023] Starting helpers...
INFO[0024] Starting Node 'k3d-k3s-default-serverlb'
INFO[0031] Injecting '172.19.0.1 host.k3d.internal' into /etc/hosts of all nodes...
INFO[0031] Injecting records for host.k3d.internal and for 2 network members into CoreDNS configmap...
INFO[0032] Cluster 'k3s-default' created successfully!
INFO[0032] You can now use it like this:
kubectl cluster-info
```
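(This hunk shows only the output; the command that produces it sits above the excerpt. In k3d's standard CLI that would be along the lines of the following sketch; the exact invocation used by the tutorial is an assumption here.)

```shell
# Create a local single-server k3s cluster inside Docker;
# with no name argument, k3d calls it 'k3s-default'
k3d cluster create
```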

@@ -68,7 +68,7 @@ Verify that it works with the following command:
kubectl get nodes
__OUTPUT__
NAME STATUS ROLES AGE VERSION
k3d-k3s-default-server-0 Ready control-plane,master 16s v1.21.5+k3s2
k3d-k3s-default-server-0 Ready control-plane,master 44s v1.21.7+k3s1
```

You will see one node called `k3d-k3s-default-server-0`. If the status isn't yet "Ready", wait for a few seconds and run the command above again.
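Rather than re-running the check by hand, kubectl can also block until the node is ready. This variant is a convenience addition, not part of the original walkthrough:

```shell
# Block until the k3d server node reports the Ready condition,
# giving up after two minutes
kubectl wait --for=condition=Ready node/k3d-k3s-default-server-0 --timeout=120s
```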
@@ -78,7 +78,7 @@ You will see one node called `k3d-k3s-default-server-0`. If the status isn't yet
Now that the Kubernetes cluster is running, you can proceed with Cloud Native PostgreSQL installation as described in the ["Installation and upgrades"](installation_upgrade.md) section:

```shell
kubectl apply -f https://get.enterprisedb.io/cnp/postgresql-operator-1.10.0.yaml
kubectl apply -f https://get.enterprisedb.io/cnp/postgresql-operator-1.12.0.yaml
__OUTPUT__
namespace/postgresql-operator-system created
customresourcedefinition.apiextensions.k8s.io/backups.postgresql.k8s.enterprisedb.io created
@@ -88,6 +88,7 @@ customresourcedefinition.apiextensions.k8s.io/scheduledbackups.postgresql.k8s.en
serviceaccount/postgresql-operator-manager created
clusterrole.rbac.authorization.k8s.io/postgresql-operator-manager created
clusterrolebinding.rbac.authorization.k8s.io/postgresql-operator-manager-rolebinding created
configmap/postgresql-operator-default-monitoring created
service/postgresql-operator-webhook-service created
deployment.apps/postgresql-operator-controller-manager created
mutatingwebhookconfiguration.admissionregistration.k8s.io/postgresql-operator-mutating-webhook-configuration created
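Before creating a cluster, it's worth confirming that the operator deployment created above has finished rolling out. A quick check (an extra step, not shown in the original demo):

```shell
# Wait until the operator's controller-manager deployment is fully available
kubectl rollout status deployment \
  -n postgresql-operator-system \
  postgresql-operator-controller-manager
```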
@@ -153,8 +154,8 @@ immediately after applying the cluster configuration you'll see the status as `I
```shell
kubectl get pods
__OUTPUT__
NAME READY STATUS RESTARTS AGE
cluster-example-1-initdb-kq2vw 0/1 PodInitializing 0 18s
NAME READY STATUS RESTARTS AGE
cluster-example-1-initdb-ftcsq 0/1 Pending 0 5s
```

...give it a minute, and then check on it again:
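(The follow-up check is elided from this diff; presumably it repeats `kubectl get pods`.) To watch the pods move through their phases live instead, one option is:

```shell
# Stream pod status changes until interrupted with Ctrl-C
kubectl get pods --watch
```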
@@ -180,12 +181,12 @@ metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"postgresql.k8s.enterprisedb.io/v1","kind":"Cluster","metadata":{"annotations":{},"name":"cluster-example","namespace":"default"},"spec":{"instances":3,"primaryUpdateStrategy":"unsupervised","storage":{"size":"1Gi"}}}
creationTimestamp: "2021-11-12T05:56:37Z"
creationTimestamp: "2022-01-21T21:04:10Z"
generation: 1
name: cluster-example
namespace: default
resourceVersion: "2005"
uid: 621d46bc-8a3b-4039-a9f3-6f21ab4ef68d
resourceVersion: "2007"
uid: 345a0c57-d1bd-44ca-8a7c-0f5c7a85145f
spec:
affinity:
podAntiAffinityType: preferred
@@ -204,10 +205,18 @@ spec:
logLevel: info
maxSyncReplicas: 0
minSyncReplicas: 0
monitoring:
customQueriesConfigMap:
- key: queries
name: postgresql-operator-default-monitoring
disableDefaultQueries: false
enablePodMonitor: false
postgresGID: 26
postgresUID: 26
postgresql:
parameters:
archive_mode: "on"
dynamic_shared_memory_type: posix
log_destination: csvlog
log_directory: /controller/log
log_filename: postgres
@@ -218,22 +227,26 @@ spec:
max_parallel_workers: "32"
max_replication_slots: "32"
max_worker_processes: "32"
shared_memory_type: mmap
shared_preload_libraries: ""
wal_keep_size: 512MB
wal_receiver_timeout: 5s
wal_sender_timeout: 5s
primaryUpdateStrategy: unsupervised
resources: {}
startDelay: 30
stopDelay: 30
storage:
resizeInUseVolumes: true
size: 1Gi
switchoverDelay: 40000000
status:
certificates:
clientCASecret: cluster-example-ca
expirations:
cluster-example-ca: 2022-02-10 05:51:37 +0000 UTC
cluster-example-replication: 2022-02-10 05:51:37 +0000 UTC
cluster-example-server: 2022-02-10 05:51:37 +0000 UTC
cluster-example-ca: 2022-04-21 20:59:10 +0000 UTC
cluster-example-replication: 2022-04-21 20:59:10 +0000 UTC
cluster-example-server: 2022-04-21 20:59:10 +0000 UTC
replicationTLSSecret: cluster-example-replication
serverAltDNSNames:
- cluster-example-rw
@@ -247,11 +260,13 @@ status:
- cluster-example-ro.default.svc
serverCASecret: cluster-example-ca
serverTLSSecret: cluster-example-server
cloudNativePostgresqlCommitHash: f616a0d
cloudNativePostgresqlOperatorHash: 02abbad9215f5118906c0c91d61bfbdb33278939861d2e8ea21978ce48f37421
configMapResourceVersion: {}
cloudNativePostgresqlCommitHash: 332a1581
cloudNativePostgresqlOperatorHash: 673702e097eaf6b473c582ef5271ca076c4e1460eaaea9c9cdc3ec62e6093b3d
configMapResourceVersion:
metrics:
postgresql-operator-default-monitoring: "797"
currentPrimary: cluster-example-1
currentPrimaryTimestamp: "2021-11-12T05:57:15Z"
currentPrimaryTimestamp: "2022-01-21T21:04:55Z"
healthyPVC:
- cluster-example-1
- cluster-example-2
@@ -266,7 +281,7 @@ status:
licenseStatus:
isImplicit: true
isTrial: true
licenseExpiration: "2021-12-12T05:56:37Z"
licenseExpiration: "2022-02-20T21:04:10Z"
licenseStatus: Implicit trial license
repositoryAccess: false
valid: true
@@ -277,14 +292,14 @@ status:
readService: cluster-example-r
readyInstances: 3
secretsResourceVersion:
applicationSecretVersion: "934"
clientCaSecretVersion: "930"
replicationSecretVersion: "932"
serverCaSecretVersion: "930"
serverSecretVersion: "931"
superuserSecretVersion: "933"
applicationSecretVersion: "762"
clientCaSecretVersion: "756"
replicationSecretVersion: "760"
serverCaSecretVersion: "756"
serverSecretVersion: "758"
superuserSecretVersion: "761"
targetPrimary: cluster-example-1
targetPrimaryTimestamp: "2021-11-12T05:56:38Z"
targetPrimaryTimestamp: "2022-01-21T21:04:11Z"
writeService: cluster-example-rw
```
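The `kubectl.kubernetes.io/last-applied-configuration` annotation near the top of that output records the manifest that was applied. Unpacked from its JSON form, the equivalent apply step looks like this (a sketch reconstructed from the annotation; the tutorial's actual file name lies outside this hunk):

```shell
# Re-create the Cluster manifest recorded in the annotation and apply it
kubectl apply -f - <<EOF
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example
  namespace: default
spec:
  instances: 3
  primaryUpdateStrategy: unsupervised
  storage:
    size: 1Gi
EOF
```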

@@ -311,7 +326,7 @@ curl -sSfL \
sudo sh -s -- -b /usr/local/bin
__OUTPUT__
EnterpriseDB/kubectl-cnp info checking GitHub for latest tag
EnterpriseDB/kubectl-cnp info found version: 1.10.0 for v1.10.0/linux/x86_64
EnterpriseDB/kubectl-cnp info found version: 1.12.0 for v1.12.0/linux/x86_64
EnterpriseDB/kubectl-cnp info installed /usr/local/bin/kubectl-cnp
```

@@ -321,24 +336,30 @@ The `cnp` command is now available in kubectl:
kubectl cnp status cluster-example
__OUTPUT__
Cluster in healthy state
Name: cluster-example
Namespace: default
PostgreSQL Image: quay.io/enterprisedb/postgresql:14.1
Primary instance: cluster-example-1
Instances: 3
Ready instances: 3
Current Timeline: 1
Current WAL file: 000000010000000000000005
Name: cluster-example
Namespace: default
System ID: 7055768364208304145
PostgreSQL Image: quay.io/enterprisedb/postgresql:14.1
Primary instance: cluster-example-1
Instances: 3
Ready instances: 3
Current Write LSN: 0/5000060 (Timeline: 1 - WAL File: 000000010000000000000005)

Continuous Backup status
Not configured

Streaming Replication status
Name Sent LSN Write LSN Flush LSN Replay LSN Write Lag Flush Lag Replay Lag State Sync State Sync Priority
---- -------- --------- --------- ---------- --------- --------- ---------- ----- ---------- -------------
cluster-example-2 0/5000060 0/5000060 0/5000060 0/5000060 00:00:00 00:00:00 00:00:00 streaming async 0
cluster-example-3 0/5000060 0/5000060 0/5000060 0/5000060 00:00:00 00:00:00 00:00:00 streaming async 0

Instances status
Manager Version Pod name Current LSN Received LSN Replay LSN System ID Primary Replicating Replay paused Pending restart Status
--------------- -------- ----------- ------------ ---------- --------- ------- ----------- ------------- --------------- ------
1.10.0 cluster-example-1 0/5000060 7029558504442904594 ✓ ✗ ✗ ✗ OK
1.10.0 cluster-example-2 0/5000060 0/5000060 7029558504442904594 ✗ ✓ ✗ ✗ OK
1.10.0 cluster-example-3 0/5000060 0/5000060 7029558504442904594 ✗ ✓ ✗ ✗ OK
Name Database Size Current LSN Replication role Status QoS Manager Version
---- ------------- ----------- ---------------- ------ --- ---------------
cluster-example-1 33 MB 0/5000060 Primary OK BestEffort 1.12.0
cluster-example-2 33 MB 0/5000060 Standby (async) OK BestEffort 1.12.0
cluster-example-3 33 MB 0/5000060 Standby (async) OK BestEffort 1.12.0
```

!!! Note "There's more"
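Between this hunk and the next, the tutorial simulates a failure of the primary. The triggering step is elided from this diff; a plausible way to do it (an assumption, not taken from the source) is to force-delete the primary's pod:

```shell
# Simulate a primary failure: delete its pod without waiting
# for graceful termination
kubectl delete pod cluster-example-1 --wait=false
```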
@@ -360,25 +381,27 @@ Now if we check the status...
```shell
kubectl cnp status cluster-example
__OUTPUT__
Failing over Failing over to cluster-example-2
Failing over Failing over from cluster-example-1 to cluster-example-2
Switchover in progress
Name: cluster-example
Namespace: default
PostgreSQL Image: quay.io/enterprisedb/postgresql:14.1
Primary instance: cluster-example-2
Primary instance: cluster-example-1 (switching to cluster-example-2)
Instances: 3
Ready instances: 2
Current Timeline: 2
Current WAL file: 000000020000000000000006

Continuous Backup status
Not configured

Streaming Replication status
Primary instance not found

Instances status
Manager Version Pod name Current LSN Received LSN Replay LSN System ID Primary Replicating Replay paused Pending restart Status
--------------- -------- ----------- ------------ ---------- --------- ------- ----------- ------------- --------------- ------
1.10.0 cluster-example-3 0/60000A0 0/60000A0 7029558504442904594 ✗ ✗ ✗ OK
45 cluster-example-1 - - - - - - - - pod not available
1.10.0 cluster-example-2 0/6000F58 7029558504442904594 ✓ ✗ ✗ OK
Name Database Size Current LSN Replication role Status QoS Manager Version
---- ------------- ----------- ---------------- ------ --- ---------------
cluster-example-2 33 MB 0/60000A0 Standby (file based) OK BestEffort 1.12.0
cluster-example-3 33 MB 0/60000A0 Standby (file based) OK BestEffort 1.12.0
cluster-example-1 - - - pod not available BestEffort -
```

...the failover process has begun, with the second pod promoted to primary. Once the failed pod has restarted, it will become a replica of the new primary:
@@ -387,24 +410,30 @@
kubectl cnp status cluster-example
__OUTPUT__
Cluster in healthy state
Name: cluster-example
Namespace: default
PostgreSQL Image: quay.io/enterprisedb/postgresql:14.1
Primary instance: cluster-example-2
Instances: 3
Ready instances: 3
Current Timeline: 2
Current WAL file: 000000020000000000000006
Name: cluster-example
Namespace: default
System ID: 7055768364208304145
PostgreSQL Image: quay.io/enterprisedb/postgresql:14.1
Primary instance: cluster-example-2
Instances: 3
Ready instances: 3
Current Write LSN: 0/6004CD8 (Timeline: 2 - WAL File: 000000020000000000000006)

Continuous Backup status
Not configured

Streaming Replication status
Name Sent LSN Write LSN Flush LSN Replay LSN Write Lag Flush Lag Replay Lag State Sync State Sync Priority
---- -------- --------- --------- ---------- --------- --------- ---------- ----- ---------- -------------
cluster-example-1 0/6004CD8 0/6004CD8 0/6004CD8 0/6004CD8 00:00:00 00:00:00 00:00:00 streaming async 0
cluster-example-3 0/6004CD8 0/6004CD8 0/6004CD8 0/6004CD8 00:00:00 00:00:00 00:00:00 streaming async 0

Instances status
Manager Version Pod name Current LSN Received LSN Replay LSN System ID Primary Replicating Replay paused Pending restart Status
--------------- -------- ----------- ------------ ---------- --------- ------- ----------- ------------- --------------- ------
1.10.0 cluster-example-3 0/60000A0 0/60000A0 7029558504442904594 ✗ ✗ ✗ ✗ OK
1.10.0 cluster-example-2 0/6004CA0 7029558504442904594 ✓ ✗ ✗ ✗ OK
1.10.0 cluster-example-1 0/6004CA0 0/6004CA0 7029558504442904594 ✗ ✓ ✗ ✗ OK
Name Database Size Current LSN Replication role Status QoS Manager Version
---- ------------- ----------- ---------------- ------ --- ---------------
cluster-example-3 33 MB 0/6004CD8 Standby (async) OK BestEffort 1.12.0
cluster-example-2 33 MB 0/6004CD8 Primary OK BestEffort 1.12.0
cluster-example-1 33 MB 0/6004CD8 Standby (async) OK BestEffort 1.12.0
```
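With `cluster-example-2` now acting as primary, the Cluster resource itself can confirm the change. This extra check is not part of the original demo, and it assumes the `status.currentPrimary` field shown earlier in this diff is queryable via the `cluster` resource name:

```shell
# Ask the Cluster resource which instance it currently records as primary
kubectl get cluster cluster-example -o jsonpath='{.status.currentPrimary}'
```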

