cnp demo: update kubernetes & cnp
Switch to k3d for updated Kubernetes version
Update output to match CNP 1.10
josh-heyer committed Nov 12, 2021
1 parent 136d22e commit 714f26e
Showing 2 changed files with 208 additions and 124 deletions.
advocacy_docs/kubernetes/cloud_native_postgresql/interactive_demo.mdx: 166 changes (104 additions & 62 deletions)
@@ -1,17 +1,18 @@
---
title: "Installation, Configuration and Deployment Demo"
-description: "Walk through the process of installing, configuring and deploying the Cloud Native PostgreSQL Operator via a browser-hosted Minikube console"
+description: "Walk through the process of installing, configuring and deploying the Cloud Native PostgreSQL Operator via a browser-hosted Kubernetes environment"
navTitle: Install, Configure, Deploy
product: 'Cloud Native Operator'
platform: ubuntu
tags:
- postgresql
- cloud-native-postgresql-operator
- kubernetes
-- minikube
+- k3d
- live-demo
katacodaPanel:
-scenario: minikube
+scenario: ubuntu:2004
+initializeCommand: clear; echo -e \\\\033[1mPreparing k3d and kubectl...\\\\n\\\\033[0m; snap install kubectl --classic; wget -q -O - https://raw.githubusercontent.com/rancher/k3d/main/install.sh | bash; clear; echo -e \\\\033[2mk3d is ready\\ - enjoy Kubernetes\\!\\\\033[0m;
codelanguages: shell, yaml
showInteractiveBadge: true
---
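The `initializeCommand` in the front matter above is what provisions the hosted sandbox. To follow along on your own machine instead, the same two tools can be installed by hand; a sketch, assuming an Ubuntu host with Docker and snap available:

```shell
# Install kubectl via snap and k3d via its installer script,
# mirroring the scenario's initializeCommand
sudo snap install kubectl --classic
wget -q -O - https://raw.githubusercontent.com/rancher/k3d/main/install.sh | bash
```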
@@ -30,22 +31,34 @@
It will take roughly 5-10 minutes to work through.

<KatacodaPanel />

-[Minikube](https://kubernetes.io/docs/setup/learning-environment/minikube/) is already installed in this environment; we just need to start the cluster:
+Once [k3d](https://k3d.io/) is ready, we need to start a cluster:

```shell
-minikube start
+k3d cluster create
__OUTPUT__
-* minikube v1.8.1 on Ubuntu 18.04
-* Using the none driver based on user configuration
-* Running on localhost (CPUs=2, Memory=2460MB, Disk=145651MB) ...
-* OS release is Ubuntu 18.04.4 LTS
-* Preparing Kubernetes v1.17.3 on Docker 19.03.6 ...
-- kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
-* Launching Kubernetes ...
-* Enabling addons: default-storageclass, storage-provisioner
-* Configuring local host environment ...
-* Waiting for cluster to come online ...
-* Done! kubectl is now configured to use "minikube"
+INFO[0000] Prep: Network
+INFO[0000] Created network 'k3d-k3s-default'
+INFO[0000] Created volume 'k3d-k3s-default-images'
+INFO[0000] Starting new tools node...
+INFO[0000] Pulling image 'docker.io/rancher/k3d-tools:5.1.0'
+INFO[0001] Creating node 'k3d-k3s-default-server-0'
+INFO[0001] Pulling image 'docker.io/rancher/k3s:v1.21.5-k3s2'
+INFO[0002] Starting Node 'k3d-k3s-default-tools'
+INFO[0006] Creating LoadBalancer 'k3d-k3s-default-serverlb'
+INFO[0007] Pulling image 'docker.io/rancher/k3d-proxy:5.1.0'
+INFO[0011] Using the k3d-tools node to gather environment information
+INFO[0011] HostIP: using network gateway...
+INFO[0011] Starting cluster 'k3s-default'
+INFO[0011] Starting servers...
+INFO[0011] Starting Node 'k3d-k3s-default-server-0'
+INFO[0018] Starting agents...
+INFO[0018] Starting helpers...
+INFO[0018] Starting Node 'k3d-k3s-default-serverlb'
+INFO[0024] Injecting '172.19.0.1 host.k3d.internal' into /etc/hosts of all nodes...
+INFO[0024] Injecting records for host.k3d.internal and for 2 network members into CoreDNS configmap...
+INFO[0025] Cluster 'k3s-default' created successfully!
+INFO[0025] You can now use it like this:
+kubectl cluster-info
```

This will create the Kubernetes cluster, and you will be ready to use it.
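As the closing lines of the k3d output suggest, you can first confirm that the API server is reachable:

```shell
kubectl cluster-info
```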
@@ -54,29 +67,30 @@
Verify that it works with the following command:
```shell
kubectl get nodes
__OUTPUT__
-NAME STATUS ROLES AGE VERSION
-minikube Ready master 66s v1.17.3
+NAME STATUS ROLES AGE VERSION
+k3d-k3s-default-server-0 Ready control-plane,master 16s v1.21.5+k3s2
```

-You will see one node called `minikube`. If the status isn't yet "Ready", wait for a few seconds and run the command above again.
+You will see one node called `k3d-k3s-default-server-0`. If the status isn't yet "Ready", wait for a few seconds and run the command above again.
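If you'd rather not poll by hand, a generic kubectl idiom (not part of the original demo) blocks until every node reports Ready:

```shell
# Wait up to two minutes for all nodes to become Ready
kubectl wait --for=condition=Ready node --all --timeout=120s
```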

## Install Cloud Native PostgreSQL

-Now that the Minikube cluster is running, you can proceed with Cloud Native PostgreSQL installation as described in the ["Installation"](installation_upgrade.md) section:
+Now that the Kubernetes cluster is running, you can proceed with Cloud Native PostgreSQL installation as described in the ["Installation and upgrades"](installation_upgrade.md) section:

```shell
-kubectl apply -f https://get.enterprisedb.io/cnp/postgresql-operator-1.9.1.yaml
+kubectl apply -f https://get.enterprisedb.io/cnp/postgresql-operator-1.10.0.yaml
__OUTPUT__
namespace/postgresql-operator-system created
customresourcedefinition.apiextensions.k8s.io/backups.postgresql.k8s.enterprisedb.io created
customresourcedefinition.apiextensions.k8s.io/clusters.postgresql.k8s.enterprisedb.io created
customresourcedefinition.apiextensions.k8s.io/poolers.postgresql.k8s.enterprisedb.io created
customresourcedefinition.apiextensions.k8s.io/scheduledbackups.postgresql.k8s.enterprisedb.io created
-mutatingwebhookconfiguration.admissionregistration.k8s.io/postgresql-operator-mutating-webhook-configuration created
serviceaccount/postgresql-operator-manager created
clusterrole.rbac.authorization.k8s.io/postgresql-operator-manager created
clusterrolebinding.rbac.authorization.k8s.io/postgresql-operator-manager-rolebinding created
service/postgresql-operator-webhook-service created
deployment.apps/postgresql-operator-controller-manager created
+mutatingwebhookconfiguration.admissionregistration.k8s.io/postgresql-operator-mutating-webhook-configuration created
+validatingwebhookconfiguration.admissionregistration.k8s.io/postgresql-operator-validating-webhook-configuration created
```
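Before creating a cluster, it's worth waiting for the operator itself to come up. One way to do that, using the namespace and deployment names from the output above:

```shell
# Block until the operator's controller deployment finishes rolling out
kubectl rollout status deployment \
  -n postgresql-operator-system \
  postgresql-operator-controller-manager
```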

@@ -166,24 +180,32 @@

```shell
kubectl get cluster -o yaml cluster-example
__OUTPUT__
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"postgresql.k8s.enterprisedb.io/v1","kind":"Cluster","metadata":{"annotations":{},"name":"cluster-example","namespace":"default"},"spec":{"instances":3,"primaryUpdateStrategy":"unsupervised","storage":{"size":"1Gi"}}}
-creationTimestamp: "2021-09-30T19:52:07Z"
+creationTimestamp: "2021-11-12T05:56:37Z"
generation: 1
name: cluster-example
namespace: default
-resourceVersion: "2292"
-selfLink: /apis/postgresql.k8s.enterprisedb.io/v1/namespaces/default/clusters/cluster-example
-uid: af696791-b82a-45a9-a1c2-6e4577128d0e
+resourceVersion: "2005"
+uid: 621d46bc-8a3b-4039-a9f3-6f21ab4ef68d
spec:
+affinity:
+podAntiAffinityType: preferred
+topologyKey: ""
bootstrap:
initdb:
database: app
encoding: UTF8
localeCType: C
localeCollate: C
owner: app
-imageName: quay.io/enterprisedb/postgresql:14.0
+enableSuperuserAccess: true
+imageName: quay.io/enterprisedb/postgresql:14.1
+imagePullPolicy: IfNotPresent
instances: 3
logLevel: info
+maxSyncReplicas: 0
+minSyncReplicas: 0
+postgresGID: 26
+postgresUID: 26
postgresql:
parameters:
log_destination: csvlog
@@ -200,15 +222,18 @@
wal_keep_size: 512MB
primaryUpdateStrategy: unsupervised
resources: {}
+startDelay: 30
+stopDelay: 30
storage:
+resizeInUseVolumes: true
size: 1Gi
status:
certificates:
clientCASecret: cluster-example-ca
expirations:
-cluster-example-ca: 2021-12-29 19:47:07 +0000 UTC
-cluster-example-replication: 2021-12-29 19:47:07 +0000 UTC
-cluster-example-server: 2021-12-29 19:47:07 +0000 UTC
+cluster-example-ca: 2022-02-10 05:51:37 +0000 UTC
+cluster-example-replication: 2022-02-10 05:51:37 +0000 UTC
+cluster-example-server: 2022-02-10 05:51:37 +0000 UTC
replicationTLSSecret: cluster-example-replication
serverAltDNSNames:
- cluster-example-rw
@@ -222,9 +247,11 @@
- cluster-example-ro.default.svc
serverCASecret: cluster-example-ca
serverTLSSecret: cluster-example-server
-cloudNativePostgresqlCommitHash: c88bd8a
+cloudNativePostgresqlCommitHash: f616a0d
+cloudNativePostgresqlOperatorHash: 02abbad9215f5118906c0c91d61bfbdb33278939861d2e8ea21978ce48f37421
configMapResourceVersion: {}
currentPrimary: cluster-example-1
+currentPrimaryTimestamp: "2021-11-12T05:57:15Z"
healthyPVC:
- cluster-example-1
- cluster-example-2
@@ -239,22 +266,25 @@
licenseStatus:
isImplicit: true
isTrial: true
-licenseExpiration: "2021-10-30T19:52:07Z"
+licenseExpiration: "2021-12-12T05:56:37Z"
licenseStatus: Implicit trial license
repositoryAccess: false
valid: true
phase: Cluster in healthy state
+poolerIntegrations:
+pgBouncerIntegration: {}
pvcCount: 3
readService: cluster-example-r
readyInstances: 3
secretsResourceVersion:
-applicationSecretVersion: "884"
-clientCaSecretVersion: "880"
-replicationSecretVersion: "882"
-serverCaSecretVersion: "880"
-serverSecretVersion: "881"
-superuserSecretVersion: "883"
+applicationSecretVersion: "934"
+clientCaSecretVersion: "930"
+replicationSecretVersion: "932"
+serverCaSecretVersion: "930"
+serverSecretVersion: "931"
+superuserSecretVersion: "933"
targetPrimary: cluster-example-1
+targetPrimaryTimestamp: "2021-11-12T05:56:38Z"
writeService: cluster-example-rw
```
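Rather than scanning the whole dump, you can pull out individual status fields with a jsonpath query. For example, to see the current primary (this assumes the `cluster` short name resolves to the Cloud Native PostgreSQL custom resource, as it does in this environment):

```shell
kubectl get cluster cluster-example \
  -o jsonpath='{.status.currentPrimary}'
```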

@@ -273,15 +303,15 @@

## Install the kubectl-cnp plugin

-Cloud Native PostgreSQL provides a plugin for kubectl to manage a cluster in Kubernetes, along with a script to install it:
+Cloud Native PostgreSQL provides [a plugin for kubectl](cnp-plugin) to manage a cluster in Kubernetes, along with a script to install it:

```shell
curl -sSfL \
https://github.com/EnterpriseDB/kubectl-cnp/raw/main/install.sh | \
sudo sh -s -- -b /usr/local/bin
__OUTPUT__
EnterpriseDB/kubectl-cnp info checking GitHub for latest tag
-EnterpriseDB/kubectl-cnp info found version: 1.9.1 for v1.9.1/linux/x86_64
+EnterpriseDB/kubectl-cnp info found version: 1.10.0 for v1.10.0/linux/x86_64
EnterpriseDB/kubectl-cnp info installed /usr/local/bin/kubectl-cnp
```
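You can verify that kubectl has discovered the new binary before using it:

```shell
# Lists all kubectl-* executables found on PATH
kubectl plugin list
```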

@@ -293,17 +323,22 @@
With the plugin installed, we can check on the cluster:

```shell
kubectl cnp status cluster-example
__OUTPUT__
Cluster in healthy state
Name: cluster-example
Namespace: default
-PostgreSQL Image: quay.io/enterprisedb/postgresql:14.0
+PostgreSQL Image: quay.io/enterprisedb/postgresql:14.1
Primary instance: cluster-example-1
Instances: 3
Ready instances: 3
+Current Timeline: 1
+Current WAL file: 000000010000000000000005

+Continuous Backup status
+Not configured
+
Instances status
-Pod name Current LSN Received LSN Replay LSN System ID Primary Replicating Replay paused Pending restart Status
--------- ----------- ------------ ---------- --------- ------- ----------- ------------- --------------- ------
-cluster-example-1 0/5000060 7013817246676054032 ✓ ✗ ✗ ✗ OK
-cluster-example-2 0/5000060 0/5000060 7013817246676054032 ✗ ✓ ✗ ✗ OK
-cluster-example-3 0/5000060 0/5000060 7013817246676054032 ✗ ✓ ✗ ✗ OK
+Manager Version Pod name Current LSN Received LSN Replay LSN System ID Primary Replicating Replay paused Pending restart Status
+--------------- -------- ----------- ------------ ---------- --------- ------- ----------- ------------- --------------- ------
+1.10.0 cluster-example-1 0/5000060 7029558504442904594 ✓ ✗ ✗ ✗ OK
+1.10.0 cluster-example-2 0/5000060 0/5000060 7029558504442904594 ✗ ✓ ✗ ✗ OK
+1.10.0 cluster-example-3 0/5000060 0/5000060 7029558504442904594 ✗ ✓ ✗ ✗ OK
```
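At this point you could also connect to the database itself. As a sketch, assuming the operator's conventional `cluster-example-app` secret for the application user (the secret name is an assumption; it isn't shown in this demo):

```shell
# Retrieve the generated password for the `app` database owner
kubectl get secret cluster-example-app \
  -o jsonpath='{.data.password}' | base64 -d
```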

!!! Note "There's more"
@@ -326,20 +361,24 @@
Now if we check the status...

```shell
kubectl cnp status cluster-example
__OUTPUT__
-Failing over Failing over to cluster-example-2
+Switchover in progress
Name: cluster-example
Namespace: default
-PostgreSQL Image: quay.io/enterprisedb/postgresql:14.0
-Primary instance: cluster-example-1 (switching to cluster-example-2)
+PostgreSQL Image: quay.io/enterprisedb/postgresql:14.1
+Primary instance: cluster-example-2
Instances: 3
Ready instances: 2
+Current Timeline: 2
+Current WAL file: 000000020000000000000006

+Continuous Backup status
+Not configured

Instances status
-Pod name Current LSN Received LSN Replay LSN System ID Primary Replicating Replay paused Pending restart Status
--------- ----------- ------------ ---------- --------- ------- ----------- ------------- --------------- ------
-cluster-example-1 - - - - - - - - pod not available
-cluster-example-2 0/60010F0 7013817246676054032 ✓ OK
-cluster-example-3 0/60000A0 0/60000A0 7013817246676054032 ✗ ✗ ✗ ✗ OK
+Manager Version Pod name Current LSN Received LSN Replay LSN System ID Primary Replicating Replay paused Pending restart Status
+--------------- -------- ----------- ------------ ---------- --------- ------- ----------- ------------- --------------- ------
+1.10.0 cluster-example-3 0/60000A0 0/60000A0 7029558504442904594 ✗ OK
+- cluster-example-1 - - - - - - - - pod not available
+1.10.0 cluster-example-2 0/6000F58 7029558504442904594 ✓ ✗ ✗ ✗ OK
```
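While you wait for the failed pod to come back, you can watch the recovery happen live with plain kubectl (press Ctrl-C to stop watching):

```shell
kubectl get pods -w
```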

...the failover process has begun, with the second pod promoted to primary. Once the failed pod has restarted, it will become a replica of the new primary:
@@ -350,26 +389,29 @@

```shell
kubectl cnp status cluster-example
Cluster in healthy state
Name: cluster-example
Namespace: default
PostgreSQL Image: quay.io/enterprisedb/postgresql:14.0
PostgreSQL Image: quay.io/enterprisedb/postgresql:14.1
Primary instance: cluster-example-2
Instances: 3
Ready instances: 3
Current Timeline: 2
Current WAL file: 000000020000000000000006

Continuous Backup status
Not configured

Instances status
Pod name Current LSN Received LSN Replay LSN System ID Primary Replicating Replay paused Pending restart Status
-------- ----------- ------------ ---------- --------- ------- ----------- ------------- --------------- ------
cluster-example-1 0/6004E70 0/6004E70 7013817246676054032 ✗ ✗ OK
cluster-example-2 0/6004E70 7013817246676054032 ✓ ✗ ✗ ✗ OK
cluster-example-3 0/6004E70 0/6004E70 7013817246676054032 ✗ ✓ ✗ ✗ OK
Manager Version Pod name Current LSN Received LSN Replay LSN System ID Primary Replicating Replay paused Pending restart Status
--------------- -------- ----------- ------------ ---------- --------- ------- ----------- ------------- --------------- ------
1.10.0 cluster-example-3 0/60000A0 0/60000A0 7029558504442904594 ✗ ✗ OK
1.10.0 cluster-example-2 0/6004CA0 7029558504442904594 ✓ ✗ ✗ ✗ OK
1.10.0 cluster-example-1 0/6004CA0 0/6004CA0 7029558504442904594 ✗ ✓ ✗ ✗ OK
```


### Further reading

This is all it takes to get a PostgreSQL cluster up and running, but of course there's a lot more possible, and much that is prudent to consider before you ever deploy in a production environment!

- Deploying on public cloud platforms: see the [Cloud Setup](cloud_setup/) section.

- Design goals and possibilities offered by the Cloud Native PostgreSQL Operator: check out the [Architecture](architecture/) and [Use cases](use_cases/) sections.

- Configuring a secure and reliable system: read through the [Security](security/), [Failure Modes](failure_modes/) and [Backup and Recovery](backup_recovery/) sections.