Merge branch 'release/2021-04-28' into main
Former-commit-id: 1be9c9f
josh-heyer committed Apr 28, 2021
2 parents fc0cfb0 + 5c24cbf commit 7d04710
Showing 88 changed files with 3,860 additions and 449 deletions.
1 change: 1 addition & 0 deletions .gitignore
@@ -79,3 +79,4 @@ product_docs/content/
product_docs/content_build/
static/nginx_redirects.generated
temp_kubernetes/
advocacy_docs/kubernetes/cloud_native_postgresql/*.md.in
531 changes: 288 additions & 243 deletions advocacy_docs/kubernetes/cloud_native_postgresql/api_reference.mdx

Large diffs are not rendered by default.

@@ -112,7 +112,7 @@ kubectl create secret generic minio-creds \
--from-literal=MINIO_SECRET_KEY=<minio secret key here>
```

-!!! NOTE "Note"
+!!! Note
Cloud Object Storage credentials will be used only by MinIO Gateway in this case.

!!! Important
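For reference, the command this hunk modifies stores both MinIO literals in a single secret; pieced together from the context above, it looks roughly like this (the access-key line is cut off in this view, so treat it as an assumption):

```shell
kubectl create secret generic minio-creds \
  --from-literal=MINIO_ACCESS_KEY=<minio access key here> \
  --from-literal=MINIO_SECRET_KEY=<minio secret key here>
```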
@@ -128,7 +128,7 @@ To get a certificate, you need to provide a name for the secret to store
the credentials, the cluster name, and a user for this certificate

```shell
-kubectl cnp certificate cluster-cert --cnp-cluster cluster-example --cnp-user appuser
+kubectl cnp certificate cluster-cert --cnp-cluster cluster-example --cnp-user appuser
```

After the secret is created, you can get it using `kubectl`
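The exact command is cut off here, but retrieving the secret presumably looks something like this (an assumption based on the `cluster-cert` name used above):

```shell
kubectl get secret cluster-cert -o yaml
```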
1 change: 1 addition & 0 deletions advocacy_docs/kubernetes/cloud_native_postgresql/e2e.mdx
@@ -44,6 +44,7 @@ and the following suite of E2E tests are performed on that cluster:
* Restore from backup;
* Pod affinity using `NodeSelector`;
* Metrics collection;
* Operator pod deletion;
* Primary endpoint switch in case of failover in less than 10 seconds;
* Primary endpoint switch in case of switchover in less than 20 seconds;
* Recover from a degraded state in less than 60 seconds.
5 changes: 4 additions & 1 deletion advocacy_docs/kubernetes/cloud_native_postgresql/index.mdx
@@ -24,7 +24,9 @@ navigation:
- rolling_update
- backup_recovery
- postgresql_conf
- operator_conf
- storage
- labels_annotations
- samples
- monitoring
- expose_pg_services
@@ -36,6 +38,7 @@ navigation:
- container_images
- operator_capability_levels
- api_reference
- release_notes
- credits

---
@@ -64,7 +67,7 @@ and is available under the [EnterpriseDB Limited Use License](https://www.enterp
You can [evaluate Cloud Native PostgreSQL for free](evaluation.md).
You need a valid license key to use Cloud Native PostgreSQL in production.

-!!! IMPORTANT
+!!! Important
Currently, based on the [Operator Capability Levels model](operator_capability_levels.md),
users can expect a **"Level III - Full Lifecycle"** set of capabilities from the
Cloud Native PostgreSQL Operator.
@@ -11,12 +11,12 @@ product: 'Cloud Native Operator'
The operator can be installed like any other resource in Kubernetes,
through a YAML manifest applied via `kubectl`.

-You can install the [latest operator manifest](https://get.enterprisedb.io/cnp/postgresql-operator-1.2.1.yaml)
+You can install the [latest operator manifest](https://get.enterprisedb.io/cnp/postgresql-operator-1.3.0.yaml)
as follows:

```sh
kubectl apply -f \
-  https://get.enterprisedb.io/cnp/postgresql-operator-1.2.1.yaml
+  https://get.enterprisedb.io/cnp/postgresql-operator-1.3.0.yaml
```

Once you have run the `kubectl` command, Cloud Native PostgreSQL will be installed in your Kubernetes cluster.
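A quick way to verify the deployment, using the namespace and deployment names that appear elsewhere in these docs, is something like:

```shell
kubectl get deployments -n postgresql-operator-system \
  postgresql-operator-controller-manager
```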
@@ -92,3 +92,9 @@ the pod will be rescheduled on another node.

As far as OpenShift is concerned, details might differ depending on the
selected installation method.

!!! Seealso "Operator configuration"
You can change the default behavior of the operator by overriding
some default options. For more information, please refer to the
["Operator configuration"](operator_conf.md) section.

@@ -1,5 +1,5 @@
---
title: "Installation, Configuration and Demployment Demo"
title: "Installation, Configuration and Deployment Demo"
description: "Walk through the process of installing, configuring and deploying the Cloud Native PostgreSQL Operator via a browser-hosted Minikube console"
navTitle: Install, Configure, Deploy
product: 'Cloud Native PostgreSQL Operator'
@@ -21,6 +21,7 @@ Want to see what it takes to get the Cloud Native PostgreSQL Operator up and run
1. Installing the Cloud Native PostgreSQL Operator
2. Deploying a three-node PostgreSQL cluster
3. Installing and using the kubectl-cnp plugin
4. Testing failover to verify the resilience of the cluster

It will take roughly 5-10 minutes to work through.

@@ -64,7 +65,7 @@ You will see one node called `minikube`. If the status isn't yet "Ready", wait f
Now that the Minikube cluster is running, you can proceed with Cloud Native PostgreSQL installation as described in the ["Installation"](installation.md) section:

```shell
-kubectl apply -f https://get.enterprisedb.io/cnp/postgresql-operator-1.2.0.yaml
+kubectl apply -f https://get.enterprisedb.io/cnp/postgresql-operator-1.3.0.yaml
__OUTPUT__
namespace/postgresql-operator-system created
customresourcedefinition.apiextensions.k8s.io/backups.postgresql.k8s.enterprisedb.io created
@@ -164,13 +165,13 @@ metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"postgresql.k8s.enterprisedb.io/v1","kind":"Cluster","metadata":{"annotations":{},"name":"cluster-example","namespace":"default"},"spec":{"instances":3,"primaryUpdateStrategy":"unsupervised","storage":{"size":"1Gi"}}}
creationTimestamp: "2021-04-07T00:33:43Z"
creationTimestamp: "2021-04-27T15:11:21Z"
generation: 1
name: cluster-example
namespace: default
resourceVersion: "1806"
resourceVersion: "2572"
selfLink: /apis/postgresql.k8s.enterprisedb.io/v1/namespaces/default/clusters/cluster-example
-uid: 38ddc347-3f2e-412a-aa14-a26904e1a49e
+uid: 6a693046-a9d0-41b0-ac68-7a96d7e2ff07
spec:
affinity:
topologyKey: ""
@@ -196,21 +197,27 @@ status:
instances: 3
instancesStatus:
healthy:
-- cluster-example-3
- cluster-example-1
- cluster-example-2
+- cluster-example-3
latestGeneratedNode: 3
licenseStatus:
isImplicit: true
isTrial: true
licenseExpiration: "2021-05-07T00:33:43Z"
licenseExpiration: "2021-05-27T15:11:21Z"
licenseStatus: Implicit trial license
repositoryAccess: false
valid: true
phase: Cluster in healthy state
pvcCount: 3
readService: cluster-example-r
readyInstances: 3
secretsResourceVersion:
applicationSecretVersion: "1479"
caSecretVersion: "1475"
replicationSecretVersion: "1477"
serverSecretVersion: "1476"
superuserSecretVersion: "1478"
targetPrimary: cluster-example-1
writeService: cluster-example-rw
```
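The `last-applied-configuration` annotation above records the manifest that created this cluster; rendered as YAML, it is simply:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example
  namespace: default
spec:
  instances: 3
  primaryUpdateStrategy: unsupervised
  storage:
    size: 1Gi
```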
@@ -238,7 +245,7 @@ curl -sSfL \
sudo sh -s -- -b /usr/local/bin
__OUTPUT__
EnterpriseDB/kubectl-cnp info checking GitHub for latest tag
-EnterpriseDB/kubectl-cnp info found version: 1.2.1 for v1.2.1/linux/x86_64
+EnterpriseDB/kubectl-cnp info found version: 1.3.0 for v1.3.0/linux/x86_64
EnterpriseDB/kubectl-cnp info installed /usr/local/bin/kubectl-cnp
```

@@ -247,7 +254,7 @@ The `cnp` command is now available in kubectl:
```shell
kubectl cnp status cluster-example
__OUTPUT__
-Cluster in healthy state
+Cluster in healthy state
Name: cluster-example
Namespace: default
PostgreSQL Image: quay.io/enterprisedb/postgresql:13.2
@@ -256,23 +263,81 @@ Instances: 3
Ready instances: 3

Instances status
-Pod name Current LSN Received LSN Replay LSN System ID Primary Replicating Replay paused Pending restart
--------- ----------- ------------ ---------- --------- ------- ----------- ------------- ---------------
-cluster-example-1 0/6000060 6941211174657425425 ✓ ✗ ✗ ✗
-cluster-example-2 0/6000060 0/6000060 6941211174657425425 ✗ ✓ ✗ ✗
-cluster-example-3 0/6000060 0/6000060 6941211174657425425 ✗ ✓ ✗ ✗
+Pod name Current LSN Received LSN Replay LSN System ID Primary Replicating Replay paused Pending restart Status
+-------- ----------- ------------ ---------- --------- ------- ----------- ------------- --------------- ------
+cluster-example-1 0/5000060 6955855494195015697 ✓ ✗ ✗ ✗ OK
+cluster-example-2 0/5000060 0/5000060 6955855494195015697 ✗ ✓ ✗ ✗ OK
+cluster-example-3 0/5000060 0/5000060 6955855494195015697 ✗ ✓ ✗ ✗ OK
```

!!! Note "There's more"
See [the Cloud Native PostgreSQL Plugin page](cnp-plugin/) for more commands and options.

## Testing failover

As our status checks show, we're running two replicas - if something happens to the primary instance of PostgreSQL, the cluster will fail over to one of them. Let's demonstrate this by killing the primary pod:

```shell
kubectl delete pod --wait=false cluster-example-1
__OUTPUT__
pod "cluster-example-1" deleted
```

This simulates a hard shutdown of the server - a scenario where something has gone wrong.
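If you'd like to watch the failover unfold as it happens, you can keep an eye on the cluster's pods from a second terminal:

```shell
kubectl get pods --watch
```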

Now if we check the status...
```shell
kubectl cnp status cluster-example
__OUTPUT__
Failing over Failing over to cluster-example-2
Name: cluster-example
Namespace: default
PostgreSQL Image: quay.io/enterprisedb/postgresql:13.2
Primary instance: cluster-example-2
Instances: 3
Ready instances: 2

Instances status
Pod name Current LSN Received LSN Replay LSN System ID Primary Replicating Replay paused Pending restart Status
-------- ----------- ------------ ---------- --------- ------- ----------- ------------- --------------- ------
cluster-example-1 - - - - - - - - unable to upgrade connection: container not found ("postgres") -
cluster-example-2 0/7000230 6955855494195015697 ✓ ✗ ✗ ✗ OK
cluster-example-3 0/70000A0 0/70000A0 6955855494195015697 ✗ ✓ ✗ ✗ OK
```

...the failover process has begun, with the second pod promoted to primary. Once the failed pod has restarted, it will become a replica of the new primary:

```shell
kubectl cnp status cluster-example
__OUTPUT__
Cluster in healthy state
Name: cluster-example
Namespace: default
PostgreSQL Image: quay.io/enterprisedb/postgresql:13.2
Primary instance: cluster-example-2
Instances: 3
Ready instances: 3

Instances status
Pod name Current LSN Received LSN Replay LSN System ID Primary Replicating Replay paused Pending restart Status
-------- ----------- ------------ ---------- --------- ------- ----------- ------------- --------------- ------
cluster-example-1 0/7004268 0/7004268 6955855494195015697 ✗ ✓ ✗ ✗ OK
cluster-example-2 0/7004268 6955855494195015697 ✓ ✗ ✗ ✗ OK
cluster-example-3 0/7004268 0/7004268 6955855494195015697 ✗ ✓ ✗ ✗ OK
```


### Further reading

This is all it takes to get a PostgreSQL cluster up and running, but of course there's a lot more possible, and much more that it is prudent to understand before you ever deploy in a production environment!

-- For information on using the Cloud Native PostgreSQL Operator to deploy on public cloud platforms, see the [Cloud Setup](cloud_setup/) section.
+- Deploying on public cloud platforms: see the [Cloud Setup](cloud_setup/) section.

+- Design goals and possibilities offered by the Cloud Native PostgreSQL Operator: check out the [Architecture](architecture/) and [Use cases](use_cases/) sections.

+- Configuring a secure and reliable system: read through the [Security](security/), [Failure Modes](failure_modes/) and [Backup and Recovery](backup_recovery/) sections.

- Webinar: [Watch Gabriele Bartolini discuss and demonstrate Cloud Native PostgreSQL lifecycle management](https://www.youtube.com/watch?v=S-I9y-HnAnI)

-- For the design goals and possibilities offered by the Cloud Native PostgreSQL Operator, check out the [Architecture](architecture/) and [Use cases](use_cases/) sections.
+- Development: [Leonardo Cecchi writes about setting up a local environment using Cloud Native PostgreSQL for application development](https://www.enterprisedb.com/blog/cloud-native-postgresql-application-developers)

-- And for details on what it takes to configure a secure and reliable system, read through the [Security](security/), [Failure Modes](failure_modes/) and [Backup and Recovery](backup_recovery/) sections.
@@ -0,0 +1,84 @@
---
title: 'Labels and annotations'
originalFilePath: 'src/labels_annotations.md'
product: 'Cloud Native Operator'
---

Resources in Kubernetes are organized in a flat structure, with no hierarchy
or intrinsic relationship between them. However, resources and objects can be
linked together and related to one another through **labels** and
**annotations**.

!!! info
For more information, please refer to the Kubernetes documentation on
[annotations](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/) and
[labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/).

In short:

- an annotation is used to assign additional non-identifying information to
resources, with the goal of facilitating integration with external tools
- a label is used to group objects and query them through Kubernetes' native
selector capability

You can select one or more labels and/or annotations you will use
in your Cloud Native PostgreSQL deployments. Then you need to configure the operator
so that when you define these labels and/or annotations in a cluster's metadata,
they are automatically inherited by all resources created by it (including pods).

!!! Note
Label and annotation inheritance is the technique adopted by Cloud Native
PostgreSQL in lieu of alternative approaches such as pod templates.

## Prerequisites

By default, no label or annotation defined in the cluster's metadata is
inherited by the associated resources.
In order to enable label/annotation inheritance, you need to follow the
instructions provided in the ["Operator configuration"](operator_conf.md) section.

Below, we continue with that example, limiting it to the following:

- annotations: `categories`
- labels: `app`, `environment`, and `workload`

!!! Note
Feel free to select the names that most suit your context for both
annotations and labels. Remember that you can also use wildcards
in naming and adopt strategies like `mycompany/*` for all labels
or annotations starting with `mycompany/` to be inherited.
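As a sketch, an operator configuration enabling the inheritance used in this example might look like the following (the ConfigMap name and key names are those described in the "Operator configuration" section; treat this as illustrative rather than definitive):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgresql-operator-controller-manager-config
  namespace: postgresql-operator-system
data:
  # annotations to be inherited by all resources of a cluster
  INHERITED_ANNOTATIONS: categories
  # labels to be inherited by all resources of a cluster
  INHERITED_LABELS: app, environment, workload
```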

## Defining the cluster's metadata

When defining the cluster, **before** any resource is deployed, you can
set the metadata as follows:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
name: cluster-example
annotations:
categories: database
labels:
environment: production
workload: database
app: sso
spec:
# ... <snip>
```

Once the cluster is deployed, you can verify, for example, that the labels
have been correctly set in the pods with:

```shell
kubectl get pods --show-labels
```
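Since the `app` label is inherited by the pods, you can also use it as a selector; for example:

```shell
kubectl get pods -l app=sso
```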

## Current limitations

Cloud Native PostgreSQL does not currently support synchronization of labels
or annotations after a resource has been created. For example, suppose you
deploy a cluster, then add a new annotation to the inheritance list and
define it on the existing cluster: the operator will not automatically set it
on the associated resources that already exist.
@@ -44,6 +44,9 @@ kubectl rollout restart deployment -n [NAMESPACE_NAME_HERE] \
postgresql-operator-controller-manager
```

!!! Seealso "Operator configuration"
For more information, please refer to the ["Operator configuration"](operator_conf.md) section.

The validity of the license key can be checked inside the cluster status.

```sh
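# The rest of this block is truncated in this diff view.
# One way to inspect the license information recorded in the cluster's
# status (an illustrative command, not necessarily the one shown in the docs):
kubectl get cluster [CLUSTER_NAME_HERE] \
  -o jsonpath='{.status.licenseStatus}'
```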
@@ -5,15 +5,15 @@ product: 'Cloud Native Operator'
---

For each PostgreSQL instance, the operator provides an exporter of metrics for
-[Prometheus](https://prometheus.io/) via HTTP, on port 8000.
+[Prometheus](https://prometheus.io/) via HTTP, on port 9187.
The operator comes with a predefined set of metrics, as well as a highly
configurable and customizable system to define additional queries via one or
more `ConfigMap` objects - and, in future versions, `Secret` too.

The exporter can be accessed as follows:

```shell
-curl http://<pod ip>:8000/metrics
+curl http://<pod ip>:9187/metrics
```
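If the pod IP is not directly reachable from your workstation, one alternative (assuming an instance pod named `cluster-example-1`) is to port-forward:

```shell
# forward the metrics port of one instance to localhost
kubectl port-forward cluster-example-1 9187:9187

# then, from another terminal:
curl http://localhost:9187/metrics
```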

All monitoring queries are: