diff --git a/docs/examples/ferretdb/monitoring/builtin-prom-fr.yaml b/docs/examples/ferretdb/monitoring/builtin-prom-fr.yaml new file mode 100644 index 0000000000..df84bc129a --- /dev/null +++ b/docs/examples/ferretdb/monitoring/builtin-prom-fr.yaml @@ -0,0 +1,19 @@ +apiVersion: kubedb.com/v1alpha2 +kind: FerretDB +metadata: + name: builtin-prom-fr + namespace: demo +spec: + version: "1.18.0" + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 500Mi + backend: + externallyManaged: false + deletionPolicy: WipeOut + replicas: 2 + monitor: + agent: prometheus.io/builtin \ No newline at end of file diff --git a/docs/examples/ferretdb/monitoring/coreos-prom-fr.yaml b/docs/examples/ferretdb/monitoring/coreos-prom-fr.yaml index 0be5b868b0..f122b73343 100644 --- a/docs/examples/ferretdb/monitoring/coreos-prom-fr.yaml +++ b/docs/examples/ferretdb/monitoring/coreos-prom-fr.yaml @@ -4,7 +4,7 @@ metadata: name: coreos-prom-fr namespace: demo spec: - version: "1.23.0" + version: "1.18.0" storage: accessModes: - ReadWriteOnce diff --git a/docs/examples/ferretdb/reconfigure-tls/ferretdb.yaml b/docs/examples/ferretdb/reconfigure-tls/ferretdb.yaml new file mode 100644 index 0000000000..526c65c1dc --- /dev/null +++ b/docs/examples/ferretdb/reconfigure-tls/ferretdb.yaml @@ -0,0 +1,16 @@ +apiVersion: kubedb.com/v1alpha2 +kind: FerretDB +metadata: + name: ferretdb + namespace: demo +spec: + version: "1.23.0" + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 500Mi + backend: + externallyManaged: false + replicas: 2 \ No newline at end of file diff --git a/docs/examples/ferretdb/reconfigure-tls/frops-add-tls.yaml b/docs/examples/ferretdb/reconfigure-tls/frops-add-tls.yaml new file mode 100644 index 0000000000..7af90dc77c --- /dev/null +++ b/docs/examples/ferretdb/reconfigure-tls/frops-add-tls.yaml @@ -0,0 +1,16 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: FerretDBOpsRequest +metadata: + name: frops-add-tls + namespace: demo 
+spec: + type: ReconfigureTLS + databaseRef: + name: ferretdb + tls: + issuerRef: + name: ferretdb-ca-issuer + kind: Issuer + apiGroup: "cert-manager.io" + timeout: 5m + apply: IfReady \ No newline at end of file diff --git a/docs/examples/ferretdb/reconfigure-tls/frops-rotate.yaml b/docs/examples/ferretdb/reconfigure-tls/frops-rotate.yaml new file mode 100644 index 0000000000..dd0153c5a2 --- /dev/null +++ b/docs/examples/ferretdb/reconfigure-tls/frops-rotate.yaml @@ -0,0 +1,11 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: FerretDBOpsRequest +metadata: + name: frops-rotate + namespace: demo +spec: + type: ReconfigureTLS + databaseRef: + name: ferretdb + tls: + rotateCertificates: true \ No newline at end of file diff --git a/docs/examples/ferretdb/reconfigure-tls/issuer.yaml b/docs/examples/ferretdb/reconfigure-tls/issuer.yaml new file mode 100644 index 0000000000..21558c9037 --- /dev/null +++ b/docs/examples/ferretdb/reconfigure-tls/issuer.yaml @@ -0,0 +1,8 @@ +apiVersion: cert-manager.io/v1 +kind: Issuer +metadata: + name: ferretdb-ca-issuer + namespace: demo +spec: + ca: + secretName: ferretdb-ca \ No newline at end of file diff --git a/docs/guides/ferretdb/monitoring/using-builtin-prometheus.md b/docs/guides/ferretdb/monitoring/using-builtin-prometheus.md new file mode 100644 index 0000000000..c1e04d313e --- /dev/null +++ b/docs/guides/ferretdb/monitoring/using-builtin-prometheus.md @@ -0,0 +1,366 @@ +--- +title: Monitor FerretDB using Builtin Prometheus Discovery +menu: + docs_{{ .version }}: + identifier: fr-using-builtin-prometheus-monitoring + name: Builtin Prometheus + parent: fr-monitoring-ferretdb + weight: 20 +menu_name: docs_{{ .version }} +section_menu_id: guides +--- + +> New to KubeDB? Please start [here](/docs/README.md). + +# Monitoring FerretDB with builtin Prometheus + +This tutorial will show you how to monitor FerretDB database using builtin [Prometheus](https://github.com/prometheus/prometheus) scraper. 
+ +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +- Install KubeDB operator in your cluster following the steps [here](/docs/setup/README.md). + +- If you are not familiar with how to configure Prometheus to scrape metrics from various Kubernetes resources, please read the tutorial from [here](https://github.com/appscode/third-party-tools/tree/master/monitoring/prometheus/builtin). + +- To learn how Prometheus monitoring works with KubeDB in general, please visit [here](/docs/guides/ferretdb/monitoring/overview.md). + +- To keep Prometheus resources isolated, we are going to use a separate namespace called `monitoring` to deploy respective monitoring resources. We are going to deploy database in `demo` namespace. + + ```bash + $ kubectl create ns monitoring + namespace/monitoring created + + $ kubectl create ns demo + namespace/demo created + ``` + +> Note: YAML files used in this tutorial are stored in [docs/examples/ferretdb](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/ferretdb) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs). + +## Deploy FerretDB with Monitoring Enabled + +At first, let's deploy a FerretDB with monitoring enabled. Below is the FerretDB object that we are going to create. 
+ +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: FerretDB +metadata: + name: builtin-prom-fr + namespace: demo +spec: + version: "1.18.0" + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 500Mi + backend: + externallyManaged: false + deletionPolicy: WipeOut + replicas: 2 + monitor: + agent: prometheus.io/builtin +``` + +Here, + +- `spec.monitor.agent: prometheus.io/builtin` specifies that we are going to monitor this server using builtin Prometheus scraper. + +Let's create the FerretDB crd we have shown above. + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/ferretdb/monitoring/builtin-prom-fr.yaml +ferretdb.kubedb.com/builtin-prom-fr created +``` + +Now, wait for the database to go into `Running` state. + +```bash +$ kubectl get fr -n demo builtin-prom-fr +NAME NAMESPACE VERSION STATUS AGE +builtin-prom-fr demo 1.18.0 Ready 2m41s +``` + +KubeDB will create a separate stats service with name `{FerretDB crd name}-stats` for monitoring purpose. + +```bash +$ kubectl get svc -n demo --selector="app.kubernetes.io/instance=builtin-prom-fr" +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +builtin-prom-fr ClusterIP 10.96.10.31 27017/TCP 3m14s +builtin-prom-fr-stats ClusterIP 10.96.216.137 56790/TCP 3m14s +``` + +Here, `builtin-prom-fr-stats` service has been created for monitoring purpose. Let's describe the service. 
+ +```bash +$ kubectl describe svc -n demo builtin-prom-fr-stats +Name: builtin-prom-fr-stats +Namespace: demo +Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=builtin-prom-fr + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=ferretdbs.kubedb.com + kubedb.com/role=stats +Annotations: monitoring.appscode.com/agent: prometheus.io/builtin + prometheus.io/path: /debug/metrics + prometheus.io/port: 56790 + prometheus.io/scrape: true +Selector: app.kubernetes.io/instance=builtin-prom-fr,app.kubernetes.io/managed-by=kubedb.com,app.kubernetes.io/name=ferretdbs.kubedb.com +Type: ClusterIP +IP Family Policy: SingleStack +IP Families: IPv4 +IP: 10.96.216.137 +IPs: 10.96.216.137 +Port: metrics 56790/TCP +TargetPort: metrics/TCP +Endpoints: 10.244.0.56:8080,10.244.0.57:8080 +Session Affinity: None +Events: +``` + +You can see that the service contains following annotations. + +```bash +prometheus.io/path: /debug/metrics +prometheus.io/port: 56790 +prometheus.io/scrape: true +``` + +The Prometheus server will discover the service endpoint using these specifications and will scrape metrics from the exporter. + +## Configure Prometheus Server + +Now, we have to configure a Prometheus scraping job to scrape the metrics using this service. We are going to configure scraping job similar to this [kubernetes-service-endpoints](https://github.com/appscode/third-party-tools/tree/master/monitoring/prometheus/builtin#kubernetes-service-endpoints) job that scrapes metrics from endpoints of a service. + +Let's configure a Prometheus scraping job to collect metrics from this service. + +```yaml +- job_name: 'kubedb-databases' + honor_labels: true + scheme: http + kubernetes_sd_configs: + - role: endpoints + # by default Prometheus server select all Kubernetes services as possible target. 
+ # relabel_config is used to filter only desired endpoints + relabel_configs: + # keep only those services that has "prometheus.io/scrape","prometheus.io/path" and "prometheus.io/port" anootations + - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape, __meta_kubernetes_service_annotation_prometheus_io_port] + separator: ; + regex: true;(.*) + action: keep + # currently KubeDB supported databases uses only "http" scheme to export metrics. so, drop any service that uses "https" scheme. + - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme] + action: drop + regex: https + # only keep the stats services created by KubeDB for monitoring purpose which has "-stats" suffix + - source_labels: [__meta_kubernetes_service_name] + separator: ; + regex: (.*-stats) + action: keep + # service created by KubeDB will have "app.kubernetes.io/name" and "app.kubernetes.io/instance" annotations. keep only those services that have these annotations. + - source_labels: [__meta_kubernetes_service_label_app_kubernetes_io_name] + separator: ; + regex: (.*) + action: keep + # read the metric path from "prometheus.io/path: " annotation + - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path] + action: replace + target_label: __metrics_path__ + regex: (.+) + # read the port from "prometheus.io/port: " annotation and update scraping address accordingly + - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port] + action: replace + target_label: __address__ + regex: ([^:]+)(?::\d+)?;(\d+) + replacement: $1:$2 + # add service namespace as label to the scraped metrics + - source_labels: [__meta_kubernetes_namespace] + separator: ; + regex: (.*) + target_label: namespace + replacement: $1 + action: replace + # add service name as a label to the scraped metrics + - source_labels: [__meta_kubernetes_service_name] + separator: ; + regex: (.*) + target_label: service + replacement: $1 + action: replace + # 
add stats service's labels to the scraped metrics + - action: labelmap + regex: __meta_kubernetes_service_label_(.+) +``` + +### Configure Existing Prometheus Server + +If you already have a Prometheus server running, you have to add above scraping job in the `ConfigMap` used to configure the Prometheus server. Then, you have to restart it for the updated configuration to take effect. + +>If you don't use a persistent volume for Prometheus storage, you will lose your previously scraped data on restart. + +### Deploy New Prometheus Server + +If you don't have any existing Prometheus server running, you have to deploy one. In this section, we are going to deploy a Prometheus server in `monitoring` namespace to collect metrics using this stats service. + +**Create ConfigMap:** + +At first, create a ConfigMap with the scraping configuration. Bellow, the YAML of ConfigMap that we are going to create in this tutorial. + +```yaml +apiVersion: v1 +kind: ConfigMap +metadata: + name: prometheus-config + labels: + app: prometheus-demo + namespace: monitoring +data: + prometheus.yml: |- + global: + scrape_interval: 5s + evaluation_interval: 5s + scrape_configs: + - job_name: 'kubedb-databases' + honor_labels: true + scheme: http + kubernetes_sd_configs: + - role: endpoints + # by default Prometheus server select all Kubernetes services as possible target. + # relabel_config is used to filter only desired endpoints + relabel_configs: + # keep only those services that has "prometheus.io/scrape","prometheus.io/path" and "prometheus.io/port" anootations + - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape, __meta_kubernetes_service_annotation_prometheus_io_port] + separator: ; + regex: true;(.*) + action: keep + # currently KubeDB supported databases uses only "http" scheme to export metrics. so, drop any service that uses "https" scheme. 
+ - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme] + action: drop + regex: https + # only keep the stats services created by KubeDB for monitoring purpose which has "-stats" suffix + - source_labels: [__meta_kubernetes_service_name] + separator: ; + regex: (.*-stats) + action: keep + # service created by KubeDB will have "app.kubernetes.io/name" and "app.kubernetes.io/instance" annotations. keep only those services that have these annotations. + - source_labels: [__meta_kubernetes_service_label_app_kubernetes_io_name] + separator: ; + regex: (.*) + action: keep + # read the metric path from "prometheus.io/path: " annotation + - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path] + action: replace + target_label: __metrics_path__ + regex: (.+) + # read the port from "prometheus.io/port: " annotation and update scraping address accordingly + - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port] + action: replace + target_label: __address__ + regex: ([^:]+)(?::\d+)?;(\d+) + replacement: $1:$2 + # add service namespace as label to the scraped metrics + - source_labels: [__meta_kubernetes_namespace] + separator: ; + regex: (.*) + target_label: namespace + replacement: $1 + action: replace + # add service name as a label to the scraped metrics + - source_labels: [__meta_kubernetes_service_name] + separator: ; + regex: (.*) + target_label: service + replacement: $1 + action: replace + # add stats service's labels to the scraped metrics + - action: labelmap + regex: __meta_kubernetes_service_label_(.+) +``` + +Let's create above `ConfigMap`, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/monitoring/builtin-prometheus/prom-config.yaml +configmap/prometheus-config created +``` + +**Create RBAC:** + +If you are using an RBAC enabled cluster, you have to give necessary RBAC permissions for Prometheus. 
Let's create necessary RBAC stuffs for Prometheus, + +```bash +$ kubectl apply -f https://github.com/appscode/third-party-tools/raw/master/monitoring/prometheus/builtin/artifacts/rbac.yaml +clusterrole.rbac.authorization.k8s.io/prometheus created +serviceaccount/prometheus created +clusterrolebinding.rbac.authorization.k8s.io/prometheus created +``` + +>YAML for the RBAC resources created above can be found [here](https://github.com/appscode/third-party-tools/blob/master/monitoring/prometheus/builtin/artifacts/rbac.yaml). + +**Deploy Prometheus:** + +Now, we are ready to deploy Prometheus server. We are going to use following [deployment](https://github.com/appscode/third-party-tools/blob/master/monitoring/prometheus/builtin/artifacts/deployment.yaml) to deploy Prometheus server. + +Let's deploy the Prometheus server. + +```bash +$ kubectl apply -f https://github.com/appscode/third-party-tools/raw/master/monitoring/prometheus/builtin/artifacts/deployment.yaml +deployment.apps/prometheus created +``` + +### Verify Monitoring Metrics + +Prometheus server is listening to port `9090`. We are going to use [port forwarding](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) to access Prometheus dashboard. + +At first, let's check if the Prometheus pod is in `Running` state. + +```bash +$ kubectl get pod -n monitoring -l=app=prometheus +NAME READY STATUS RESTARTS AGE +prometheus-d64b668fb-vkrfz 1/1 Running 0 21s +``` + +Now, run following command on a separate terminal to forward 9090 port of `prometheus-d64b668fb-vkrfz` pod, + +```bash +$ kubectl port-forward -n monitoring prometheus-d64b668fb-vkrfz 9090 +Forwarding from 127.0.0.1:9090 -> 9090 +Forwarding from [::1]:9090 -> 9090 +``` + +Now, we can access the dashboard at `localhost:9090`. Open [http://localhost:9090](http://localhost:9090) in your browser. You should see the endpoint of `builtin-prom-fr-stats` service as one of the targets. + +

+   +

+ +Check the labels marked with red rectangle. These labels confirm that the metrics are coming from `FerretDB` database `builtin-prom-fr` through stats service `builtin-prom-fr-stats`. + +Now, you can view the collected metrics and create a graph from homepage of this Prometheus dashboard. You can also use this Prometheus server as data source for [Grafana](https://grafana.com/) and create beautiful dashboard with collected metrics. + +## Cleaning up + +To cleanup the Kubernetes resources created by this tutorial, run following commands + +```bash +kubectl delete -n demo fr/builtin-prom-fr + +kubectl delete -n monitoring deployment.apps/prometheus + +kubectl delete -n monitoring clusterrole.rbac.authorization.k8s.io/prometheus +kubectl delete -n monitoring serviceaccount/prometheus +kubectl delete -n monitoring clusterrolebinding.rbac.authorization.k8s.io/prometheus + +kubectl delete ns demo +kubectl delete ns monitoring +``` + +## Next Steps + + +- Monitor your FerretDB database with KubeDB using [out-of-the-box prometheus-Operator](/docs/guides/ferretdb/monitoring/using-prometheus-operator.md). +- Detail concepts of [FerretDB object](/docs/guides/ferretdb/concepts/ferretdb.md). +- Detail concepts of [FerretDBVersion object](/docs/guides/ferretdb/concepts/catalog.md). +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md). diff --git a/docs/guides/ferretdb/monitoring/using-prometheus-operator.md b/docs/guides/ferretdb/monitoring/using-prometheus-operator.md index 303a39920d..7007a211d0 100644 --- a/docs/guides/ferretdb/monitoring/using-prometheus-operator.md +++ b/docs/guides/ferretdb/monitoring/using-prometheus-operator.md @@ -174,7 +174,7 @@ metadata: name: coreos-prom-fr namespace: demo spec: - version: "1.23.0" + version: "1.18.0" storage: accessModes: - ReadWriteOnce @@ -212,7 +212,7 @@ Now, wait for the database to go into `Running` state. 
```bash $ kubectl get fr -n demo coreos-prom-fr NAME NAMESPACE VERSION STATUS AGE -coreos-prom-fr demo 1.23.0 Ready 111s +coreos-prom-fr demo 1.18.0 Ready 111s ``` KubeDB will create a separate stats service with name `{FerretDB crd name}-stats` for monitoring purpose. @@ -340,7 +340,7 @@ Forwarding from [::1]:9090 -> 9090 Now, we can access the dashboard at `localhost:9090`. Open [http://localhost:9090](http://localhost:9090) in your browser. You should see `metrics` endpoint of `coreos-prom-fr-stats` service as one of the targets.

-  Prometheus Target +  Prometheus Target

Check the `endpoint` and `service` labels marked by the red rectangles. It verifies that the target is our expected database. Now, you can view the collected metrics and create a graph from homepage of this Prometheus dashboard. You can also use this Prometheus server as data source for [Grafana](https://grafana.com/) and create a beautiful dashboard with collected metrics. diff --git a/docs/guides/ferretdb/reconfigure-tls/_index.md b/docs/guides/ferretdb/reconfigure-tls/_index.md new file mode 100644 index 0000000000..071cb2eb3a --- /dev/null +++ b/docs/guides/ferretdb/reconfigure-tls/_index.md @@ -0,0 +1,10 @@ +--- +title: Reconfigure FerretDB TLS/SSL +menu: + docs_{{ .version }}: + identifier: fr-reconfigure-tls + name: Reconfigure TLS/SSL + parent: fr-ferretdb-guides + weight: 46 +menu_name: docs_{{ .version }} +--- diff --git a/docs/guides/ferretdb/reconfigure-tls/overview.md b/docs/guides/ferretdb/reconfigure-tls/overview.md new file mode 100644 index 0000000000..d47fe6b2f4 --- /dev/null +++ b/docs/guides/ferretdb/reconfigure-tls/overview.md @@ -0,0 +1,54 @@ +--- +title: Reconfiguring TLS of FerretDB +menu: + docs_{{ .version }}: + identifier: fr-reconfigure-tls-overview + name: Overview + parent: fr-reconfigure-tls + weight: 10 +menu_name: docs_{{ .version }} +section_menu_id: guides +--- + +> New to KubeDB? Please start [here](/docs/README.md). + +# Reconfiguring TLS of FerretDB + +This guide will give an overview on how KubeDB Ops-manager operator reconfigures TLS configuration i.e. add TLS, remove TLS, update issuer/cluster issuer or Certificates and rotate the certificates of a `FerretDB`. 
+ +## Before You Begin + +- You should be familiar with the following `KubeDB` concepts: + - [FerretDB](/docs/guides/ferretdb/concepts/ferretdb.md) + - [FerretDBOpsRequest](/docs/guides/ferretdb/concepts/opsrequest.md) + +## How Reconfiguring FerretDB TLS Configuration Process Works + +The following diagram shows how KubeDB Ops-manager operator reconfigures TLS of a `FerretDB`. Open the image in a new tab to see the enlarged version. + +
+  Reconfiguring TLS process of FerretDB +
Fig: Reconfiguring TLS process of FerretDB
+
+ +The Reconfiguring FerretDB TLS process consists of the following steps: + +1. At first, a user creates a `FerretDB` Custom Resource Object (CRO). + +2. `KubeDB` Provisioner operator watches the `FerretDB` CRO. + +3. When the operator finds a `FerretDB` CR, it creates `PetSet` and related necessary stuff like secrets, services, etc. + +4. Then, in order to reconfigure the TLS configuration of the `FerretDB` the user creates a `FerretDBOpsRequest` CR with desired information. + +5. `KubeDB` Ops-manager operator watches the `FerretDBOpsRequest` CR. + +6. When it finds a `FerretDBOpsRequest` CR, it pauses the `FerretDB` object which is referred from the `FerretDBOpsRequest`. So, the `KubeDB` Provisioner operator doesn't perform any operations on the `FerretDB` object during the reconfiguring TLS process. + +7. Then the `KubeDB` Ops-manager operator will add, remove, update or rotate TLS configuration based on the Ops Request yaml. + +8. Then the `KubeDB` Ops-manager operator will restart all the Pods of the ferretdb so that they restart with the new TLS configuration defined in the `FerretDBOpsRequest` CR. + +9. After the successful reconfiguring of the `FerretDB` TLS, the `KubeDB` Ops-manager operator resumes the `FerretDB` object so that the `KubeDB` Provisioner operator resumes its usual operations. + +In the next docs, we are going to show a step-by-step guide on reconfiguring TLS configuration of a FerretDB using `FerretDBOpsRequest` CRD. 
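The trigger in step 4 above is a `FerretDBOpsRequest` CR. As a quick reference, a minimal `ReconfigureTLS` request looks like the sketch below; the database and issuer names are illustrative placeholders here, and the full, concrete walkthrough follows in the next doc:

```yaml
apiVersion: ops.kubedb.com/v1alpha1
kind: FerretDBOpsRequest
metadata:
  name: frops-reconfigure-tls
  namespace: demo
spec:
  type: ReconfigureTLS            # add/remove/update/rotate is derived from spec.tls
  databaseRef:
    name: ferretdb                # the FerretDB object to reconfigure (step 4)
  tls:
    issuerRef:                    # cert-manager Issuer/ClusterIssuer that signs the certificates
      apiGroup: cert-manager.io
      kind: Issuer
      name: ferretdb-ca-issuer
```

Once this CR is created, the Ops-manager operator carries out steps 5 through 9 without further input from the user.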
\ No newline at end of file
diff --git a/docs/guides/ferretdb/reconfigure-tls/reconfigure-tls.md b/docs/guides/ferretdb/reconfigure-tls/reconfigure-tls.md
new file mode 100644
index 0000000000..ac83465570
--- /dev/null
+++ b/docs/guides/ferretdb/reconfigure-tls/reconfigure-tls.md
@@ -0,0 +1,1090 @@
+---
+title: Reconfigure FerretDB TLS/SSL Encryption
+menu:
+  docs_{{ .version }}:
+    identifier: fr-reconfigure-tls-rs
+    name: Reconfigure FerretDB TLS/SSL Encryption
+    parent: fr-reconfigure-tls
+    weight: 10
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Reconfigure FerretDB TLS/SSL (Transport Encryption)
+
+KubeDB supports reconfiguring TLS, i.e. adding, removing, updating, and rotating the TLS/SSL certificates of an existing FerretDB database via a FerretDBOpsRequest. This tutorial will show you how to use KubeDB to reconfigure TLS/SSL encryption.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Install [`cert-manager`](https://cert-manager.io/docs/installation/) v1.0.0 or later in your cluster to manage your SSL/TLS certificates.
+
+- Now, install the KubeDB CLI on your workstation and the KubeDB operator in your cluster following the steps [here](/docs/setup/README.md).
+
+- To keep things isolated, this tutorial uses a separate namespace called `demo` throughout.
+
+  ```bash
+  $ kubectl create ns demo
+  namespace/demo created
+  ```
+
+> Note: YAML files used in this tutorial are stored in the [docs/examples/ferretdb](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/ferretdb) folder of the GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+ +## Add TLS to a FerretDB + +Here, We are going to create a FerretDB database without TLS and then reconfigure the ferretdb to use TLS. + +### Deploy FerretDB without TLS + +In this section, we are going to deploy a FerretDB without TLS. In the next few sections we will reconfigure TLS using `FerretDBOpsRequest` CRD. Below is the YAML of the `FerretDB` CR that we are going to create, + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: FerretDB +metadata: + name: ferretdb + namespace: demo +spec: + version: "1.23.0" + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 500Mi + backend: + externallyManaged: false + replicas: 2 +``` + +Let's create the `FerretDB` CR we have shown above, + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/ferretdb/reconfigure-tls/ferretdb.yaml +ferretdb.kubedb.com/ferretdb created +``` + +Now, wait until `ferretdb` has status `Ready`. i.e, + +```bash +$ kubectl get fr -n demo +NAME NAMESPACE VERSION STATUS AGE +ferretdb demo 1.23.0 Ready 75s + +$ kubectl dba describe ferretdb ferretdb -n demo +Name: ferretdb +Namespace: demo +Labels: +Annotations: +API Version: kubedb.com/v1alpha2 +Kind: FerretDB +Metadata: + Creation Timestamp: 2024-10-17T11:04:08Z + Finalizers: + kubedb.com + Generation: 4 + Resource Version: 158199 + UID: 7da85335-bac0-4247-ad69-85a7c44831df +Spec: + Auth Secret: + Name: ferretdb-auth + Backend: + Externally Managed: false + Linked DB: ferretdb + Postgres Ref: + Name: ferretdb-pg-backend + Namespace: demo + Version: 13.13 + Deletion Policy: WipeOut + Health Checker: + Failure Threshold: 1 + Period Seconds: 10 + Timeout Seconds: 10 + Pod Template: + Controller: + Metadata: + Spec: + Containers: + Name: ferretdb + Resources: + Limits: + Memory: 1Gi + Requests: + Cpu: 500m + Memory: 1Gi + Security Context: + Allow Privilege Escalation: false + Capabilities: + Drop: + ALL + Run As Group: 1000 + Run As Non Root: true + Run As User: 1000 
+ Seccomp Profile: + Type: RuntimeDefault + Pod Placement Policy: + Name: default + Security Context: + Fs Group: 1000 + Replicas: 2 + Ssl Mode: disabled + Storage: + Access Modes: + ReadWriteOnce + Resources: + Requests: + Storage: 500Mi + Storage Type: Durable + Version: 1.23.0 +Status: + Conditions: + Last Transition Time: 2024-10-17T11:04:08Z + Message: The KubeDB operator has started the provisioning of FerretDB: demo/ferretdb + Observed Generation: 2 + Reason: DatabaseProvisioningStartedSuccessfully + Status: True + Type: ProvisioningStarted + Last Transition Time: 2024-10-17T11:05:04Z + Message: All replicas are ready for FerretDB demo/ferretdb + Observed Generation: 4 + Reason: AllReplicasReady + Status: True + Type: ReplicaReady + Last Transition Time: 2024-10-17T11:05:14Z + Message: The FerretDB: demo/ferretdb is accepting client requests. + Observed Generation: 4 + Reason: DatabaseAcceptingConnectionRequest + Status: True + Type: AcceptingConnection + Last Transition Time: 2024-10-17T11:05:14Z + Message: The FerretDB: demo/ferretdb is ready. + Observed Generation: 4 + Reason: ReadinessCheckSucceeded + Status: True + Type: Ready + Last Transition Time: 2024-10-17T11:05:14Z + Message: The FerretDB: demo/ferretdb is successfully provisioned. + Observed Generation: 4 + Reason: DatabaseSuccessfullyProvisioned + Status: True + Type: Provisioned + Phase: Ready +Events: +``` + +### Create Issuer/ ClusterIssuer + +Now, We are going to create an example `Issuer` that will be used to enable SSL/TLS in FerretDB. Alternatively, you can follow this [cert-manager tutorial](https://cert-manager.io/docs/configuration/ca/) to create your own `Issuer`. + +- Start off by generating a ca certificates using openssl. 
+
+```bash
+$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ./ca.key -out ./ca.crt -subj "/CN=ca/O=kubedb"
+Generating a RSA private key
+................+++++
+........................+++++
+writing new private key to './ca.key'
+-----
+```
+
+- Now we are going to create a ca-secret using the certificate files that we have just generated.
+
+```bash
+$ kubectl create secret tls ferretdb-ca \
+     --cert=ca.crt \
+     --key=ca.key \
+     --namespace=demo
+secret/ferretdb-ca created
+```
+
+Now, let's create an `Issuer` using the `ferretdb-ca` secret that we have just created. The `YAML` file looks like this:
+
+```yaml
+apiVersion: cert-manager.io/v1
+kind: Issuer
+metadata:
+  name: ferretdb-ca-issuer
+  namespace: demo
+spec:
+  ca:
+    secretName: ferretdb-ca
+```
+
+Let's apply the `YAML` file:
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/ferretdb/reconfigure-tls/issuer.yaml
+issuer.cert-manager.io/ferretdb-ca-issuer created
+```
+
+### Create FerretDBOpsRequest
+
+In order to add TLS to the FerretDB, we have to create a `FerretDBOpsRequest` CRO with our created issuer. Below is the YAML of the `FerretDBOpsRequest` CRO that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: FerretDBOpsRequest
+metadata:
+  name: frops-add-tls
+  namespace: demo
+spec:
+  type: ReconfigureTLS
+  databaseRef:
+    name: ferretdb
+  tls:
+    issuerRef:
+      name: ferretdb-ca-issuer
+      kind: Issuer
+      apiGroup: "cert-manager.io"
+  timeout: 5m
+  apply: IfReady
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing the reconfigure TLS operation on the `ferretdb` database.
+- `spec.type` specifies that we are performing `ReconfigureTLS` on our database.
+- `spec.tls.issuerRef` specifies the issuer name, kind and api group.
+- `spec.tls.certificates` specifies the certificates. You can learn more about this field from [here](/docs/guides/ferretdb/concepts/ferretdb.md#spectls).
+- `spec.tls.sslMode` is the SSL mode of the server. You can see the details [here](/docs/guides/ferretdb/concepts/ferretdb.md#specsslmode).
+- The meaning of the `spec.timeout` & `spec.apply` fields can be found [here](/docs/guides/ferretdb/concepts/opsrequest.md#spectimeout).
+
+Let's create the `FerretDBOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/ferretdb/reconfigure-tls/frops-add-tls.yaml
+ferretdbopsrequest.ops.kubedb.com/frops-add-tls created
+```
+
+#### Verify TLS Enabled Successfully
+
+Let's wait for the `FerretDBOpsRequest` to be `Successful`. Run the following command to watch the `FerretDBOpsRequest` CRO,
+
+```bash
+$ watch kubectl get ferretdbopsrequest -n demo
+Every 2.0s: kubectl get ferretdbopsrequest -n demo
+NAME            TYPE             STATUS       AGE
+frops-add-tls   ReconfigureTLS   Successful   13m
+```
+
+We can see from the above output that the `FerretDBOpsRequest` has succeeded. If we describe the `FerretDBOpsRequest`, we will get an overview of the steps that were followed.
+ +```bash +$ kubectl describe ferretdbopsrequest -n demo frops-add-tls +Name: frops-add-tls +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: FerretDBOpsRequest +Metadata: + Creation Timestamp: 2024-10-17T11:15:12Z + Generation: 1 + Resource Version: 159329 + UID: 071189ab-275f-4a25-99b9-72da3fa2fb6a +Spec: + Apply: IfReady + Database Ref: + Name: ferretdb + Timeout: 5m + Tls: + Issuer Ref: + API Group: cert-manager.io + Kind: Issuer + Name: ferretdb-ca-issuer + Type: ReconfigureTLS +Status: + Conditions: + Last Transition Time: 2024-10-17T11:15:12Z + Message: FerretDB ops-request has started to reconfigure tls for FerretDB nodes + Observed Generation: 1 + Reason: ReconfigureTLS + Status: True + Type: ReconfigureTLS + Last Transition Time: 2024-10-17T11:15:15Z + Message: Successfully paused database + Observed Generation: 1 + Reason: DatabasePauseSucceeded + Status: True + Type: DatabasePauseSucceeded + Last Transition Time: 2024-10-17T11:15:20Z + Message: get certificate; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: GetCertificate + Last Transition Time: 2024-10-17T11:15:20Z + Message: ready condition; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: ReadyCondition + Last Transition Time: 2024-10-17T11:15:20Z + Message: issuing condition; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: IssuingCondition + Last Transition Time: 2024-10-17T11:15:20Z + Message: Successfully synced all certificates + Observed Generation: 1 + Reason: CertificateSynced + Status: True + Type: CertificateSynced + Last Transition Time: 2024-10-17T11:15:25Z + Message: successfully reconciled the FerretDB with tls configuration + Observed Generation: 1 + Reason: UpdatePetSets + Status: True + Type: UpdatePetSets + Last Transition Time: 2024-10-17T11:15:30Z + Message: get pod; ConditionStatus:True; PodName:ferretdb-0 + Observed Generation: 1 + Status: True + Type: GetPod--ferretdb-0 + Last 
Transition Time: 2024-10-17T11:15:31Z + Message: evict pod; ConditionStatus:True; PodName:ferretdb-0 + Observed Generation: 1 + Status: True + Type: EvictPod--ferretdb-0 + Last Transition Time: 2024-10-17T11:15:35Z + Message: check pod running; ConditionStatus:True; PodName:ferretdb-0 + Observed Generation: 1 + Status: True + Type: CheckPodRunning--ferretdb-0 + Last Transition Time: 2024-10-17T11:15:40Z + Message: get pod; ConditionStatus:True; PodName:ferretdb-1 + Observed Generation: 1 + Status: True + Type: GetPod--ferretdb-1 + Last Transition Time: 2024-10-17T11:15:41Z + Message: evict pod; ConditionStatus:True; PodName:ferretdb-1 + Observed Generation: 1 + Status: True + Type: EvictPod--ferretdb-1 + Last Transition Time: 2024-10-17T11:15:45Z + Message: check pod running; ConditionStatus:True; PodName:ferretdb-1 + Observed Generation: 1 + Status: True + Type: CheckPodRunning--ferretdb-1 + Last Transition Time: 2024-10-17T11:15:50Z + Message: Successfully restarted all nodes + Observed Generation: 1 + Reason: RestartNodes + Status: True + Type: RestartNodes + Last Transition Time: 2024-10-17T11:15:51Z + Message: Successfully completed the ReconfigureTLS for FerretDB + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Starting 13m KubeDB Ops-manager Operator Start processing for FerretDBOpsRequest: demo/frops-add-tls + Normal Starting 13m KubeDB Ops-manager Operator Pausing FerretDB database: demo/ferretdb + Normal Successful 13m KubeDB Ops-manager Operator Successfully paused FerretDB database: demo/ferretdb for FerretDBOpsRequest: frops-add-tls + Warning get certificate; ConditionStatus:True 13m KubeDB Ops-manager Operator get certificate; ConditionStatus:True + Warning ready condition; ConditionStatus:True 13m KubeDB Ops-manager Operator ready condition; ConditionStatus:True + Warning issuing condition; 
ConditionStatus:True 13m KubeDB Ops-manager Operator issuing condition; ConditionStatus:True + Warning get certificate; ConditionStatus:True 13m KubeDB Ops-manager Operator get certificate; ConditionStatus:True + Warning ready condition; ConditionStatus:True 13m KubeDB Ops-manager Operator ready condition; ConditionStatus:True + Warning issuing condition; ConditionStatus:True 13m KubeDB Ops-manager Operator issuing condition; ConditionStatus:True + Normal CertificateSynced 13m KubeDB Ops-manager Operator Successfully synced all certificates + Normal UpdatePetSets 13m KubeDB Ops-manager Operator successfully reconciled the FerretDB with tls configuration + Warning get pod; ConditionStatus:True; PodName:ferretdb-0 13m KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:ferretdb-0 + Warning evict pod; ConditionStatus:True; PodName:ferretdb-0 13m KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:ferretdb-0 + Warning check pod running; ConditionStatus:True; PodName:ferretdb-0 13m KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:ferretdb-0 + Warning get pod; ConditionStatus:True; PodName:ferretdb-1 13m KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:ferretdb-1 + Warning evict pod; ConditionStatus:True; PodName:ferretdb-1 13m KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:ferretdb-1 + Warning check pod running; ConditionStatus:True; PodName:ferretdb-1 13m KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:ferretdb-1 + Normal RestartNodes 13m KubeDB Ops-manager Operator Successfully restarted all nodes + Normal Starting 13m KubeDB Ops-manager Operator Resuming FerretDB database: demo/ferretdb + Normal Successful 13m KubeDB Ops-manager Operator Successfully resumed FerretDB database: demo/ferretdb for FerretDBOpsRequest: frops-add-tls +``` + +Now let's connect with this ferretdb with certs. 
We need to save the client certificate and key to two separate files and combine them into a pem file. +Additionally, to verify the server, we need to store ca.crt. + +```bash +$ kubectl get secrets -n demo ferretdb-client-cert -o jsonpath='{.data.tls\.crt}' | base64 -d > client.crt +$ kubectl get secrets -n demo ferretdb-client-cert -o jsonpath='{.data.tls\.key}' | base64 -d > client.key +$ kubectl get secrets -n demo ferretdb-client-cert -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt +$ cat client.crt client.key > client.pem +``` + +Now, we can connect to our FerretDB with these files using the mongosh client. + +```bash +$ kubectl get secrets -n demo ferretdb-auth -o jsonpath='{.data.username}' | base64 -d +postgres +$ kubectl get secrets -n demo ferretdb-auth -o jsonpath='{.data.password}' | base64 -d +l*jGp8u*El8WRSDJ + +$ kubectl port-forward svc/ferretdb -n demo 27017 +Forwarding from 127.0.0.1:27017 -> 27018 +Forwarding from [::1]:27017 -> 27018 +Handling connection for 27017 +Handling connection for 27017 +``` + +Now in another terminal + +```bash +$ mongosh 'mongodb://postgres:l*jGp8u*El8WRSDJ@localhost:27017/ferretdb?authMechanism=PLAIN&tls=true&tlsCertificateKeyFile=./client.pem&tlsCaFile=./ca.crt' +Current Mongosh Log ID: 65efeea2a3347fff66d04c70 +Connecting to: mongodb://<credentials>@localhost:27017/ferretdb?authMechanism=PLAIN&directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+2.1.5 +Using MongoDB: 7.0.42 +Using Mongosh: 2.1.5 + +For mongosh info see: https://docs.mongodb.com/mongodb-shell/ + +------ + The server generated these startup warnings when booting + 2024-03-12T05:56:50.979Z: Powered by FerretDB v1.23.0 and PostgreSQL 13.13 on x86_64-pc-linux-musl, compiled by gcc. + 2024-03-12T05:56:50.979Z: Please star us on GitHub: https://github.com/FerretDB/FerretDB. + 2024-03-12T05:56:50.979Z: The telemetry state is undecided. + 2024-03-12T05:56:50.979Z: Read more about FerretDB telemetry and how to opt out at https://beacon.ferretdb.io. 
+------ + +ferretdb> +``` +So, here we have connected using the client certificate and the connection is TLS-secured. So, we can safely assume that TLS enabling was successful. + +## Rotate Certificate + +Now we are going to rotate the certificate of this database. First, let's check the current expiration date of the certificate. + +```bash +$ openssl x509 -in ./ca.crt -inform PEM -enddate -nameopt RFC2253 -noout +notAfter=Oct 14 10:20:07 2025 GMT +``` + +So, the certificate will expire at `Oct 14 10:20:07 2025 GMT`. + +### Create FerretDBOpsRequest + +Now we are going to rotate it using a FerretDBOpsRequest. Below is the YAML of the ops request that we are going to create, + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: FerretDBOpsRequest +metadata: + name: frops-rotate + namespace: demo +spec: + type: ReconfigureTLS + databaseRef: + name: ferretdb + tls: + rotateCertificates: true +``` + +Here, + +- `spec.databaseRef.name` specifies that we are performing reconfigure TLS operation on `ferretdb`. +- `spec.type` specifies that we are performing `ReconfigureTLS` on our ferretdb. +- `spec.tls.rotateCertificates` specifies that we want to rotate the certificate of this ferretdb. + +Let's create the `FerretDBOpsRequest` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/ferretdb/reconfigure-tls/frops-rotate.yaml +ferretdbopsrequest.ops.kubedb.com/frops-rotate created +``` + +#### Verify Certificate Rotated Successfully + +Let's wait for `FerretDBOpsRequest` to be `Successful`. Run the following command to watch the `FerretDBOpsRequest` CRO, + +```bash +$ watch kubectl get ferretdbopsrequest -n demo +Every 2.0s: kubectl get ferretdbopsrequest -n demo +NAME TYPE STATUS AGE +frops-rotate ReconfigureTLS Successful 113s +``` + +We can see from the above output that the `FerretDBOpsRequest` has succeeded. 
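A scriptable alternative to reading `-enddate` by eye is `openssl x509 -checkend`, which exits non-zero when a certificate expires within the given number of seconds. The sketch below is self-contained: it generates a throwaway certificate under `/tmp` (an illustrative path) so it can run anywhere; in a real check you would point the final command at the `client.crt` or `ca.crt` files saved earlier.

```shell
# Generate a throwaway 365-day self-signed certificate for illustration.
# Against a real cluster, skip this step and use the cert files saved earlier.
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
  -keyout /tmp/demo.key -out /tmp/demo.crt -subj "/CN=demo" 2>/dev/null

# -checkend 86400 asks: does this certificate expire within the next 24 hours?
# Exit status 0 (and the message below) means it does not.
result=$(openssl x509 -in /tmp/demo.crt -checkend 86400)
echo "$result"
```

This is handy in automation: a rotation job can assert the check's exit status instead of parsing dates.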
If we describe the `FerretDBOpsRequest` we will get an overview of the steps that were followed. + +```bash +$ kubectl describe ferretdbopsrequest -n demo frops-rotate +Name: frops-rotate +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: FerretDBOpsRequest +Metadata: + Creation Timestamp: 2024-10-17T11:37:29Z + Generation: 1 + Resource Version: 161772 + UID: 6d9acf23-2701-40f9-9187-da221f3e4158 +Spec: + Apply: IfReady + Database Ref: + Name: ferretdb + Tls: + Rotate Certificates: true + Type: ReconfigureTLS +Status: + Conditions: + Last Transition Time: 2024-10-17T11:37:29Z + Message: FerretDB ops-request has started to reconfigure tls for FerretDB nodes + Observed Generation: 1 + Reason: ReconfigureTLS + Status: True + Type: ReconfigureTLS + Last Transition Time: 2024-10-17T11:37:32Z + Message: Successfully paused database + Observed Generation: 1 + Reason: DatabasePauseSucceeded + Status: True + Type: DatabasePauseSucceeded + Last Transition Time: 2024-10-17T11:37:38Z + Message: get certificate; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: GetCertificate + Last Transition Time: 2024-10-17T11:37:38Z + Message: ready condition; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: ReadyCondition + Last Transition Time: 2024-10-17T11:37:38Z + Message: issuing condition; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: IssuingCondition + Last Transition Time: 2024-10-17T11:37:38Z + Message: Successfully synced all certificates + Observed Generation: 1 + Reason: CertificateSynced + Status: True + Type: CertificateSynced + Last Transition Time: 2024-10-17T11:37:43Z + Message: successfully reconciled the FerretDB with tls configuration + Observed Generation: 1 + Reason: UpdatePetSets + Status: True + Type: UpdatePetSets + Last Transition Time: 2024-10-17T11:37:48Z + Message: get pod; ConditionStatus:True; PodName:ferretdb-0 + Observed Generation: 1 + Status: True + Type: 
GetPod--ferretdb-0 + Last Transition Time: 2024-10-17T11:37:48Z + Message: evict pod; ConditionStatus:True; PodName:ferretdb-0 + Observed Generation: 1 + Status: True + Type: EvictPod--ferretdb-0 + Last Transition Time: 2024-10-17T11:37:53Z + Message: check pod running; ConditionStatus:True; PodName:ferretdb-0 + Observed Generation: 1 + Status: True + Type: CheckPodRunning--ferretdb-0 + Last Transition Time: 2024-10-17T11:37:58Z + Message: get pod; ConditionStatus:True; PodName:ferretdb-1 + Observed Generation: 1 + Status: True + Type: GetPod--ferretdb-1 + Last Transition Time: 2024-10-17T11:37:58Z + Message: evict pod; ConditionStatus:True; PodName:ferretdb-1 + Observed Generation: 1 + Status: True + Type: EvictPod--ferretdb-1 + Last Transition Time: 2024-10-17T11:38:03Z + Message: check pod running; ConditionStatus:True; PodName:ferretdb-1 + Observed Generation: 1 + Status: True + Type: CheckPodRunning--ferretdb-1 + Last Transition Time: 2024-10-17T11:38:08Z + Message: Successfully restarted all nodes + Observed Generation: 1 + Reason: RestartNodes + Status: True + Type: RestartNodes + Last Transition Time: 2024-10-17T11:38:08Z + Message: Successfully completed the ReconfigureTLS for FerretDB + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Starting 55s KubeDB Ops-manager Operator Start processing for FerretDBOpsRequest: demo/frops-rotate + Normal Starting 55s KubeDB Ops-manager Operator Pausing FerretDB database: demo/ferretdb + Normal Successful 55s KubeDB Ops-manager Operator Successfully paused FerretDB database: demo/ferretdb for FerretDBOpsRequest: frops-rotate + Warning get certificate; ConditionStatus:True 46s KubeDB Ops-manager Operator get certificate; ConditionStatus:True + Warning ready condition; ConditionStatus:True 46s KubeDB Ops-manager Operator ready condition; ConditionStatus:True + Warning 
issuing condition; ConditionStatus:True 46s KubeDB Ops-manager Operator issuing condition; ConditionStatus:True + Warning get certificate; ConditionStatus:True 46s KubeDB Ops-manager Operator get certificate; ConditionStatus:True + Warning ready condition; ConditionStatus:True 46s KubeDB Ops-manager Operator ready condition; ConditionStatus:True + Warning issuing condition; ConditionStatus:True 46s KubeDB Ops-manager Operator issuing condition; ConditionStatus:True + Normal CertificateSynced 46s KubeDB Ops-manager Operator Successfully synced all certificates + Normal UpdatePetSets 41s KubeDB Ops-manager Operator successfully reconciled the FerretDB with tls configuration + Warning get pod; ConditionStatus:True; PodName:ferretdb-0 36s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:ferretdb-0 + Warning evict pod; ConditionStatus:True; PodName:ferretdb-0 36s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:ferretdb-0 + Warning check pod running; ConditionStatus:True; PodName:ferretdb-0 31s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:ferretdb-0 + Warning get pod; ConditionStatus:True; PodName:ferretdb-1 26s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:ferretdb-1 + Warning evict pod; ConditionStatus:True; PodName:ferretdb-1 26s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:ferretdb-1 + Warning check pod running; ConditionStatus:True; PodName:ferretdb-1 21s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:ferretdb-1 + Normal RestartNodes 16s KubeDB Ops-manager Operator Successfully restarted all nodes + Normal Starting 16s KubeDB Ops-manager Operator Resuming FerretDB database: demo/ferretdb + Normal Successful 16s KubeDB Ops-manager Operator Successfully resumed FerretDB database: demo/ferretdb for FerretDBOpsRequest: frops-rotate +``` + +Now, let's check the expiration date of the certificate. 
+ +```bash +$ kubectl exec -it -n demo ferretdb-0 -- bash +ferretdb-0:/$ openssl x509 -in /opt/ferretdb-II/tls/ca.pem -inform PEM -enddate -nameopt RFC2253 -noout +notAfter=Oct 27 07:10:20 2024 GMT +``` + +As we can see from the above output, the certificate's expiration date has changed, so it has been rotated successfully. + +## Change Issuer/ClusterIssuer + +Now, we are going to change the issuer of this database. + +- Let's create a new CA certificate and key using a different subject `CN=ca-updated,O=kubedb-updated`. + +```bash +$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ./ca.key -out ./ca.crt -subj "/CN=ca-updated/O=kubedb-updated" +Generating a RSA private key +..............................................................+++++ +......................................................................................+++++ +writing new private key to './ca.key' +----- +``` + +- Now we are going to create a new ca-secret using the certificate files that we have just generated. + +```bash +$ kubectl create secret tls ferretdb-new-ca \ + --cert=ca.crt \ + --key=ca.key \ + --namespace=demo +secret/ferretdb-new-ca created +``` + +Now, let's create a new `Issuer` using the `ferretdb-new-ca` secret that we have just created. The `YAML` file looks like this: + +```yaml +apiVersion: cert-manager.io/v1 +kind: Issuer +metadata: + name: fr-new-issuer + namespace: demo +spec: + ca: + secretName: ferretdb-new-ca +``` + +Let's apply the `YAML` file: + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/ferretdb/reconfigure-tls/new-issuer.yaml +issuer.cert-manager.io/fr-new-issuer created +``` + +### Create FerretDBOpsRequest + +In order to use the new issuer to issue new certificates, we have to create a `FerretDBOpsRequest` CRO with the newly created issuer. 
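Before referencing the new issuer from an ops request, it can be worth confirming that the new CA actually carries the subject you intended. The sketch below is self-contained: it regenerates a CA locally with the same subject used above, under illustrative `/tmp` paths; against the cluster you would run only the second command, on the `ca.crt` you just created.

```shell
# Recreate a CA with the subject from this guide (illustration only;
# in practice you already have ca.crt from the step above).
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
  -keyout /tmp/new-ca.key -out /tmp/new-ca.crt \
  -subj "/CN=ca-updated/O=kubedb-updated" 2>/dev/null

# Print the subject in RFC2253 form, matching the checks used later in this guide.
subject=$(openssl x509 -in /tmp/new-ca.crt -subject -nameopt RFC2253 -noout)
echo "$subject"
```

If the printed subject does not match what you expect, fix the CA and the `ferretdb-new-ca` secret before creating the ops request, since cert-manager will issue all new certificates from it.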
Below is the YAML of the `FerretDBOpsRequest` CRO that we are going to create, + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: FerretDBOpsRequest +metadata: + name: ppops-change-issuer + namespace: demo +spec: + type: ReconfigureTLS + databaseRef: + name: ferretdb + tls: + issuerRef: + name: fr-new-issuer + kind: Issuer + apiGroup: "cert-manager.io" +``` + +Here, + +- `spec.databaseRef.name` specifies that we are performing reconfigure TLS operation on `ferretdb`. +- `spec.type` specifies that we are performing `ReconfigureTLS` on our ferretdb. +- `spec.tls.issuerRef` specifies the issuer name, kind and API group. + +Let's create the `FerretDBOpsRequest` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/ferretdb/reconfigure-tls/ppops-change-issuer.yaml +ferretdbopsrequest.ops.kubedb.com/ppops-change-issuer created +``` + +#### Verify Issuer Changed Successfully + +Let's wait for `FerretDBOpsRequest` to be `Successful`. Run the following command to watch the `FerretDBOpsRequest` CRO, + +```bash +$ watch kubectl get ferretdbopsrequest -n demo +Every 2.0s: kubectl get ferretdbopsrequest -n demo +NAME TYPE STATUS AGE +ppops-change-issuer ReconfigureTLS Successful 87s +``` + +We can see from the above output that the `FerretDBOpsRequest` has succeeded. If we describe the `FerretDBOpsRequest` we will get an overview of the steps that were followed. 
+ +```bash +$ kubectl describe ferretdbopsrequest -n demo ppops-change-issuer +Name: ppops-change-issuer +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: FerretDBOpsRequest +Metadata: + Creation Timestamp: 2024-07-29T07:37:09Z + Generation: 1 + Resource Version: 12367 + UID: f48452ed-7264-4e99-80f1-58d7e826d9a9 +Spec: + Apply: IfReady + Database Ref: + Name: ferretdb + Tls: + Issuer Ref: + API Group: cert-manager.io + Kind: Issuer + Name: fr-new-issuer + Type: ReconfigureTLS +Status: + Conditions: + Last Transition Time: 2024-07-29T07:37:09Z + Message: FerretDB ops-request has started to reconfigure tls for RabbitMQ nodes + Observed Generation: 1 + Reason: ReconfigureTLS + Status: True + Type: ReconfigureTLS + Last Transition Time: 2024-07-29T07:37:12Z + Message: Successfully paused database + Observed Generation: 1 + Reason: DatabasePauseSucceeded + Status: True + Type: DatabasePauseSucceeded + Last Transition Time: 2024-07-29T07:37:24Z + Message: Successfully synced all certificates + Observed Generation: 1 + Reason: CertificateSynced + Status: True + Type: CertificateSynced + Last Transition Time: 2024-07-29T07:37:18Z + Message: get certificate; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: GetCertificate + Last Transition Time: 2024-07-29T07:37:18Z + Message: check ready condition; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: CheckReadyCondition + Last Transition Time: 2024-07-29T07:37:18Z + Message: check issuing condition; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: CheckIssuingCondition + Last Transition Time: 2024-07-29T07:37:30Z + Message: successfully reconciled the FerretDB with TLS + Observed Generation: 1 + Reason: UpdatePetSets + Status: True + Type: UpdatePetSets + Last Transition Time: 2024-07-29T07:38:15Z + Message: Successfully Restarted FerretDB pods + Observed Generation: 1 + Reason: RestartPods + Status: True + Type: RestartPods + Last 
Transition Time: 2024-07-29T07:37:35Z + Message: get pod; ConditionStatus:True; PodName:ferretdb-0 + Observed Generation: 1 + Status: True + Type: GetPod--ferretdb-0 + Last Transition Time: 2024-07-29T07:37:35Z + Message: evict pod; ConditionStatus:True; PodName:ferretdb-0 + Observed Generation: 1 + Status: True + Type: EvictPod--ferretdb-0 + Last Transition Time: 2024-07-29T07:38:10Z + Message: check pod running; ConditionStatus:True; PodName:ferretdb-0 + Observed Generation: 1 + Status: True + Type: CheckPodRunning--ferretdb-0 + Last Transition Time: 2024-07-29T07:38:15Z + Message: Successfully updated FerretDB + Observed Generation: 1 + Reason: UpdateDatabase + Status: True + Type: UpdateDatabase + Last Transition Time: 2024-07-29T07:38:16Z + Message: Successfully updated FerretDB TLS + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Starting 3m39s KubeDB Ops-manager Operator Start processing for FerretDBOpsRequest: demo/ppops-change-issuer + Normal Starting 3m39s KubeDB Ops-manager Operator Pausing FerretDB databse: demo/ferretdb + Normal Successful 3m39s KubeDB Ops-manager Operator Successfully paused FerretDB database: demo/ferretdb for FerretDBOpsRequest: ppops-change-issuer + Warning get certificate; ConditionStatus:True 3m30s KubeDB Ops-manager Operator get certificate; ConditionStatus:True + Warning check ready condition; ConditionStatus:True 3m30s KubeDB Ops-manager Operator check ready condition; ConditionStatus:True + Warning check issuing condition; ConditionStatus:True 3m30s KubeDB Ops-manager Operator check issuing condition; ConditionStatus:True + Warning get certificate; ConditionStatus:True 3m30s KubeDB Ops-manager Operator get certificate; ConditionStatus:True + Warning check ready condition; ConditionStatus:True 3m30s KubeDB Ops-manager Operator check ready condition; ConditionStatus:True + 
Warning check issuing condition; ConditionStatus:True 3m30s KubeDB Ops-manager Operator check issuing condition; ConditionStatus:True + Warning get certificate; ConditionStatus:True 3m30s KubeDB Ops-manager Operator get certificate; ConditionStatus:True + Warning check ready condition; ConditionStatus:True 3m30s KubeDB Ops-manager Operator check ready condition; ConditionStatus:True + Warning check issuing condition; ConditionStatus:True 3m30s KubeDB Ops-manager Operator check issuing condition; ConditionStatus:True + Normal CertificateSynced 3m30s KubeDB Ops-manager Operator Successfully synced all certificates + Warning get certificate; ConditionStatus:True 3m25s KubeDB Ops-manager Operator get certificate; ConditionStatus:True + Warning check ready condition; ConditionStatus:True 3m25s KubeDB Ops-manager Operator check ready condition; ConditionStatus:True + Warning check issuing condition; ConditionStatus:True 3m24s KubeDB Ops-manager Operator check issuing condition; ConditionStatus:True + Warning get certificate; ConditionStatus:True 3m24s KubeDB Ops-manager Operator get certificate; ConditionStatus:True + Warning check ready condition; ConditionStatus:True 3m24s KubeDB Ops-manager Operator check ready condition; ConditionStatus:True + Warning check issuing condition; ConditionStatus:True 3m24s KubeDB Ops-manager Operator check issuing condition; ConditionStatus:True + Warning get certificate; ConditionStatus:True 3m24s KubeDB Ops-manager Operator get certificate; ConditionStatus:True + Warning check ready condition; ConditionStatus:True 3m24s KubeDB Ops-manager Operator check ready condition; ConditionStatus:True + Warning check issuing condition; ConditionStatus:True 3m24s KubeDB Ops-manager Operator check issuing condition; ConditionStatus:True + Normal CertificateSynced 3m24s KubeDB Ops-manager Operator Successfully synced all certificates + Normal UpdatePetSets 3m18s KubeDB Ops-manager Operator successfully reconciled the FerretDB with TLS + Warning get 
pod; ConditionStatus:True; PodName:ferretdb-0 3m13s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:ferretdb-0 + Warning evict pod; ConditionStatus:True; PodName:ferretdb-0 3m13s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:ferretdb-0 + Warning check pod running; ConditionStatus:False; PodName:ferretdb-0 3m8s KubeDB Ops-manager Operator check pod running; ConditionStatus:False; PodName:ferretdb-0 + Warning check pod running; ConditionStatus:True; PodName:ferretdb-0 2m38s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:ferretdb-0 + Normal RestartPods 2m33s KubeDB Ops-manager Operator Successfully Restarted FerretDB pods + Normal Starting 2m32s KubeDB Ops-manager Operator Resuming FerretDB database: demo/ferretdb + Normal Successful 2m32s KubeDB Ops-manager Operator Successfully resumed FerretDB database: demo/ferretdb for FerretDBOpsRequest: ppops-change-issuer +``` + +Now, let's exec into ferretdb and find out the CA subject to see if it matches the one we have provided. + +```bash +$ kubectl exec -it -n demo ferretdb-0 -- bash +ferretdb-0:/$ openssl x509 -in /opt/ferretdb-II/tls/ca.pem -inform PEM -subject -nameopt RFC2253 -noout +subject=O=kubedb-updated,CN=ca-updated +``` + +We can see from the above output that the subject name matches the subject name of the new CA certificate that we have created. So, the issuer has been changed successfully. + +## Remove TLS from the ferretdb + +Now, we are going to remove TLS from this ferretdb using a FerretDBOpsRequest. + +### Create FerretDBOpsRequest + +Below is the YAML of the `FerretDBOpsRequest` CRO that we are going to create, + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: FerretDBOpsRequest +metadata: + name: ppops-remove + namespace: demo +spec: + type: ReconfigureTLS + databaseRef: + name: ferretdb + tls: + remove: true +``` + +Here, + +- `spec.databaseRef.name` specifies that we are performing reconfigure TLS operation on `ferretdb`. 
+- `spec.type` specifies that we are performing `ReconfigureTLS` on our ferretdb. +- `spec.tls.remove` specifies that we want to remove TLS from this ferretdb. + +Let's create the `FerretDBOpsRequest` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/ferretdb/reconfigure-tls/ppops-remove.yaml +ferretdbopsrequest.ops.kubedb.com/ppops-remove created +``` + +#### Verify TLS Removed Successfully + +Let's wait for `FerretDBOpsRequest` to be `Successful`. Run the following command to watch the `FerretDBOpsRequest` CRO, + +```bash +$ watch kubectl get ferretdbopsrequest -n demo +Every 2.0s: kubectl get ferretdbopsrequest -n demo +NAME TYPE STATUS AGE +ppops-remove ReconfigureTLS Successful 65s +``` + +We can see from the above output that the `FerretDBOpsRequest` has succeeded. If we describe the `FerretDBOpsRequest` we will get an overview of the steps that were followed. + +```bash +$ kubectl describe ferretdbopsrequest -n demo ppops-remove +Name: ppops-remove +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: FerretDBOpsRequest +Metadata: + Creation Timestamp: 2024-07-29T08:38:35Z + Generation: 1 + Resource Version: 16378 + UID: f848e04f-0fd1-48ce-813d-67dbdc3e4a55 +Spec: + Apply: IfReady + Database Ref: + Name: ferretdb + Tls: + Remove: true + Type: ReconfigureTLS +Status: + Conditions: + Last Transition Time: 2024-07-29T08:38:37Z + Message: FerretDB ops-request has started to reconfigure tls for RabbitMQ nodes + Observed Generation: 1 + Reason: ReconfigureTLS + Status: True + Type: ReconfigureTLS + Last Transition Time: 2024-07-29T08:38:41Z + Message: Successfully paused database + Observed Generation: 1 + Reason: DatabasePauseSucceeded + Status: True + Type: DatabasePauseSucceeded + Last Transition Time: 2024-07-29T08:38:47Z + Message: successfully reconciled the FerretDB with TLS + Observed Generation: 1 + Reason: UpdatePetSets + Status: True + Type: 
UpdatePetSets + Last Transition Time: 2024-07-29T08:39:32Z + Message: Successfully Restarted FerretDB pods + Observed Generation: 1 + Reason: RestartPods + Status: True + Type: RestartPods + Last Transition Time: 2024-07-29T08:38:52Z + Message: get pod; ConditionStatus:True; PodName:ferretdb-0 + Observed Generation: 1 + Status: True + Type: GetPod--ferretdb-0 + Last Transition Time: 2024-07-29T08:38:52Z + Message: evict pod; ConditionStatus:True; PodName:ferretdb-0 + Observed Generation: 1 + Status: True + Type: EvictPod--ferretdb-0 + Last Transition Time: 2024-07-29T08:39:27Z + Message: check pod running; ConditionStatus:True; PodName:ferretdb-0 + Observed Generation: 1 + Status: True + Type: CheckPodRunning--ferretdb-0 + Last Transition Time: 2024-07-29T08:39:32Z + Message: Successfully updated FerretDB + Observed Generation: 1 + Reason: UpdateDatabase + Status: True + Type: UpdateDatabase + Last Transition Time: 2024-07-29T08:39:33Z + Message: Successfully updated FerretDB TLS + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Starting 84s KubeDB Ops-manager Operator Start processing for FerretDBOpsRequest: demo/ppops-remove + Normal Starting 84s KubeDB Ops-manager Operator Pausing FerretDB databse: demo/ferretdb + Normal Successful 83s KubeDB Ops-manager Operator Successfully paused FerretDB database: demo/ferretdb for FerretDBOpsRequest: ppops-remove + Normal UpdatePetSets 74s KubeDB Ops-manager Operator successfully reconciled the FerretDB with TLS + Warning get pod; ConditionStatus:True; PodName:ferretdb-0 69s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:ferretdb-0 + Warning evict pod; ConditionStatus:True; PodName:ferretdb-0 69s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:ferretdb-0 + Warning check pod running; ConditionStatus:False; PodName:ferretdb-0 64s 
KubeDB Ops-manager Operator check pod running; ConditionStatus:False; PodName:ferretdb-0 + Warning check pod running; ConditionStatus:True; PodName:ferretdb-0 34s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:ferretdb-0 + Normal RestartPods 29s KubeDB Ops-manager Operator Successfully Restarted FerretDB pods + Normal Starting 29s KubeDB Ops-manager Operator Resuming FerretDB database: demo/ferretdb + Normal Successful 28s KubeDB Ops-manager Operator Successfully resumed FerretDB database: demo/ferretdb for FerretDBOpsRequest: ppops-remove +``` + +Now, let's exec into ferretdb and check whether TLS is disabled. + +```bash +$ kubectl exec -it -n demo ferretdb-0 -- bash +ferretdb-0:/$ cat opt/ferretdb-II/etc/ferretdb.conf +backend_hostname0 = 'ha-postgres.demo.svc' +backend_port0 = 5432 +backend_weight0 = 1 +backend_flag0 = 'ALWAYS_PRIMARY|DISALLOW_TO_FAILOVER' +backend_hostname1 = 'ha-postgres-standby.demo.svc' +backend_port1 = 5432 +backend_weight1 = 1 +backend_flag1 = 'DISALLOW_TO_FAILOVER' +enable_pool_hba = on +listen_addresses = * +port = 9999 +socket_dir = '/var/run/ferretdb' +pcp_listen_addresses = * +pcp_port = 9595 +pcp_socket_dir = '/var/run/ferretdb' +log_per_node_statement = on +sr_check_period = 0 +health_check_period = 0 +backend_clustering_mode = 'streaming_replication' +num_init_children = 5 +max_pool = 15 +child_life_time = 300 +child_max_connections = 0 +connection_life_time = 0 +client_idle_limit = 0 +connection_cache = on +load_balance_mode = on +ssl = 'off' +failover_on_backend_error = 'off' +log_min_messages = 'warning' +statement_level_load_balance = 'off' +memory_cache_enabled = 'off' +memqcache_oiddir = '/tmp/oiddir/' +allow_clear_text_frontend_auth = 'false' +failover_on_backend_error = 'off' +``` + +We can see from the above output that `ssl = 'off'`, so TLS has been disabled successfully for this ferretdb. 
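This verification can also be scripted instead of reading the whole config by eye. A minimal, self-contained sketch: it writes a tiny stand-in config to an illustrative `/tmp` path; on the pod you would run the same `awk` line against the real file shown above.

```shell
# Stand-in for the config file on the pod (illustration only).
cat > /tmp/ferretdb.conf <<'EOF'
listen_addresses = *
port = 9999
ssl = 'off'
load_balance_mode = on
EOF

# Pull out just the ssl setting: split on single quotes and print the value.
ssl_mode=$(awk -F"'" '/^ssl[[:space:]]*=/ {print $2}' /tmp/ferretdb.conf)
echo "ssl is ${ssl_mode}"
```

A check like this can gate a CI step or a post-ops-request smoke test, failing fast if TLS is not in the expected state.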
+ +## Cleaning up + +To clean up the Kubernetes resources created by this tutorial, run: + +```bash +kubectl delete ferretdb -n demo ferretdb +kubectl delete issuer -n demo ferretdb-ca-issuer fr-new-issuer +kubectl delete ferretdbopsrequest -n demo frops-add-tls frops-rotate ppops-change-issuer ppops-remove +kubectl delete pg -n demo ha-postgres +kubectl delete ns demo +``` + +## Next Steps + +- Detail concepts of [FerretDB object](/docs/guides/ferretdb/concepts/ferretdb.md). +- Monitor your FerretDB database with KubeDB using [out-of-the-box Prometheus operator](/docs/guides/ferretdb/monitoring/using-prometheus-operator.md). +- Monitor your FerretDB database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/guides/ferretdb/monitoring/using-builtin-prometheus.md). +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md). diff --git a/docs/images/ferretdb/fr-coreos-prom-target.png b/docs/images/ferretdb/fr-coreos-prom-target.png new file mode 100644 index 0000000000..86e6f18c39 Binary files /dev/null and b/docs/images/ferretdb/fr-coreos-prom-target.png differ diff --git a/docs/images/ferretdb/fr-reconfigure-tls.svg b/docs/images/ferretdb/fr-reconfigure-tls.svg new file mode 100644 index 0000000000..c7349f3849 --- /dev/null +++ b/docs/images/ferretdb/fr-reconfigure-tls.svg @@ -0,0 +1,4 @@ + + + +
[fr-reconfigure-tls.svg: flow diagram. The user creates a FerretDB CR (1.Create FerretDB), which the community operator watches (2.Watch) and for which it creates a PetSet (3.Create). The user then creates a FerretDB OpsRequest (4.Create), which the enterprise operator watches (5.Watch); the operator pauses the database (6.Pause), updates the PetSet and performs checks (7.Update & Perform Checks), updates FerretDB (8.Update FerretDB), and resumes it (9.Resume), yielding an updated/new PetSet that the updated FerretDB refers to.]
\ No newline at end of file