diff --git a/docs/guides/rabbitmq/README.md b/docs/guides/rabbitmq/README.md index 3455bc0412..722e6d6a46 100644 --- a/docs/guides/rabbitmq/README.md +++ b/docs/guides/rabbitmq/README.md @@ -16,34 +16,36 @@ aliases: # Overview -RabbitMQ is a robust and flexible open-source message broker software that facilitates communication between distributed applications. It implements the Advanced Message Queuing Protocol (AMQP) standard, ensuring reliable messaging across various platforms and languages. With its support for multiple messaging protocols and delivery patterns, RabbitMQ enables seamless integration and scalability for modern microservices architectures. It provides features such as message persistence, clustering, and high availability, making it a preferred choice for handling asynchronous communication and decoupling components in enterprise systems. +RabbitMQ is a robust and flexible open-source message broker software that facilitates communication between distributed applications. It implements the Advanced Message Queuing Protocol (AMQP) standard, ensuring reliable messaging across various platforms and languages. With its support for multiple messaging protocols (MQTT, STOMP etc.) and delivery patterns (Fanout, Direct, Exchange etc.), RabbitMQ enables seamless integration and scalability for modern microservices architectures. It provides features such as message persistence, clustering, and high availability, making it a preferred choice for handling asynchronous communication and decoupling components in enterprise systems. ## Supported RabbitMQ Features -| Features | Availability | -|----------------------------------------------------|:------------:| -| Clustering | ✓ | -| Authentication & Authorization | ✓ | -| Custom Configuration | ✓ | -| Monitoring using Prometheus and Grafana | ✓ | -| Builtin Prometheus Discovery | ✓ | -| Using Prometheus operator | ✓ | -| Externally manageable Auth Secret | ✓ | -| Reconfigurable Health Checker | ✓ | -| Persistent volume | ✓ | -| Dashboard ( Management UI ) | ✓ | -| Grafana Dashboards (Alerts and Monitoring) | ✓ | -| Custom Plugin configurations | ✓ | -| Pre-Enabled utility plugins ( Shovel, Federation ) | ✓ | -| Automatic Vertical Scaling | ✓ | -| Automatic Volume Expansion | ✓ | -| Autoscaling ( Compute resources & Storage ) | ✓ | - +| Features | Availability | +|---------------------------------------------------------------|:------------:| +| Clustering | ✓ | +| Custom Configuration | ✓ | +| Custom Plugin configurations | ✓ | +| Monitoring using Prometheus and Grafana | ✓ | +| Builtin Prometheus Discovery | ✓ | +| Operator managed Prometheus Discovery | ✓ | +| Authentication & Authorization (TLS) | ✓ | +| Externally manageable Auth Secret | ✓ | +| Persistent volume | ✓ | +| Grafana Dashboards (Alerts and Monitoring) | ✓ | +| Pre-Enabled Dashboard ( Management UI ) | ✓ | +| Pre-Enabled utility plugins ( Shovel, Federation ) | ✓ | +| Pre-Enabled Protocols with web dispatch ( AMQP, MQTT, STOMP ) | ✓ | +| Automated Vertical & Horizontal Scaling | ✓ | +| Automated Volume Expansion | ✓ | +| Autoscaling ( Compute resources & Storage ) | ✓ | +| Reconfigurable Health Checker | ✓ | +| Reconfigurable TLS Certificates (Add, Remove, Rotate, Update) | ✓ | ## Supported RabbitMQ Versions KubeDB supports the following RabbitMQ Versions. 
- `3.12.12` +- `3.13.2` ## Life Cycle of a RabbitMQ Object diff --git a/docs/guides/rabbitmq/_index.md b/docs/guides/rabbitmq/_index.md index a17b7cdc9f..7e30282d7d 100644 --- a/docs/guides/rabbitmq/_index.md +++ b/docs/guides/rabbitmq/_index.md @@ -2,8 +2,7 @@ title: RabbitMQ menu: docs_{{ .version }}: - identifier: guides-rabbitmq - name: RabbitMQ + identifier: rm-guides parent: guides weight: 10 menu_name: docs_{{ .version }} diff --git a/docs/guides/rabbitmq/autoscaler/_index.md b/docs/guides/rabbitmq/autoscaler/_index.md new file mode 100644 index 0000000000..bc2c9a589e --- /dev/null +++ b/docs/guides/rabbitmq/autoscaler/_index.md @@ -0,0 +1,10 @@ +--- +title: Autoscaling +menu: + docs_{{ .version }}: + identifier: mg-auto-scaling + name: Autoscaling + parent: mg-RabbitMQ-guides + weight: 46 +menu_name: docs_{{ .version }} +--- diff --git a/docs/guides/rabbitmq/autoscaler/compute/_index.md b/docs/guides/rabbitmq/autoscaler/compute/_index.md new file mode 100644 index 0000000000..31a2328359 --- /dev/null +++ b/docs/guides/rabbitmq/autoscaler/compute/_index.md @@ -0,0 +1,10 @@ +--- +title: Compute Autoscaling +menu: + docs_{{ .version }}: + identifier: mg-compute-auto-scaling + name: Compute Autoscaling + parent: mg-auto-scaling + weight: 46 +menu_name: docs_{{ .version }} +--- diff --git a/docs/guides/rabbitmq/autoscaler/compute/overview.md b/docs/guides/rabbitmq/autoscaler/compute/overview.md new file mode 100644 index 0000000000..5c47bce796 --- /dev/null +++ b/docs/guides/rabbitmq/autoscaler/compute/overview.md @@ -0,0 +1,55 @@ +--- +title: RabbitMQ Compute Autoscaling Overview +menu: + docs_{{ .version }}: + identifier: mg-auto-scaling-overview + name: Overview + parent: mg-compute-auto-scaling + weight: 10 +menu_name: docs_{{ .version }} +section_menu_id: guides +--- + +> New to KubeDB? Please start [here](/docs/README.md). + +# RabbitMQ Compute Resource Autoscaling + +This guide will give an overview on how KubeDB Autoscaler operator autoscales the database compute resources i.e. cpu and memory using `RabbitMQautoscaler` crd. + +## Before You Begin + +- You should be familiar with the following `KubeDB` concepts: + - [RabbitMQ](/docs/guides/RabbitMQ/concepts/RabbitMQ.md) + - [RabbitMQAutoscaler](/docs/guides/RabbitMQ/concepts/autoscaler.md) + - [RabbitMQOpsRequest](/docs/guides/RabbitMQ/concepts/opsrequest.md) + +## How Compute Autoscaling Works + +The following diagram shows how KubeDB Autoscaler operator autoscales the resources of `RabbitMQ` database components. Open the image in a new tab to see the enlarged version. + +
+Fig: Compute Auto Scaling process of RabbitMQ
+ +The Auto Scaling process consists of the following steps: + +1. At first, a user creates a `RabbitMQ` Custom Resource Object (CRO). + +2. `KubeDB` Provisioner operator watches the `RabbitMQ` CRO. + +3. When the operator finds a `RabbitMQ` CRO, it creates required number of `StatefulSets` and related necessary stuff like secrets, services, etc. + +4. Then, in order to set up autoscaling of the various components (ie. ReplicaSet, Shard, ConfigServer, Mongos, etc.) of the `RabbitMQ` database the user creates a `RabbitMQAutoscaler` CRO with desired configuration. + +5. `KubeDB` Autoscaler operator watches the `RabbitMQAutoscaler` CRO. + +6. `KubeDB` Autoscaler operator generates recommendation using the modified version of kubernetes [official recommender](https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler/pkg/recommender) for different components of the database, as specified in the `RabbitMQAutoscaler` CRO. + +7. If the generated recommendation doesn't match the current resources of the database, then `KubeDB` Autoscaler operator creates a `RabbitMQOpsRequest` CRO to scale the database to match the recommendation generated. + +8. `KubeDB` Ops-manager operator watches the `RabbitMQOpsRequest` CRO. + +9. Then the `KubeDB` Ops-manager operator will scale the database component vertically as specified on the `RabbitMQOpsRequest` CRO. + +In the next docs, we are going to show a step by step guide on Autoscaling of various RabbitMQ database components using `RabbitMQAutoscaler` CRD. diff --git a/docs/guides/rabbitmq/autoscaler/compute/replicaset.md b/docs/guides/rabbitmq/autoscaler/compute/replicaset.md new file mode 100644 index 0000000000..4610c99039 --- /dev/null +++ b/docs/guides/rabbitmq/autoscaler/compute/replicaset.md @@ -0,0 +1,533 @@ +--- +title: RabbitMQ Replicaset Autoscaling +menu: + docs_{{ .version }}: + identifier: mg-auto-scaling-replicaset + name: Replicaset + parent: mg-compute-auto-scaling + weight: 20 +menu_name: docs_{{ .version }} +section_menu_id: guides +--- + +> New to KubeDB? Please start [here](/docs/README.md). + +# Autoscaling the Compute Resource of a RabbitMQ Replicaset Database + +This guide will show you how to use `KubeDB` to autoscale compute resources i.e. cpu and memory of a RabbitMQ replicaset database. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. + +- Install `KubeDB` Provisioner, Ops-manager and Autoscaler operator in your cluster following the steps [here](/docs/setup/README.md). + +- Install `Metrics Server` from [here](https://github.com/kubernetes-sigs/metrics-server#installation) + +- You should be familiar with the following `KubeDB` concepts: + - [RabbitMQ](/docs/guides/RabbitMQ/concepts/RabbitMQ.md) + - [RabbitMQAutoscaler](/docs/guides/RabbitMQ/concepts/autoscaler.md) + - [RabbitMQOpsRequest](/docs/guides/RabbitMQ/concepts/opsrequest.md) + - [Compute Resource Autoscaling Overview](/docs/guides/RabbitMQ/autoscaler/compute/overview.md) + +To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial. + +```bash +$ kubectl create ns demo +namespace/demo created +``` + +> **Note:** YAML files used in this tutorial are stored in [docs/examples/RabbitMQ](/docs/examples/RabbitMQ) directory of [kubedb/docs](https://github.com/kubedb/docs) repository. 
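+Before moving on, you can confirm that all three operators are up. A quick check, assuming KubeDB was installed into the `kubedb` namespace (the default for the Helm chart):
+
+```bash
+$ kubectl get pods -n kubedb
+```
+
+All listed pods should be in the `Running` state before you proceed.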
+ +## Autoscaling of Replicaset Database + +Here, we are going to deploy a `RabbitMQ` Replicaset using a supported version by `KubeDB` operator. Then we are going to apply `RabbitMQAutoscaler` to set up autoscaling. + +#### Deploy RabbitMQ Replicaset + +In this section, we are going to deploy a RabbitMQ Replicaset database with version `4.4.26`. Then, in the next section we will set up autoscaling for this database using `RabbitMQAutoscaler` CRD. Below is the YAML of the `RabbitMQ` CR that we are going to create, + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: RabbitMQ +metadata: + name: mg-rs + namespace: demo +spec: + version: "4.4.26" + replicaSet: + name: "replicaset" + replicas: 3 + storageType: Durable + storage: + resources: + requests: + storage: 1Gi + podTemplate: + spec: + resources: + requests: + cpu: "200m" + memory: "300Mi" + limits: + cpu: "200m" + memory: "300Mi" + terminationPolicy: WipeOut + +``` + +Let's create the `RabbitMQ` CRO we have shown above, + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/RabbitMQ/autoscaling/compute/mg-rs.yaml +RabbitMQ.kubedb.com/mg-rs created +``` + +Now, wait until `mg-rs` has status `Ready`. i.e, + +```bash +$ kubectl get mg -n demo +NAME VERSION STATUS AGE +mg-rs 4.4.26 Ready 2m53s +``` + +Let's check the Pod containers resources, + +```bash +$ kubectl get pod -n demo mg-rs-0 -o json | jq '.spec.containers[].resources' +{ + "limits": { + "cpu": "200m", + "memory": "300Mi" + }, + "requests": { + "cpu": "200m", + "memory": "300Mi" + } +} +``` + +Let's check the RabbitMQ resources, +```bash +$ kubectl get RabbitMQ -n demo mg-rs -o json | jq '.spec.podTemplate.spec.resources' +{ + "limits": { + "cpu": "200m", + "memory": "300Mi" + }, + "requests": { + "cpu": "200m", + "memory": "300Mi" + } +} +``` + +You can see from the above outputs that the resources are same as the one we have assigned while deploying the RabbitMQ. + +We are now ready to apply the `RabbitMQAutoscaler` CRO to set up autoscaling for this database. + +### Compute Resource Autoscaling + +Here, we are going to set up compute resource autoscaling using a RabbitMQAutoscaler Object. + +#### Create RabbitMQAutoscaler Object + +In order to set up compute resource autoscaling for this replicaset database, we have to create a `RabbitMQAutoscaler` CRO with our desired configuration. Below is the YAML of the `RabbitMQAutoscaler` object that we are going to create, + +```yaml +apiVersion: autoscaling.kubedb.com/v1alpha1 +kind: RabbitMQAutoscaler +metadata: + name: mg-as-rs + namespace: demo +spec: + databaseRef: + name: mg-rs + opsRequestOptions: + timeout: 3m + apply: IfReady + compute: + replicaSet: + trigger: "On" + podLifeTimeThreshold: 5m + resourceDiffPercentage: 20 + minAllowed: + cpu: 400m + memory: 400Mi + maxAllowed: + cpu: 1 + memory: 1Gi + controlledResources: ["cpu", "memory"] + containerControlledValues: "RequestsAndLimits" +``` + +Here, + +- `spec.databaseRef.name` specifies that we are performing compute resource scaling operation on `mg-rs` database. +- `spec.compute.replicaSet.trigger` specifies that compute autoscaling is enabled for this database. +- `spec.compute.replicaSet.podLifeTimeThreshold` specifies the minimum lifetime for at least one of the pod to initiate a vertical scaling. +- `spec.compute.replicaset.resourceDiffPercentage` specifies the minimum resource difference in percentage. The default is 10%. 
+ If the difference between current & recommended resource is less than ResourceDiffPercentage, Autoscaler Operator will ignore the updating. +- `spec.compute.replicaSet.minAllowed` specifies the minimum allowed resources for the database. +- `spec.compute.replicaSet.maxAllowed` specifies the maximum allowed resources for the database. +- `spec.compute.replicaSet.controlledResources` specifies the resources that are controlled by the autoscaler. +- `spec.compute.replicaSet.containerControlledValues` specifies which resource values should be controlled. The default is "RequestsAndLimits". +- `spec.opsRequestOptions` contains the options to pass to the created OpsRequest. It has 3 fields. Know more about them here : [readinessCriteria](/docs/guides/RabbitMQ/concepts/opsrequest.md#specreadinesscriteria), [timeout](/docs/guides/RabbitMQ/concepts/opsrequest.md#spectimeout), [apply](/docs/guides/RabbitMQ/concepts/opsrequest.md#specapply). + +If it was an `InMemory database`, we could also autoscaler the inMemory resources using RabbitMQ compute autoscaler, like below. + +#### Autoscale inMemory database +To autoscale inMemory databases, you need to specify the `spec.compute.replicaSet.inMemoryStorage` section. + +```yaml + ... + inMemoryStorage: + usageThresholdPercentage: 80 + scalingFactorPercentage: 30 + ... +``` +It has two fields inside it. +- `usageThresholdPercentage`. If db uses more than usageThresholdPercentage of the total memory, memoryStorage should be increased. Default usage threshold is 70%. +- `scalingFactorPercentage`. If db uses more than usageThresholdPercentage of the total memory, memoryStorage should be increased by this given scaling percentage. Default scaling percentage is 50%. + +> Note: To inform you, We use `db.serverStatus().inMemory.cache["bytes currently in the cache"]` & `db.serverStatus().inMemory.cache["maximum bytes configured"]` to calculate the used & maximum inMemory storage respectively. 
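+For clarity, here is a sketch of how the `inMemoryStorage` block nests inside the full autoscaler spec. The surrounding fields are the same ones used in the `mg-as-rs` object above; the threshold values here are illustrative:
+
+```yaml
+spec:
+  compute:
+    replicaSet:
+      trigger: "On"
+      # ... minAllowed, maxAllowed and the other compute fields shown above ...
+      inMemoryStorage:
+        usageThresholdPercentage: 80   # scale once more than 80% of the in-memory storage is in use
+        scalingFactorPercentage: 30    # grow the in-memory storage by 30% when that happens
+```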
+ +Let's create the `RabbitMQAutoscaler` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/RabbitMQ/autoscaling/compute/mg-as-rs.yaml +RabbitMQautoscaler.autoscaling.kubedb.com/mg-as-rs created +``` + +#### Verify Autoscaling is set up successfully + +Let's check that the `RabbitMQautoscaler` resource is created successfully, + +```bash +$ kubectl get RabbitMQautoscaler -n demo +NAME AGE +mg-as-rs 102s + +$ kubectl describe RabbitMQautoscaler mg-as-rs -n demo +Name: mg-as-rs +Namespace: demo +Labels: +Annotations: +API Version: autoscaling.kubedb.com/v1alpha1 +Kind: RabbitMQAutoscaler +Metadata: + Creation Timestamp: 2022-10-27T06:56:34Z + Generation: 1 + Managed Fields: + API Version: autoscaling.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:annotations: + .: + f:kubectl.kubernetes.io/last-applied-configuration: + f:spec: + .: + f:compute: + .: + f:replicaSet: + .: + f:containerControlledValues: + f:controlledResources: + f:maxAllowed: + .: + f:cpu: + f:memory: + f:minAllowed: + .: + f:cpu: + f:memory: + f:podLifeTimeThreshold: + f:resourceDiffPercentage: + f:trigger: + f:databaseRef: + f:opsRequestOptions: + .: + f:apply: + f:timeout: + Manager: kubectl-client-side-apply + Operation: Update + Time: 2022-10-27T06:56:34Z + API Version: autoscaling.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:status: + .: + f:checkpoints: + f:conditions: + f:vpas: + Manager: kubedb-autoscaler + Operation: Update + Subresource: status + Time: 2022-10-27T07:01:05Z + Resource Version: 640314 + UID: ab03414a-67a2-4da4-8960-6e67ae56b503 +Spec: + Compute: + Replica Set: + Container Controlled Values: RequestsAndLimits + Controlled Resources: + cpu + memory + Max Allowed: + Cpu: 1 + Memory: 1Gi + Min Allowed: + Cpu: 400m + Memory: 400Mi + Pod Life Time Threshold: 5m0s + Resource Diff Percentage: 20 + Trigger: On + Database Ref: + Name: mg-rs + Ops Request Options: + Apply: IfReady + Timeout: 3m0s +Status: + Checkpoints: + Cpu Histogram: + Bucket Weights: + Index: 2 + Weight: 10000 + Index: 3 + Weight: 5000 + Reference Timestamp: 2022-10-27T00:00:00Z + Total Weight: 0.3673624107285783 + First Sample Start: 2022-10-27T07:00:42Z + Last Sample Start: 2022-10-27T07:00:55Z + Last Update Time: 2022-10-27T07:01:00Z + Memory Histogram: + Reference Timestamp: 2022-10-28T00:00:00Z + Ref: + Container Name: RabbitMQ + Vpa Object Name: mg-rs + Total Samples Count: 3 + Version: v3 + Cpu Histogram: + Bucket Weights: + Index: 0 + Weight: 10000 + Reference Timestamp: 2022-10-27T00:00:00Z + Total Weight: 0.3673624107285783 + First Sample Start: 2022-10-27T07:00:42Z + Last Sample Start: 2022-10-27T07:00:55Z + Last Update Time: 2022-10-27T07:01:00Z + Memory Histogram: + Reference Timestamp: 2022-10-28T00:00:00Z + Ref: + Container Name: replication-mode-detector + Vpa Object Name: mg-rs + Total Samples Count: 3 + Version: v3 + Conditions: + Last Transition Time: 2022-10-27T07:01:05Z + Message: Successfully created RabbitMQOpsRequest demo/mops-mg-rs-cxhsy1 + Observed Generation: 1 + Reason: CreateOpsRequest + Status: True + Type: CreateOpsRequest + Vpas: + Conditions: + Last Transition Time: 2022-10-27T07:01:00Z + Status: True + Type: RecommendationProvided + Recommendation: + Container Recommendations: + Container Name: RabbitMQ + Lower Bound: + Cpu: 400m + Memory: 400Mi + Target: + Cpu: 400m + Memory: 400Mi + Uncapped Target: + Cpu: 49m + Memory: 262144k + Upper Bound: + Cpu: 1 + Memory: 1Gi + Vpa Name: mg-rs 
+Events: +``` +So, the `RabbitMQautoscaler` resource is created successfully. + +you can see in the `Status.VPAs.Recommendation` section, that recommendation has been generated for our database. Our autoscaler operator continuously watches the recommendation generated and creates an `RabbitMQopsrequest` based on the recommendations, if the database pods are needed to scaled up or down. + +Let's watch the `RabbitMQopsrequest` in the demo namespace to see if any `RabbitMQopsrequest` object is created. After some time you'll see that a `RabbitMQopsrequest` will be created based on the recommendation. + +```bash +$ watch kubectl get RabbitMQopsrequest -n demo +Every 2.0s: kubectl get RabbitMQopsrequest -n demo +NAME TYPE STATUS AGE +mops-mg-rs-cxhsy1 VerticalScaling Progressing 10s +``` + +Let's wait for the ops request to become successful. + +```bash +$ watch kubectl get RabbitMQopsrequest -n demo +Every 2.0s: kubectl get RabbitMQopsrequest -n demo +NAME TYPE STATUS AGE +mops-mg-rs-cxhsy1 VerticalScaling Successful 68s +``` + +We can see from the above output that the `RabbitMQOpsRequest` has succeeded. If we describe the `RabbitMQOpsRequest` we will get an overview of the steps that were followed to scale the database. + +```bash +$ kubectl describe RabbitMQopsrequest -n demo mops-mg-rs-cxhsy1 +Name: mops-mg-rs-cxhsy1 +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: RabbitMQOpsRequest +Metadata: + Creation Timestamp: 2022-10-27T07:01:05Z + Generation: 1 + Managed Fields: + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:ownerReferences: + .: + k:{"uid":"ab03414a-67a2-4da4-8960-6e67ae56b503"}: + f:spec: + .: + f:apply: + f:databaseRef: + f:timeout: + f:type: + f:verticalScaling: + .: + f:replicaSet: + .: + f:limits: + .: + f:cpu: + f:memory: + f:requests: + .: + f:cpu: + f:memory: + Manager: kubedb-autoscaler + Operation: Update + Time: 2022-10-27T07:01:05Z + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:status: + .: + f:conditions: + f:observedGeneration: + f:phase: + Manager: kubedb-ops-manager + Operation: Update + Subresource: status + Time: 2022-10-27T07:02:31Z + Owner References: + API Version: autoscaling.kubedb.com/v1alpha1 + Block Owner Deletion: true + Controller: true + Kind: RabbitMQAutoscaler + Name: mg-as-rs + UID: ab03414a-67a2-4da4-8960-6e67ae56b503 + Resource Version: 640598 + UID: f7c6db00-dd0e-4850-8bad-5f0855ce3850 +Spec: + Apply: IfReady + Database Ref: + Name: mg-rs + Timeout: 3m0s + Type: VerticalScaling + Vertical Scaling: + Replica Set: + Limits: + Cpu: 400m + Memory: 400Mi + Requests: + Cpu: 400m + Memory: 400Mi +Status: + Conditions: + Last Transition Time: 2022-10-27T07:01:05Z + Message: RabbitMQ ops request is vertically scaling database + Observed Generation: 1 + Reason: VerticalScaling + Status: True + Type: VerticalScaling + Last Transition Time: 2022-10-27T07:02:30Z + Message: Successfully Vertically Scaled Replicaset Resources + Observed Generation: 1 + Reason: UpdateReplicaSetResources + Status: True + Type: UpdateReplicaSetResources + Last Transition Time: 2022-10-27T07:02:31Z + Message: Successfully Vertically Scaled Database + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal PauseDatabase 4m9s KubeDB Ops-manager Operator Pausing RabbitMQ demo/mg-rs + Normal PauseDatabase 4m9s KubeDB Ops-manager 
Operator Successfully paused RabbitMQ demo/mg-rs + Normal Starting 4m9s KubeDB Ops-manager Operator Updating Resources of StatefulSet: mg-rs + Normal UpdateReplicaSetResources 4m9s KubeDB Ops-manager Operator Successfully updated replicaset Resources + Normal Starting 4m9s KubeDB Ops-manager Operator Updating Resources of StatefulSet: mg-rs + Normal UpdateReplicaSetResources 4m9s KubeDB Ops-manager Operator Successfully updated replicaset Resources + Normal UpdateReplicaSetResources 2m44s KubeDB Ops-manager Operator Successfully Vertically Scaled Replicaset Resources + Normal ResumeDatabase 2m43s KubeDB Ops-manager Operator Resuming RabbitMQ demo/mg-rs + Normal ResumeDatabase 2m43s KubeDB Ops-manager Operator Successfully resumed RabbitMQ demo/mg-rs + Normal Successful 2m43s KubeDB Ops-manager Operator Successfully Vertically Scaled Database + Normal UpdateReplicaSetResources 2m43s KubeDB Ops-manager Operator Successfully Vertically Scaled Replicaset Resources + +``` + +Now, we are going to verify from the Pod, and the RabbitMQ yaml whether the resources of the replicaset database has updated to meet up the desired state, Let's check, + +```bash +$ kubectl get pod -n demo mg-rs-0 -o json | jq '.spec.containers[].resources' +{ + "limits": { + "cpu": "400m", + "memory": "400Mi" + }, + "requests": { + "cpu": "400m", + "memory": "400Mi" + } +} + +$ kubectl get RabbitMQ -n demo mg-rs -o json | jq '.spec.podTemplate.spec.resources' +{ + "limits": { + "cpu": "400m", + "memory": "400Mi" + }, + "requests": { + "cpu": "400m", + "memory": "400Mi" + } +} +``` + + +The above output verifies that we have successfully auto scaled the resources of the RabbitMQ replicaset database. + +## Cleaning Up + +To clean up the Kubernetes resources created by this tutorial, run: + +```bash +kubectl delete mg -n demo mg-rs +kubectl delete RabbitMQautoscaler -n demo mg-as-rs +``` \ No newline at end of file diff --git a/docs/guides/rabbitmq/autoscaler/compute/sharding.md b/docs/guides/rabbitmq/autoscaler/compute/sharding.md new file mode 100644 index 0000000000..7772f435b1 --- /dev/null +++ b/docs/guides/rabbitmq/autoscaler/compute/sharding.md @@ -0,0 +1,571 @@ +--- +title: RabbitMQ Shard Autoscaling +menu: + docs_{{ .version }}: + identifier: mg-auto-scaling-shard + name: Sharding + parent: mg-compute-auto-scaling + weight: 25 +menu_name: docs_{{ .version }} +section_menu_id: guides +--- + +> New to KubeDB? Please start [here](/docs/README.md). + +# Autoscaling the Compute Resource of a RabbitMQ Sharded Database + +This guide will show you how to use `KubeDB` to autoscale compute resources i.e. cpu and memory of a RabbitMQ sharded database. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. + +- Install `KubeDB` Provisioner, Ops-manager and Autoscaler operator in your cluster following the steps [here](/docs/setup/README.md). + +- Install `Metrics Server` from [here](https://github.com/kubernetes-sigs/metrics-server#installation) + +- You should be familiar with the following `KubeDB` concepts: + - [RabbitMQ](/docs/guides/RabbitMQ/concepts/RabbitMQ.md) + - [RabbitMQAutoscaler](/docs/guides/RabbitMQ/concepts/autoscaler.md) + - [RabbitMQOpsRequest](/docs/guides/RabbitMQ/concepts/opsrequest.md) + - [Compute Resource Autoscaling Overview](/docs/guides/RabbitMQ/autoscaler/compute/overview.md) + +To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial. 
+ +```bash +$ kubectl create ns demo +namespace/demo created +``` + +> **Note:** YAML files used in this tutorial are stored in [docs/examples/RabbitMQ](/docs/examples/RabbitMQ) directory of [kubedb/docs](https://github.com/kubedb/docs) repository. + +## Autoscaling of Sharded Database + +Here, we are going to deploy a `RabbitMQ` sharded database using a supported version by `KubeDB` operator. Then we are going to apply `RabbitMQAutoscaler` to set up autoscaling. + +#### Deploy RabbitMQ Sharded Database + +In this section, we are going to deploy a RabbitMQ sharded database with version `4.4.26`. Then, in the next section we will set up autoscaling for this database using `RabbitMQAutoscaler` CRD. Below is the YAML of the `RabbitMQ` CR that we are going to create, + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: RabbitMQ +metadata: + name: mg-sh + namespace: demo +spec: + version: "4.4.26" + storageType: Durable + shardTopology: + configServer: + storage: + resources: + requests: + storage: 1Gi + replicas: 3 + podTemplate: + spec: + resources: + requests: + cpu: "200m" + memory: "300Mi" + mongos: + replicas: 2 + podTemplate: + spec: + resources: + requests: + cpu: "200m" + memory: "300Mi" + shard: + storage: + resources: + requests: + storage: 1Gi + replicas: 3 + shards: 2 + podTemplate: + spec: + resources: + requests: + cpu: "200m" + memory: "300Mi" + terminationPolicy: WipeOut +``` + +Let's create the `RabbitMQ` CRO we have shown above, + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/RabbitMQ/autoscaling/compute/mg-sh.yaml +RabbitMQ.kubedb.com/mg-sh created +``` + +Now, wait until `mg-sh` has status `Ready`. i.e, + +```bash +$ kubectl get mg -n demo +NAME VERSION STATUS AGE +mg-sh 4.4.26 Ready 3m57s +``` + +Let's check a shard Pod containers resources, + +```bash +$ kubectl get pod -n demo mg-sh-shard0-0 -o json | jq '.spec.containers[].resources' +{ + "limits": { + "cpu": "200m", + "memory": "300Mi" + }, + "requests": { + "cpu": "200m", + "memory": "300Mi" + } +} +``` + +Let's check the RabbitMQ resources, +```bash +$ kubectl get RabbitMQ -n demo mg-sh -o json | jq '.spec.shardTopology.shard.podTemplate.spec.resources' +{ + "limits": { + "cpu": "200m", + "memory": "300Mi" + }, + "requests": { + "cpu": "200m", + "memory": "300Mi" + } +} +``` + +You can see from the above outputs that the resources are same as the one we have assigned while deploying the RabbitMQ. + +We are now ready to apply the `RabbitMQAutoscaler` CRO to set up autoscaling for this database. + +### Compute Resource Autoscaling + +Here, we are going to set up compute resource autoscaling using a RabbitMQAutoscaler Object. + +#### Create RabbitMQAutoscaler Object + +In order to set up compute resource autoscaling for the shard pod of the database, we have to create a `RabbitMQAutoscaler` CRO with our desired configuration. 
Below is the YAML of the `RabbitMQAutoscaler` object that we are going to create, + +```yaml +apiVersion: autoscaling.kubedb.com/v1alpha1 +kind: RabbitMQAutoscaler +metadata: + name: mg-as-sh + namespace: demo +spec: + databaseRef: + name: mg-sh + opsRequestOptions: + timeout: 3m + apply: IfReady + compute: + shard: + trigger: "On" + podLifeTimeThreshold: 5m + resourceDiffPercentage: 20 + minAllowed: + cpu: 400m + memory: 400Mi + maxAllowed: + cpu: 1 + memory: 1Gi + controlledResources: ["cpu", "memory"] + containerControlledValues: "RequestsAndLimits" +``` + +Here, + +- `spec.databaseRef.name` specifies that we are performing compute resource scaling operation on `mg-sh` database. +- `spec.compute.shard.trigger` specifies that compute autoscaling is enabled for the shard pods of this database. +- `spec.compute.shard.podLifeTimeThreshold` specifies the minimum lifetime for at least one of the pod to initiate a vertical scaling. +- `spec.compute.replicaset.resourceDiffPercentage` specifies the minimum resource difference in percentage. The default is 10%. + If the difference between current & recommended resource is less than ResourceDiffPercentage, Autoscaler Operator will ignore the updating. +- `spec.compute.shard.minAllowed` specifies the minimum allowed resources for the database. +- `spec.compute.shard.maxAllowed` specifies the maximum allowed resources for the database. +- `spec.compute.shard.controlledResources` specifies the resources that are controlled by the autoscaler. +- `spec.compute.shard.containerControlledValues` specifies which resource values should be controlled. The default is "RequestsAndLimits". +- `spec.opsRequestOptions` contains the options to pass to the created OpsRequest. It has 3 fields. Know more about them here : [readinessCriteria](/docs/guides/RabbitMQ/concepts/opsrequest.md#specreadinesscriteria), [timeout](/docs/guides/RabbitMQ/concepts/opsrequest.md#spectimeout), [apply](/docs/guides/RabbitMQ/concepts/opsrequest.md#specapply). +> Note: In this demo we are only setting up the autoscaling for the shard pods, that's why we only specified the shard section of the autoscaler. You can enable autoscaling for mongos and configServer pods in the same yaml, by specifying the `spec.compute.mongos` and `spec.compute.configServer` section, similar to the `spec.comput.shard` section we have configured in this demo. + +If it was an `InMemory database`, we could also autoscaler the inMemory resources using RabbitMQ compute autoscaler, like below. + +#### Autoscale inMemory database +To autoscale inMemory databases, you need to specify the `spec.compute.shard.inMemoryStorage` section. + +```yaml + ... + inMemoryStorage: + usageThresholdPercentage: 80 + scalingFactorPercentage: 30 + ... +``` +It has two fields inside it. +- `usageThresholdPercentage`. If db uses more than usageThresholdPercentage of the total memory, memoryStorage should be increased. Default usage threshold is 70%. +- `scalingFactorPercentage`. If db uses more than usageThresholdPercentage of the total memory, memoryStorage should be increased by this given scaling percentage. Default scaling percentage is 50%. + +> Note: To inform you, We use `db.serverStatus().inMemory.cache["bytes currently in the cache"]` & `db.serverStatus().inMemory.cache["maximum bytes configured"]` to calculate the used & maximum inMemory storage respectively. 
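+As the note above mentions, the same autoscaler object can also cover the `mongos` and `configServer` pods. A sketch of such a combined `compute` section (the resource bounds are illustrative):
+
+```yaml
+spec:
+  compute:
+    shard:
+      trigger: "On"
+      # ... the shard settings configured above ...
+    configServer:
+      trigger: "On"
+      minAllowed:
+        cpu: 400m
+        memory: 400Mi
+      maxAllowed:
+        cpu: 1
+        memory: 1Gi
+    mongos:
+      trigger: "On"
+      minAllowed:
+        cpu: 400m
+        memory: 400Mi
+      maxAllowed:
+        cpu: 1
+        memory: 1Gi
+```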
+ + +Let's create the `RabbitMQAutoscaler` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/RabbitMQ/autoscaling/compute/mg-as-sh.yaml +RabbitMQautoscaler.autoscaling.kubedb.com/mg-as-sh created +``` + +#### Verify Autoscaling is set up successfully + +Let's check that the `RabbitMQautoscaler` resource is created successfully, + +```bash +$ kubectl get RabbitMQautoscaler -n demo +NAME AGE +mg-as-sh 102s + +$ kubectl describe RabbitMQautoscaler mg-as-sh -n demo +Name: mg-as-sh +Namespace: demo +Labels: +Annotations: +API Version: autoscaling.kubedb.com/v1alpha1 +Kind: RabbitMQAutoscaler +Metadata: + Creation Timestamp: 2022-10-27T09:46:48Z + Generation: 1 + Managed Fields: + API Version: autoscaling.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:annotations: + .: + f:kubectl.kubernetes.io/last-applied-configuration: + f:spec: + .: + f:compute: + .: + f:shard: + .: + f:containerControlledValues: + f:controlledResources: + f:maxAllowed: + .: + f:cpu: + f:memory: + f:minAllowed: + .: + f:cpu: + f:memory: + f:podLifeTimeThreshold: + f:resourceDiffPercentage: + f:trigger: + f:databaseRef: + f:opsRequestOptions: + .: + f:apply: + f:timeout: + Manager: kubectl-client-side-apply + Operation: Update + Time: 2022-10-27T09:46:48Z + API Version: autoscaling.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:status: + .: + f:checkpoints: + f:conditions: + f:vpas: + Manager: kubedb-autoscaler + Operation: Update + Subresource: status + Time: 2022-10-27T09:47:08Z + Resource Version: 654853 + UID: 36878e8e-f100-409e-aa76-e6f46569df76 +Spec: + Compute: + Shard: + Container Controlled Values: RequestsAndLimits + Controlled Resources: + cpu + memory + Max Allowed: + Cpu: 1 + Memory: 1Gi + Min Allowed: + Cpu: 400m + Memory: 400Mi + Pod Life Time Threshold: 5m0s + Resource Diff Percentage: 20 + Trigger: On + Database Ref: + Name: mg-sh + Ops Request Options: + Apply: IfReady + Timeout: 3m0s +Status: + Checkpoints: + Cpu Histogram: + Bucket Weights: + Index: 1 + Weight: 5001 + Index: 2 + Weight: 10000 + Reference Timestamp: 2022-10-27T00:00:00Z + Total Weight: 0.397915611757652 + First Sample Start: 2022-10-27T09:46:43Z + Last Sample Start: 2022-10-27T09:46:57Z + Last Update Time: 2022-10-27T09:47:06Z + Memory Histogram: + Reference Timestamp: 2022-10-28T00:00:00Z + Ref: + Container Name: RabbitMQ + Vpa Object Name: mg-sh-shard0 + Total Samples Count: 3 + Version: v3 + Cpu Histogram: + Bucket Weights: + Index: 1 + Weight: 10000 + Reference Timestamp: 2022-10-27T00:00:00Z + Total Weight: 0.39793263724156597 + First Sample Start: 2022-10-27T09:46:50Z + Last Sample Start: 2022-10-27T09:46:56Z + Last Update Time: 2022-10-27T09:47:06Z + Memory Histogram: + Reference Timestamp: 2022-10-28T00:00:00Z + Ref: + Container Name: RabbitMQ + Vpa Object Name: mg-sh-shard1 + Total Samples Count: 3 + Version: v3 + Conditions: + Last Transition Time: 2022-10-27T09:47:08Z + Message: Successfully created RabbitMQOpsRequest demo/mops-vpa-mg-sh-shard-ml75qi + Observed Generation: 1 + Reason: CreateOpsRequest + Status: True + Type: CreateOpsRequest + Vpas: + Conditions: + Last Transition Time: 2022-10-27T09:47:06Z + Status: True + Type: RecommendationProvided + Recommendation: + Container Recommendations: + Container Name: RabbitMQ + Lower Bound: + Cpu: 400m + Memory: 400Mi + Target: + Cpu: 400m + Memory: 400Mi + Uncapped Target: + Cpu: 35m + Memory: 262144k + Upper Bound: + Cpu: 1 + Memory: 1Gi + Vpa Name: 
mg-sh-shard0 + Conditions: + Last Transition Time: 2022-10-27T09:47:06Z + Status: True + Type: RecommendationProvided + Recommendation: + Container Recommendations: + Container Name: RabbitMQ + Lower Bound: + Cpu: 400m + Memory: 400Mi + Target: + Cpu: 400m + Memory: 400Mi + Uncapped Target: + Cpu: 25m + Memory: 262144k + Upper Bound: + Cpu: 1 + Memory: 1Gi + Vpa Name: mg-sh-shard1 +Events: + +``` +So, the `RabbitMQautoscaler` resource is created successfully. + +you can see in the `Status.VPAs.Recommendation` section, that recommendation has been generated for our database. Our autoscaler operator continuously watches the recommendation generated and creates an `RabbitMQopsrequest` based on the recommendations, if the database pods are needed to scaled up or down. + +Let's watch the `RabbitMQopsrequest` in the demo namespace to see if any `RabbitMQopsrequest` object is created. After some time you'll see that a `RabbitMQopsrequest` will be created based on the recommendation. + +```bash +$ watch kubectl get RabbitMQopsrequest -n demo +Every 2.0s: kubectl get RabbitMQopsrequest -n demo +NAME TYPE STATUS AGE +mops-vpa-mg-sh-shard-ml75qi VerticalScaling Progressing 19s +``` + +Let's wait for the ops request to become successful. + +```bash +$ watch kubectl get RabbitMQopsrequest -n demo +Every 2.0s: kubectl get RabbitMQopsrequest -n demo +NAME TYPE STATUS AGE +mops-vpa-mg-sh-shard-ml75qi VerticalScaling Successful 5m8s +``` + +We can see from the above output that the `RabbitMQOpsRequest` has succeeded. If we describe the `RabbitMQOpsRequest` we will get an overview of the steps that were followed to scale the database. + +```bash +$ kubectl describe RabbitMQopsrequest -n demo mops-vpa-mg-sh-shard-ml75qi +Name: mops-vpa-mg-sh-shard-ml75qi +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: RabbitMQOpsRequest +Metadata: + Creation Timestamp: 2022-10-27T09:47:08Z + Generation: 1 + Managed Fields: + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:ownerReferences: + .: + k:{"uid":"36878e8e-f100-409e-aa76-e6f46569df76"}: + f:spec: + .: + f:apply: + f:databaseRef: + f:timeout: + f:type: + f:verticalScaling: + .: + f:shard: + .: + f:limits: + .: + f:memory: + f:requests: + .: + f:cpu: + f:memory: + Manager: kubedb-autoscaler + Operation: Update + Time: 2022-10-27T09:47:08Z + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:status: + .: + f:conditions: + f:observedGeneration: + f:phase: + Manager: kubedb-ops-manager + Operation: Update + Subresource: status + Time: 2022-10-27T09:49:49Z + Owner References: + API Version: autoscaling.kubedb.com/v1alpha1 + Block Owner Deletion: true + Controller: true + Kind: RabbitMQAutoscaler + Name: mg-as-sh + UID: 36878e8e-f100-409e-aa76-e6f46569df76 + Resource Version: 655347 + UID: c44fbd53-40f9-42ca-9b4c-823d8e998d01 +Spec: + Apply: IfReady + Database Ref: + Name: mg-sh + Timeout: 3m0s + Type: VerticalScaling + Vertical Scaling: + Shard: + Limits: + Memory: 400Mi + Requests: + Cpu: 400m + Memory: 400Mi +Status: + Conditions: + Last Transition Time: 2022-10-27T09:47:08Z + Message: RabbitMQ ops request is vertically scaling database + Observed Generation: 1 + Reason: VerticalScaling + Status: True + Type: VerticalScaling + Last Transition Time: 2022-10-27T09:49:49Z + Message: Successfully Vertically Scaled Shard Resources + Observed Generation: 1 + Reason: UpdateShardResources + Status: True + Type: UpdateShardResources + Last Transition Time: 
2022-10-27T09:49:49Z + Message: Successfully Vertically Scaled Database + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal PauseDatabase 3m27s KubeDB Ops-manager Operator Pausing RabbitMQ demo/mg-sh + Normal PauseDatabase 3m27s KubeDB Ops-manager Operator Successfully paused RabbitMQ demo/mg-sh + Normal Starting 3m27s KubeDB Ops-manager Operator Updating Resources of StatefulSet: mg-sh-shard0 + Normal Starting 3m27s KubeDB Ops-manager Operator Updating Resources of StatefulSet: mg-sh-shard1 + Normal UpdateShardResources 3m27s KubeDB Ops-manager Operator Successfully updated Shard Resources + Normal Starting 3m27s KubeDB Ops-manager Operator Updating Resources of StatefulSet: mg-sh-shard0 + Normal Starting 3m27s KubeDB Ops-manager Operator Updating Resources of StatefulSet: mg-sh-shard1 + Normal UpdateShardResources 3m27s KubeDB Ops-manager Operator Successfully updated Shard Resources + Normal UpdateShardResources 46s KubeDB Ops-manager Operator Successfully Vertically Scaled Shard Resources + Normal ResumeDatabase 46s KubeDB Ops-manager Operator Resuming RabbitMQ demo/mg-sh + Normal ResumeDatabase 46s KubeDB Ops-manager Operator Successfully resumed RabbitMQ demo/mg-sh + Normal Successful 46s KubeDB Ops-manager Operator Successfully Vertically Scaled Database +``` + +Now, we are going to verify from the Pod, and the RabbitMQ yaml whether the resources of the shard pod of the database has updated to meet up the desired state, Let's check, + +```bash +$ kubectl get pod -n demo mg-sh-shard0-0 -o json | jq '.spec.containers[].resources' +{ + "limits": { + "memory": "400Mi" + }, + "requests": { + "cpu": "400m", + "memory": "400Mi" + } +} + + +$ kubectl get RabbitMQ -n demo mg-sh -o json | jq '.spec.shardTopology.shard.podTemplate.spec.resources' +{ + "limits": { + "memory": "400Mi" + }, + "requests": { + "cpu": "400m", + "memory": "400Mi" + } +} + +``` + + +The above output verifies that we have successfully auto scaled the resources of the RabbitMQ sharded database. + +## Cleaning Up + +To clean up the Kubernetes resources created by this tutorial, run: + +```bash +kubectl delete mg -n demo mg-sh +kubectl delete RabbitMQautoscaler -n demo mg-as-sh +``` \ No newline at end of file diff --git a/docs/guides/rabbitmq/autoscaler/compute/standalone.md b/docs/guides/rabbitmq/autoscaler/compute/standalone.md new file mode 100644 index 0000000000..d72799b003 --- /dev/null +++ b/docs/guides/rabbitmq/autoscaler/compute/standalone.md @@ -0,0 +1,511 @@ +--- +title: RabbitMQ Standalone Autoscaling +menu: + docs_{{ .version }}: + identifier: mg-auto-scaling-standalone + name: Standalone + parent: mg-compute-auto-scaling + weight: 15 +menu_name: docs_{{ .version }} +section_menu_id: guides +--- + +> New to KubeDB? Please start [here](/docs/README.md). + +# Autoscaling the Compute Resource of a RabbitMQ Standalone Database + +This guide will show you how to use `KubeDB` to autoscale compute resources i.e. cpu and memory of a RabbitMQ standalone database. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. + +- Install `KubeDB` Provisioner, Ops-manager and Autoscaler operator in your cluster following the steps [here](/docs/setup/README.md). 
+ +- Install `Metrics Server` from [here](https://github.com/kubernetes-sigs/metrics-server#installation) + +- You should be familiar with the following `KubeDB` concepts: + - [RabbitMQ](/docs/guides/RabbitMQ/concepts/RabbitMQ.md) + - [RabbitMQAutoscaler](/docs/guides/RabbitMQ/concepts/autoscaler.md) + - [RabbitMQOpsRequest](/docs/guides/RabbitMQ/concepts/opsrequest.md) + - [Compute Resource Autoscaling Overview](/docs/guides/RabbitMQ/autoscaler/compute/overview.md) + +To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial. + +```bash +$ kubectl create ns demo +namespace/demo created +``` + +> **Note:** YAML files used in this tutorial are stored in [docs/examples/RabbitMQ](/docs/examples/RabbitMQ) directory of [kubedb/docs](https://github.com/kubedb/docs) repository. + +## Autoscaling of Standalone Database + +Here, we are going to deploy a `RabbitMQ` standalone using a supported version by `KubeDB` operator. Then we are going to apply `RabbitMQAutoscaler` to set up autoscaling. + +#### Deploy RabbitMQ standalone + +In this section, we are going to deploy a RabbitMQ standalone database with version `4.4.26`. Then, in the next section we will set up autoscaling for this database using `RabbitMQAutoscaler` CRD. Below is the YAML of the `RabbitMQ` CR that we are going to create, + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: RabbitMQ +metadata: + name: mg-standalone + namespace: demo +spec: + version: "4.4.26" + storageType: Durable + storage: + resources: + requests: + storage: 1Gi + podTemplate: + spec: + resources: + requests: + cpu: "200m" + memory: "300Mi" + limits: + cpu: "200m" + memory: "300Mi" + terminationPolicy: WipeOut +``` + +Let's create the `RabbitMQ` CRO we have shown above, + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/RabbitMQ/autoscaling/compute/mg-standalone.yaml +RabbitMQ.kubedb.com/mg-standalone created +``` + +Now, wait until `mg-standalone` has status `Ready`. i.e, + +```bash +$ kubectl get mg -n demo +NAME VERSION STATUS AGE +mg-standalone 4.4.26 Ready 2m53s +``` + +Let's check the Pod containers resources, + +```bash +$ kubectl get pod -n demo mg-standalone-0 -o json | jq '.spec.containers[].resources' +{ + "limits": { + "cpu": "200m", + "memory": "300Mi" + }, + "requests": { + "cpu": "200m", + "memory": "300Mi" + } +} +``` + +Let's check the RabbitMQ resources, +```bash +$ kubectl get RabbitMQ -n demo mg-standalone -o json | jq '.spec.podTemplate.spec.resources' +{ + "limits": { + "cpu": "200m", + "memory": "300Mi" + }, + "requests": { + "cpu": "200m", + "memory": "300Mi" + } +} +``` + +You can see from the above outputs that the resources are same as the one we have assigned while deploying the RabbitMQ. + +We are now ready to apply the `RabbitMQAutoscaler` CRO to set up autoscaling for this database. + +### Compute Resource Autoscaling + +Here, we are going to set up compute (cpu and memory) autoscaling using a RabbitMQAutoscaler Object. + +#### Create RabbitMQAutoscaler Object + +In order to set up compute resource autoscaling for this standalone database, we have to create a `RabbitMQAutoscaler` CRO with our desired configuration. 
Below is the YAML of the `RabbitMQAutoscaler` object that we are going to create, + +```yaml +apiVersion: autoscaling.kubedb.com/v1alpha1 +kind: RabbitMQAutoscaler +metadata: + name: mg-as + namespace: demo +spec: + databaseRef: + name: mg-standalone + opsRequestOptions: + timeout: 3m + apply: IfReady + compute: + standalone: + trigger: "On" + podLifeTimeThreshold: 5m + resourceDiffPercentage: 20 + minAllowed: + cpu: 400m + memory: 400Mi + maxAllowed: + cpu: 1 + memory: 1Gi + controlledResources: ["cpu", "memory"] + containerControlledValues: "RequestsAndLimits" +``` + +Here, + +- `spec.databaseRef.name` specifies that we are performing compute resource autoscaling on `mg-standalone` database. +- `spec.compute.standalone.trigger` specifies that compute resource autoscaling is enabled for this database. +- `spec.compute.standalone.podLifeTimeThreshold` specifies the minimum lifetime for at least one of the pod to initiate a vertical scaling. +- `spec.compute.replicaset.resourceDiffPercentage` specifies the minimum resource difference in percentage. The default is 10%. + If the difference between current & recommended resource is less than ResourceDiffPercentage, Autoscaler Operator will ignore the updating. +- `spec.compute.standalone.minAllowed` specifies the minimum allowed resources for the database. +- `spec.compute.standalone.maxAllowed` specifies the maximum allowed resources for the database. +- `spec.compute.standalone.controlledResources` specifies the resources that are controlled by the autoscaler. +- `spec.compute.standalone.containerControlledValues` specifies which resource values should be controlled. The default is "RequestsAndLimits". +- `spec.opsRequestOptions` contains the options to pass to the created OpsRequest. It has 3 fields. Know more about them here : [readinessCriteria](/docs/guides/RabbitMQ/concepts/opsrequest.md#specreadinesscriteria), [timeout](/docs/guides/RabbitMQ/concepts/opsrequest.md#spectimeout), [apply](/docs/guides/RabbitMQ/concepts/opsrequest.md#specapply). + +If it was an `InMemory database`, we could also autoscaler the inMemory resources using RabbitMQ compute autoscaler, like below. + +#### Autoscale inMemory database +To autoscale inMemory databases, you need to specify the `spec.compute.standalone.inMemoryStorage` section. + +```yaml + ... + inMemoryStorage: + usageThresholdPercentage: 80 + scalingFactorPercentage: 30 + ... +``` +It has two fields inside it. +- `usageThresholdPercentage`. If db uses more than usageThresholdPercentage of the total memory, memoryStorage should be increased. Default usage threshold is 70%. +- `scalingFactorPercentage`. If db uses more than usageThresholdPercentage of the total memory, memoryStorage should be increased by this given scaling percentage. Default scaling percentage is 50%. + +> Note: To inform you, We use `db.serverStatus().inMemory.cache["bytes currently in the cache"]` & `db.serverStatus().inMemory.cache["maximum bytes configured"]` to calculate the used & maximum inMemory storage respectively. 
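+As a concrete reading of the defaults: a standalone provisioned with 1Gi of in-memory storage would trigger scaling once cache usage crosses 70% (roughly 716Mi), and the recommended in-memory storage would then grow by 50%, to about 1.5Gi:
+
+```yaml
+  inMemoryStorage:
+    usageThresholdPercentage: 70   # default; 70% of 1Gi ≈ 716Mi triggers scaling
+    scalingFactorPercentage: 50    # default; 1Gi is increased by 50%, to ~1.5Gi
+```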
+ + +Let's create the `RabbitMQAutoscaler` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/RabbitMQ/autoscaling/compute/mg-as-standalone.yaml +RabbitMQautoscaler.autoscaling.kubedb.com/mg-as created +``` + +#### Verify Autoscaling is set up successfully + +Let's check that the `RabbitMQautoscaler` resource is created successfully, + +```bash +$ kubectl get RabbitMQautoscaler -n demo +NAME AGE +mg-as 102s + +$ kubectl describe RabbitMQautoscaler mg-as -n demo +Name: mg-as +Namespace: demo +Labels: +Annotations: +API Version: autoscaling.kubedb.com/v1alpha1 +Kind: RabbitMQAutoscaler +Metadata: + Creation Timestamp: 2022-10-27T09:54:35Z + Generation: 1 + Managed Fields: + API Version: autoscaling.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:annotations: + .: + f:kubectl.kubernetes.io/last-applied-configuration: + f:spec: + .: + f:compute: + .: + f:standalone: + .: + f:containerControlledValues: + f:controlledResources: + f:maxAllowed: + .: + f:cpu: + f:memory: + f:minAllowed: + .: + f:cpu: + f:memory: + f:podLifeTimeThreshold: + f:resourceDiffPercentage: + f:trigger: + f:databaseRef: + f:opsRequestOptions: + .: + f:apply: + f:timeout: + Manager: kubectl-client-side-apply + Operation: Update + Time: 2022-10-27T09:54:35Z + API Version: autoscaling.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:status: + .: + f:checkpoints: + f:conditions: + f:vpas: + Manager: kubedb-autoscaler + Operation: Update + Subresource: status + Time: 2022-10-27T09:55:08Z + Resource Version: 656164 + UID: 439c148f-7c22-456f-a4b4-758cead29932 +Spec: + Compute: + Standalone: + Container Controlled Values: RequestsAndLimits + Controlled Resources: + cpu + memory + Max Allowed: + Cpu: 1 + Memory: 1Gi + Min Allowed: + Cpu: 400m + Memory: 400Mi + Pod Life Time Threshold: 5m0s + Resource Diff Percentage: 20 + Trigger: On + Database Ref: + Name: mg-standalone + Ops Request Options: + Apply: IfReady + Timeout: 3m0s +Status: + Checkpoints: + Cpu Histogram: + Bucket Weights: + Index: 6 + Weight: 10000 + Reference Timestamp: 2022-10-27T00:00:00Z + Total Weight: 0.133158834498727 + First Sample Start: 2022-10-27T09:54:56Z + Last Sample Start: 2022-10-27T09:54:56Z + Last Update Time: 2022-10-27T09:55:07Z + Memory Histogram: + Reference Timestamp: 2022-10-28T00:00:00Z + Ref: + Container Name: RabbitMQ + Vpa Object Name: mg-standalone + Total Samples Count: 1 + Version: v3 + Conditions: + Last Transition Time: 2022-10-27T09:55:08Z + Message: Successfully created RabbitMQOpsRequest demo/mops-mg-standalone-57huq2 + Observed Generation: 1 + Reason: CreateOpsRequest + Status: True + Type: CreateOpsRequest + Vpas: + Conditions: + Last Transition Time: 2022-10-27T09:55:07Z + Status: True + Type: RecommendationProvided + Recommendation: + Container Recommendations: + Container Name: RabbitMQ + Lower Bound: + Cpu: 400m + Memory: 400Mi + Target: + Cpu: 400m + Memory: 400Mi + Uncapped Target: + Cpu: 93m + Memory: 262144k + Upper Bound: + Cpu: 1 + Memory: 1Gi + Vpa Name: mg-standalone +Events: + +``` +So, the `RabbitMQautoscaler` resource is created successfully. + +you can see in the `Status.VPAs.Recommendation` section, that recommendation has been generated for our database. Our autoscaler operator continuously watches the recommendation generated and creates an `RabbitMQopsrequest` based on the recommendations, if the database pods are needed to scaled up or down. 
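+If you only want the generated recommendation rather than the full output above, a `jsonpath` query along these lines should work (the field names are inferred from the `Status` section shown above):
+
+```bash
+$ kubectl get RabbitMQautoscaler -n demo mg-as -o jsonpath='{.status.vpas[*].recommendation.containerRecommendations[*].target}'
+```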
+ +Let's watch the `RabbitMQopsrequest` in the demo namespace to see if any `RabbitMQopsrequest` object is created. After some time you'll see that a `RabbitMQopsrequest` will be created based on the recommendation. + +```bash +$ watch kubectl get RabbitMQopsrequest -n demo +Every 2.0s: kubectl get RabbitMQopsrequest -n demo +NAME TYPE STATUS AGE +mops-mg-standalone-57huq2 VerticalScaling Progressing 10s +``` + +Let's wait for the ops request to become successful. + +```bash +$ watch kubectl get RabbitMQopsrequest -n demo +Every 2.0s: kubectl get RabbitMQopsrequest -n demo +NAME TYPE STATUS AGE +mops-mg-standalone-57huq2 VerticalScaling Successful 68s +``` + +We can see from the above output that the `RabbitMQOpsRequest` has succeeded. If we describe the `RabbitMQOpsRequest` we will get an overview of the steps that were followed to scale the database. + +```bash +$ kubectl describe RabbitMQopsrequest -n demo mops-mg-standalone-57huq2 +Name: mops-mg-standalone-57huq2 +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: RabbitMQOpsRequest +Metadata: + Creation Timestamp: 2022-10-27T09:55:08Z + Generation: 1 + Managed Fields: + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:ownerReferences: + .: + k:{"uid":"439c148f-7c22-456f-a4b4-758cead29932"}: + f:spec: + .: + f:apply: + f:databaseRef: + f:timeout: + f:type: + f:verticalScaling: + .: + f:standalone: + .: + f:limits: + .: + f:cpu: + f:memory: + f:requests: + .: + f:cpu: + f:memory: + Manager: kubedb-autoscaler + Operation: Update + Time: 2022-10-27T09:55:08Z + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:status: + .: + f:conditions: + f:observedGeneration: + f:phase: + Manager: kubedb-ops-manager + Operation: Update + Subresource: status + Time: 2022-10-27T09:55:33Z + Owner References: + API Version: autoscaling.kubedb.com/v1alpha1 + Block Owner Deletion: true + Controller: true + Kind: RabbitMQAutoscaler + Name: mg-as + UID: 439c148f-7c22-456f-a4b4-758cead29932 + Resource Version: 656279 + UID: 29908a23-7cba-4f81-b787-3f9d226993f8 +Spec: + Apply: IfReady + Database Ref: + Name: mg-standalone + Timeout: 3m0s + Type: VerticalScaling + Vertical Scaling: + Standalone: + Limits: + Cpu: 400m + Memory: 400Mi + Requests: + Cpu: 400m + Memory: 400Mi +Status: + Conditions: + Last Transition Time: 2022-10-27T09:55:08Z + Message: RabbitMQ ops request is vertically scaling database + Observed Generation: 1 + Reason: VerticalScaling + Status: True + Type: VerticalScaling + Last Transition Time: 2022-10-27T09:55:33Z + Message: Successfully Vertically Scaled Standalone Resources + Observed Generation: 1 + Reason: UpdateStandaloneResources + Status: True + Type: UpdateStandaloneResources + Last Transition Time: 2022-10-27T09:55:33Z + Message: Successfully Vertically Scaled Database + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal PauseDatabase 2m40s KubeDB Ops-manager Operator Pausing RabbitMQ demo/mg-standalone + Normal PauseDatabase 2m40s KubeDB Ops-manager Operator Successfully paused RabbitMQ demo/mg-standalone + Normal Starting 2m40s KubeDB Ops-manager Operator Updating Resources of StatefulSet: mg-standalone + Normal UpdateStandaloneResources 2m40s KubeDB Ops-manager Operator Successfully updated standalone Resources + Normal Starting 2m40s KubeDB Ops-manager Operator Updating Resources of 
StatefulSet: mg-standalone
+  Normal  UpdateStandaloneResources  2m40s  KubeDB Ops-manager Operator  Successfully updated standalone Resources
+  Normal  UpdateStandaloneResources  2m15s  KubeDB Ops-manager Operator  Successfully Vertically Scaled Standalone Resources
+  Normal  ResumeDatabase             2m15s  KubeDB Ops-manager Operator  Resuming RabbitMQ demo/mg-standalone
+  Normal  ResumeDatabase             2m15s  KubeDB Ops-manager Operator  Successfully resumed RabbitMQ demo/mg-standalone
+  Normal  Successful                 2m15s  KubeDB Ops-manager Operator  Successfully Vertically Scaled Database
+```
+
+Now, we are going to verify from the Pod and the RabbitMQ YAML whether the resources of the standalone database have been updated to meet the desired state. Let's check,
+
+```bash
+$ kubectl get pod -n demo mg-standalone-0 -o json | jq '.spec.containers[].resources'
+{
+  "limits": {
+    "cpu": "400m",
+    "memory": "400Mi"
+  },
+  "requests": {
+    "cpu": "400m",
+    "memory": "400Mi"
+  }
+}
+
+$ kubectl get RabbitMQ -n demo mg-standalone -o json | jq '.spec.podTemplate.spec.resources'
+{
+  "limits": {
+    "cpu": "400m",
+    "memory": "400Mi"
+  },
+  "requests": {
+    "cpu": "400m",
+    "memory": "400Mi"
+  }
+}
+```
+
+The above output verifies that we have successfully autoscaled the resources of the RabbitMQ standalone database.
+
+## Cleaning Up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete mg -n demo mg-standalone
+kubectl delete RabbitMQautoscaler -n demo mg-as
+```
\ No newline at end of file
diff --git a/docs/guides/rabbitmq/autoscaler/storage/_index.md b/docs/guides/rabbitmq/autoscaler/storage/_index.md
new file mode 100644
index 0000000000..1e28090c06
--- /dev/null
+++ b/docs/guides/rabbitmq/autoscaler/storage/_index.md
@@ -0,0 +1,10 @@
+---
+title: Storage Autoscaling
+menu:
+  docs_{{ .version }}:
+    identifier: mg-storage-auto-scaling
+    name: Storage Autoscaling
+    parent: mg-auto-scaling
+    weight: 46
+menu_name: docs_{{ .version }}
+---
diff --git a/docs/guides/rabbitmq/autoscaler/storage/overview.md b/docs/guides/rabbitmq/autoscaler/storage/overview.md
new file mode 100644
index 0000000000..60755c9bd8
--- /dev/null
+++ b/docs/guides/rabbitmq/autoscaler/storage/overview.md
@@ -0,0 +1,57 @@
+---
+title: RabbitMQ Storage Autoscaling Overview
+menu:
+  docs_{{ .version }}:
+    identifier: mg-storage-auto-scaling-overview
+    name: Overview
+    parent: mg-storage-auto-scaling
+    weight: 10
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# RabbitMQ Storage Autoscaling
+
+This guide gives an overview of how the KubeDB Autoscaler operator autoscales the database storage using the `RabbitMQAutoscaler` CRD.
+
+## Before You Begin
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [RabbitMQ](/docs/guides/RabbitMQ/concepts/RabbitMQ.md)
+  - [RabbitMQAutoscaler](/docs/guides/RabbitMQ/concepts/autoscaler.md)
+  - [RabbitMQOpsRequest](/docs/guides/RabbitMQ/concepts/opsrequest.md)
+
+## How Storage Autoscaling Works
+
+The following diagram shows how the KubeDB Autoscaler operator autoscales the storage of `RabbitMQ` database components. Open the image in a new tab to see the enlarged version.
+
+<figure align="center">
+  <figcaption align="center">Fig: Storage Auto Scaling process of RabbitMQ</figcaption>
+</figure>
+
+The Auto Scaling process consists of the following steps:
+
+1. At first, a user creates a `RabbitMQ` Custom Resource (CR).
+
+2. `KubeDB` Provisioner operator watches the `RabbitMQ` CR.
+
+3. When the operator finds a `RabbitMQ` CR, it creates the required number of `StatefulSets` and other necessary resources such as secrets, services, etc.
+
+- Each StatefulSet creates a Persistent Volume according to the Volume Claim Template provided in the statefulset configuration.
+
+4. Then, in order to set up storage autoscaling of the various components (i.e. ReplicaSet, Shard, ConfigServer, etc.) of the `RabbitMQ` database, the user creates a `RabbitMQAutoscaler` CRO with the desired configuration.
+
+5. `KubeDB` Autoscaler operator watches the `RabbitMQAutoscaler` CRO.
+
+6. `KubeDB` Autoscaler operator continuously watches the persistent volumes of the databases to check if their usage exceeds the specified usage threshold.
+- If the usage exceeds the specified usage threshold, then the `KubeDB` Autoscaler operator creates a `RabbitMQOpsRequest` to expand the storage of the database.
+
+7. `KubeDB` Ops-manager operator watches the `RabbitMQOpsRequest` CRO.
+
+8. Then the `KubeDB` Ops-manager operator will expand the storage of the database component as specified in the `RabbitMQOpsRequest` CRO.
+
+In the next docs, we are going to show a step-by-step guide on autoscaling storage of various RabbitMQ database components using the `RabbitMQAutoscaler` CRD.
diff --git a/docs/guides/rabbitmq/autoscaler/storage/replicaset.md b/docs/guides/rabbitmq/autoscaler/storage/replicaset.md
new file mode 100644
index 0000000000..63d89a7304
--- /dev/null
+++ b/docs/guides/rabbitmq/autoscaler/storage/replicaset.md
@@ -0,0 +1,387 @@
+---
+title: RabbitMQ Replicaset Autoscaling
+menu:
+  docs_{{ .version }}:
+    identifier: mg-storage-auto-scaling-replicaset
+    name: ReplicaSet
+    parent: mg-storage-auto-scaling
+    weight: 20
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Storage Autoscaling of a RabbitMQ Replicaset Database
+
+This guide will show you how to use `KubeDB` to autoscale the storage of a RabbitMQ Replicaset database.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster.
+
+- Install `KubeDB` Provisioner, Ops-manager and Autoscaler operator in your cluster following the steps [here](/docs/setup/README.md).
+
+- Install `Metrics Server` from [here](https://github.com/kubernetes-sigs/metrics-server#installation)
+
+- Install Prometheus from [here](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack)
+
+- You must have a `StorageClass` that supports volume expansion.
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [RabbitMQ](/docs/guides/RabbitMQ/concepts/RabbitMQ.md)
+  - [RabbitMQAutoscaler](/docs/guides/RabbitMQ/concepts/autoscaler.md)
+  - [RabbitMQOpsRequest](/docs/guides/RabbitMQ/concepts/opsrequest.md)
+  - [Storage Autoscaling Overview](/docs/guides/RabbitMQ/autoscaler/storage/overview.md)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+> **Note:** YAML files used in this tutorial are stored in [docs/examples/RabbitMQ](/docs/examples/RabbitMQ) directory of [kubedb/docs](https://github.com/kubedb/docs) repository.
+
+## Storage Autoscaling of ReplicaSet Database
+
+At first, verify that your cluster has a storage class that supports volume expansion. Let's check,
+
+```bash
+$ kubectl get storageclass
+NAME                  PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
+standard (default)    rancher.io/local-path   Delete          WaitForFirstConsumer   false                  9h
+topolvm-provisioner   topolvm.cybozu.com      Delete          WaitForFirstConsumer   true                   9h
+```
+
+We can see from the output that the `topolvm-provisioner` storage class has the `ALLOWVOLUMEEXPANSION` field set to true, so this storage class supports volume expansion and we can use it. You can install topolvm from [here](https://github.com/topolvm/topolvm).
+
+Now, we are going to deploy a `RabbitMQ` replicaset using a version supported by the `KubeDB` operator. Then we are going to apply `RabbitMQAutoscaler` to set up autoscaling.
+
+#### Deploy RabbitMQ replicaset
+
+In this section, we are going to deploy a RabbitMQ replicaset database with version `4.4.26`. Then, in the next section we will set up autoscaling for this database using the `RabbitMQAutoscaler` CRD. Below is the YAML of the `RabbitMQ` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: RabbitMQ
+metadata:
+  name: mg-rs
+  namespace: demo
+spec:
+  version: "4.4.26"
+  replicaSet:
+    name: "replicaset"
+  replicas: 3
+  storageType: Durable
+  storage:
+    storageClassName: topolvm-provisioner
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
+```
+
+Let's create the `RabbitMQ` CRO we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/RabbitMQ/autoscaling/storage/mg-rs.yaml
+RabbitMQ.kubedb.com/mg-rs created
+```
+
+Now, wait until `mg-rs` has status `Ready`. i.e.,
+
+```bash
+$ kubectl get mg -n demo
+NAME    VERSION   STATUS   AGE
+mg-rs   4.4.26    Ready    2m53s
+```
+
+Let's check the volume size from the statefulset and from the persistent volumes,
+
+```bash
+$ kubectl get sts -n demo mg-rs -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
+"1Gi"
+
+$ kubectl get pv -n demo
+NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                  STORAGECLASS          REASON   AGE
+pvc-b16daa50-83fc-4d25-b553-4a25f13166d5   1Gi        RWO            Delete           Bound    demo/datadir-mg-rs-0   topolvm-provisioner            2m12s
+pvc-d4616bef-359d-4b73-ab9f-38c24aaaec8c   1Gi        RWO            Delete           Bound    demo/datadir-mg-rs-1   topolvm-provisioner            61s
+pvc-ead21204-3dc7-453c-8121-d2fe48b1c3e2   1Gi        RWO            Delete           Bound    demo/datadir-mg-rs-2   topolvm-provisioner            18s
+```
+
+You can see the statefulset has 1GB storage, and the capacity of all the persistent volumes is also 1GB.
+
+We are now ready to apply the `RabbitMQAutoscaler` CRO to set up storage autoscaling for this database.
+
+### Storage Autoscaling
+
+Here, we are going to set up storage autoscaling using a RabbitMQAutoscaler Object.
+
+#### Create RabbitMQAutoscaler Object
+
+In order to set up storage autoscaling for this replicaset database, we have to create a `RabbitMQAutoscaler` CRO with our desired configuration. Below is the YAML of the `RabbitMQAutoscaler` object that we are going to create,
+
+```yaml
+apiVersion: autoscaling.kubedb.com/v1alpha1
+kind: RabbitMQAutoscaler
+metadata:
+  name: mg-as-rs
+  namespace: demo
+spec:
+  databaseRef:
+    name: mg-rs
+  storage:
+    replicaSet:
+      expansionMode: "Online"
+      trigger: "On"
+      usageThreshold: 60
+      scalingThreshold: 50
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing the storage autoscaling operation on the `mg-rs` database.
+- `spec.storage.replicaSet.trigger` specifies that storage autoscaling is enabled for this database. +- `spec.storage.replicaSet.usageThreshold` specifies storage usage threshold, if storage usage exceeds `60%` then storage autoscaling will be triggered. +- `spec.storage.replicaSet.scalingThreshold` specifies the scaling threshold. Storage will be scaled to `50%` of the current amount. +- It has another field `spec.storage.replicaSet.expansionMode` to set the opsRequest volumeExpansionMode, which support two values: `Online` & `Offline`. Default value is `Online`. + +Let's create the `RabbitMQAutoscaler` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/RabbitMQ/autoscaling/storage/mg-as-rs.yaml +RabbitMQautoscaler.autoscaling.kubedb.com/mg-as-rs created +``` + +#### Storage Autoscaling is set up successfully + +Let's check that the `RabbitMQautoscaler` resource is created successfully, + +```bash +$ kubectl get RabbitMQautoscaler -n demo +NAME AGE +mg-as-rs 20s + +$ kubectl describe RabbitMQautoscaler mg-as-rs -n demo +Name: mg-as-rs +Namespace: demo +Labels: +Annotations: +API Version: autoscaling.kubedb.com/v1alpha1 +Kind: RabbitMQAutoscaler +Metadata: + Creation Timestamp: 2021-03-08T14:11:46Z + Generation: 1 + Managed Fields: + API Version: autoscaling.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:annotations: + .: + f:kubectl.kubernetes.io/last-applied-configuration: + f:spec: + .: + f:databaseRef: + .: + f:name: + f:storage: + .: + f:replicaSet: + .: + f:scalingThreshold: + f:trigger: + f:usageThreshold: + Manager: kubectl-client-side-apply + Operation: Update + Time: 2021-03-08T14:11:46Z + Resource Version: 152149 + Self Link: /apis/autoscaling.kubedb.com/v1alpha1/namespaces/demo/RabbitMQautoscalers/mg-as-rs + UID: a0dab64d-e7c4-4819-8ffe-360c70231577 +Spec: + Database Ref: + Name: mg-rs + Storage: + Replica Set: + Scaling Threshold: 50 + Trigger: On + Usage Threshold: 60 +Events: +``` +So, the `RabbitMQautoscaler` resource is created successfully. + +Now, for this demo, we are going to manually fill up the persistent volume to exceed the `usageThreshold` using `dd` command to see if storage autoscaling is working or not. + +Let's exec into the database pod and fill the database volume using the following commands: + +```bash +$ kubectl exec -it -n demo mg-rs-0 -- bash +root@mg-rs-0:/# df -h /data/db +Filesystem Size Used Avail Use% Mounted on +/dev/topolvm/760cb655-91fe-4497-ab4a-a771aa53ece4 1014M 335M 680M 33% /data/db +root@mg-rs-0:/# dd if=/dev/zero of=/data/db/file.img bs=500M count=1 +1+0 records in +1+0 records out +524288000 bytes (524 MB, 500 MiB) copied, 0.482378 s, 1.1 GB/s +root@mg-rs-0:/# df -h /data/db +Filesystem Size Used Avail Use% Mounted on +/dev/topolvm/760cb655-91fe-4497-ab4a-a771aa53ece4 1014M 835M 180M 83% /data/db +``` + +So, from the above output we can see that the storage usage is 83%, which exceeded the `usageThreshold` 60%. + +Let's watch the `RabbitMQopsrequest` in the demo namespace to see if any `RabbitMQopsrequest` object is created. After some time you'll see that a `RabbitMQopsrequest` of type `VolumeExpansion` will be created based on the `scalingThreshold`. + +```bash +$ watch kubectl get RabbitMQopsrequest -n demo +Every 2.0s: kubectl get RabbitMQopsrequest -n demo +NAME TYPE STATUS AGE +mops-mg-rs-mft11m VolumeExpansion Progressing 10s +``` + +Let's wait for the ops request to become successful. 
+ +```bash +$ watch kubectl get RabbitMQopsrequest -n demo +Every 2.0s: kubectl get RabbitMQopsrequest -n demo +NAME TYPE STATUS AGE +mops-mg-rs-mft11m VolumeExpansion Successful 97s +``` + +We can see from the above output that the `RabbitMQOpsRequest` has succeeded. If we describe the `RabbitMQOpsRequest` we will get an overview of the steps that were followed to expand the volume of the database. + +```bash +$ kubectl describe RabbitMQopsrequest -n demo mops-mg-rs-mft11m +Name: mops-mg-rs-mft11m +Namespace: demo +Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=mg-rs + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=RabbitMQs.kubedb.com +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: RabbitMQOpsRequest +Metadata: + Creation Timestamp: 2021-03-08T14:15:52Z + Generation: 1 + Managed Fields: + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:labels: + .: + f:app.kubernetes.io/component: + f:app.kubernetes.io/instance: + f:app.kubernetes.io/managed-by: + f:app.kubernetes.io/name: + f:ownerReferences: + f:spec: + .: + f:databaseRef: + .: + f:name: + f:type: + f:volumeExpansion: + .: + f:replicaSet: + Manager: kubedb-autoscaler + Operation: Update + Time: 2021-03-08T14:15:52Z + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:status: + .: + f:conditions: + f:observedGeneration: + f:phase: + Manager: kubedb-enterprise + Operation: Update + Time: 2021-03-08T14:15:52Z + Owner References: + API Version: autoscaling.kubedb.com/v1alpha1 + Block Owner Deletion: true + Controller: true + Kind: RabbitMQAutoscaler + Name: mg-as-rs + UID: a0dab64d-e7c4-4819-8ffe-360c70231577 + Resource Version: 153496 + Self Link: /apis/ops.kubedb.com/v1alpha1/namespaces/demo/RabbitMQopsrequests/mops-mg-rs-mft11m + UID: 84567b84-6de4-4658-b0d2-2c374e03e63d +Spec: + Database Ref: + Name: mg-rs + Type: VolumeExpansion + Volume Expansion: + Replica Set: 1594884096 +Status: + Conditions: + Last Transition Time: 2021-03-08T14:15:52Z + Message: RabbitMQ ops request is expanding volume of database + Observed Generation: 1 + Reason: VolumeExpansion + Status: True + Type: VolumeExpansion + Last Transition Time: 2021-03-08T14:17:02Z + Message: Successfully Expanded Volume + Observed Generation: 1 + Reason: ReplicasetVolumeExpansion + Status: True + Type: ReplicasetVolumeExpansion + Last Transition Time: 2021-03-08T14:17:07Z + Message: Successfully Expanded Volume + Observed Generation: 1 + Reason: + Status: True + Type: + Last Transition Time: 2021-03-08T14:17:12Z + Message: StatefulSet is recreated + Observed Generation: 1 + Reason: ReadyStatefulSets + Status: True + Type: ReadyStatefulSets + Last Transition Time: 2021-03-08T14:17:12Z + Message: Successfully Expanded Volume + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal PauseDatabase 2m36s KubeDB Ops-manager operator Pausing RabbitMQ demo/mg-rs + Normal PauseDatabase 2m36s KubeDB Ops-manager operator Successfully paused RabbitMQ demo/mg-rs + Normal ReplicasetVolumeExpansion 86s KubeDB Ops-manager operator Successfully Expanded Volume + Normal 81s KubeDB Ops-manager operator Successfully Expanded Volume + Normal ResumeDatabase 81s KubeDB Ops-manager operator Resuming RabbitMQ demo/mg-rs + Normal ResumeDatabase 81s KubeDB Ops-manager operator Successfully resumed RabbitMQ demo/mg-rs + Normal 
ReadyStatefulSets 76s KubeDB Ops-manager operator StatefulSet is recreated + Normal Successful 76s KubeDB Ops-manager operator Successfully Expanded Volume +``` + +Now, we are going to verify from the `Statefulset`, and the `Persistent Volume` whether the volume of the replicaset database has expanded to meet the desired state, Let's check, + +```bash +$ kubectl get sts -n demo mg-rs -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage' +"1594884096" +$ kubectl get pv -n demo +NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE +pvc-b16daa50-83fc-4d25-b553-4a25f13166d5 2Gi RWO Delete Bound demo/datadir-mg-rs-0 topolvm-provisioner 11m +pvc-d4616bef-359d-4b73-ab9f-38c24aaaec8c 2Gi RWO Delete Bound demo/datadir-mg-rs-1 topolvm-provisioner 10m +pvc-ead21204-3dc7-453c-8121-d2fe48b1c3e2 2Gi RWO Delete Bound demo/datadir-mg-rs-2 topolvm-provisioner 9m52s +``` + +The above output verifies that we have successfully autoscaled the volume of the RabbitMQ replicaset database. + +## Cleaning Up + +To clean up the Kubernetes resources created by this tutorial, run: + +```bash +kubectl delete mg -n demo mg-rs +kubectl delete RabbitMQautoscaler -n demo mg-as-rs +``` diff --git a/docs/guides/rabbitmq/autoscaler/storage/sharding.md b/docs/guides/rabbitmq/autoscaler/storage/sharding.md new file mode 100644 index 0000000000..76e8aec34c --- /dev/null +++ b/docs/guides/rabbitmq/autoscaler/storage/sharding.md @@ -0,0 +1,412 @@ +--- +title: RabbitMQ Shard Autoscaling +menu: + docs_{{ .version }}: + identifier: mg-storage-auto-scaling-shard + name: Sharding + parent: mg-storage-auto-scaling + weight: 25 +menu_name: docs_{{ .version }} +section_menu_id: guides +--- + +> New to KubeDB? Please start [here](/docs/README.md). + +# Storage Autoscaling of a RabbitMQ Sharded Database + +This guide will show you how to use `KubeDB` to autoscale the storage of a RabbitMQ Sharded database. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. + +- Install `KubeDB` Provisioner, Ops-manager and Autoscaler operator in your cluster following the steps [here](/docs/setup/README.md). + +- Install `Metrics Server` from [here](https://github.com/kubernetes-sigs/metrics-server#installation) + +- Install Prometheus from [here](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack) + +- You must have a `StorageClass` that supports volume expansion. + +- You should be familiar with the following `KubeDB` concepts: + - [RabbitMQ](/docs/guides/RabbitMQ/concepts/RabbitMQ.md) + - [RabbitMQAutoscaler](/docs/guides/RabbitMQ/concepts/autoscaler.md) + - [RabbitMQOpsRequest](/docs/guides/RabbitMQ/concepts/opsrequest.md) + - [Storage Autoscaling Overview](/docs/guides/RabbitMQ/autoscaler/storage/overview.md) + +To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial. + +```bash +$ kubectl create ns demo +namespace/demo created +``` + +> **Note:** YAML files used in this tutorial are stored in [docs/examples/RabbitMQ](/docs/examples/RabbitMQ) directory of [kubedb/docs](https://github.com/kubedb/docs) repository. + +## Storage Autoscaling of Sharded Database + +At first verify that your cluster has a storage class, that supports volume expansion. 
Let's check, + +```bash +$ kubectl get storageclass +NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE +standard (default) rancher.io/local-path Delete WaitForFirstConsumer false 9h +topolvm-provisioner topolvm.cybozu.com Delete WaitForFirstConsumer true 9h +``` + +We can see from the output the `topolvm-provisioner` storage class has `ALLOWVOLUMEEXPANSION` field as true. So, this storage class supports volume expansion. We can use it. You can install topolvm from [here](https://github.com/topolvm/topolvm) + +Now, we are going to deploy a `RabbitMQ` sharded database using a supported version by `KubeDB` operator. Then we are going to apply `RabbitMQAutoscaler` to set up autoscaling. + +#### Deploy RabbitMQ Sharded Database + +In this section, we are going to deploy a RabbitMQ sharded database with version `4.4.26`. Then, in the next section we will set up autoscaling for this database using `RabbitMQAutoscaler` CRD. Below is the YAML of the `RabbitMQ` CR that we are going to create, + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: RabbitMQ +metadata: + name: mg-sh + namespace: demo +spec: + version: "4.4.26" + storageType: Durable + shardTopology: + configServer: + storage: + storageClassName: topolvm-provisioner + resources: + requests: + storage: 1Gi + replicas: 3 + mongos: + replicas: 2 + shard: + storage: + storageClassName: topolvm-provisioner + resources: + requests: + storage: 1Gi + replicas: 3 + shards: 2 + terminationPolicy: WipeOut +``` + +Let's create the `RabbitMQ` CRO we have shown above, + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/RabbitMQ/autoscaling/storage/mg-sh.yaml +RabbitMQ.kubedb.com/mg-sh created +``` + +Now, wait until `mg-sh` has status `Ready`. i.e, + +```bash +$ kubectl get mg -n demo +NAME VERSION STATUS AGE +mg-sh 4.4.26 Ready 3m51s +``` + +Let's check volume size from one of the shard statefulset, and from the persistent volume, + +```bash +$ kubectl get sts -n demo mg-sh-shard0 -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage' +"1Gi" + +$ kubectl get pv -n demo +NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE +pvc-031836c6-95ae-4015-938c-da183c205828 1Gi RWO Delete Bound demo/datadir-mg-sh-configsvr-0 topolvm-provisioner 5m1s +pvc-2515233f-0f7d-4d0d-8b45-97a3cb9d4488 1Gi RWO Delete Bound demo/datadir-mg-sh-shard0-2 topolvm-provisioner 3m44s +pvc-35f73708-3c11-4ead-a60b-e1679a294b81 1Gi RWO Delete Bound demo/datadir-mg-sh-shard0-0 topolvm-provisioner 5m +pvc-4b329feb-8c92-4605-a37e-c02b3499e311 1Gi RWO Delete Bound demo/datadir-mg-sh-configsvr-2 topolvm-provisioner 3m55s +pvc-52490270-1355-4045-b2a1-872a671ab006 1Gi RWO Delete Bound demo/datadir-mg-sh-configsvr-1 topolvm-provisioner 4m28s +pvc-80dc91d3-f56f-4037-b6e1-f69e13fb434c 1Gi RWO Delete Bound demo/datadir-mg-sh-shard1-1 topolvm-provisioner 4m26s +pvc-c1965a32-7471-4885-ac52-f9eab056d48e 1Gi RWO Delete Bound demo/datadir-mg-sh-shard1-2 topolvm-provisioner 3m57s +pvc-c838a27d-c75d-4caa-9c1d-456af3bfaba0 1Gi RWO Delete Bound demo/datadir-mg-sh-shard1-0 topolvm-provisioner 4m59s +pvc-d47f19be-f206-41c5-a0b1-5022776fea2f 1Gi RWO Delete Bound demo/datadir-mg-sh-shard0-1 topolvm-provisioner 4m25s +``` + +You can see the statefulset has 1GB storage, and the capacity of all the persistent volume is also 1GB. + +We are now ready to apply the `RabbitMQAutoscaler` CRO to set up storage autoscaling for this database. 
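+
+Before applying the autoscaler, it can be handy to record the current PVC storage requests so you can compare them after expansion. A quick sketch (the label selector below assumes KubeDB's standard `app.kubernetes.io/instance` label on the PVCs; verify it against your cluster first):
+
+```bash
+# List the current storage requests of the mg-sh PVCs
+# (the instance label is an assumption; check with: kubectl get pvc -n demo --show-labels)
+kubectl get pvc -n demo -l app.kubernetes.io/instance=mg-sh \
+  -o custom-columns=NAME:.metadata.name,REQUEST:.spec.resources.requests.storage
+```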
+
+### Storage Autoscaling
+
+Here, we are going to set up storage autoscaling using a RabbitMQAutoscaler Object.
+
+#### Create RabbitMQAutoscaler Object
+
+In order to set up storage autoscaling for this sharded database, we have to create a `RabbitMQAutoscaler` CRO with our desired configuration. Below is the YAML of the `RabbitMQAutoscaler` object that we are going to create,
+
+```yaml
+apiVersion: autoscaling.kubedb.com/v1alpha1
+kind: RabbitMQAutoscaler
+metadata:
+  name: mg-as-sh
+  namespace: demo
+spec:
+  databaseRef:
+    name: mg-sh
+  storage:
+    shard:
+      expansionMode: "Online"
+      trigger: "On"
+      usageThreshold: 60
+      scalingThreshold: 50
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing the storage autoscaling operation on the `mg-sh` database.
+- `spec.storage.shard.trigger` specifies that storage autoscaling is enabled for this database.
+- `spec.storage.shard.usageThreshold` specifies the storage usage threshold; if storage usage exceeds `60%` then storage autoscaling will be triggered.
+- `spec.storage.shard.scalingThreshold` specifies the scaling threshold. Storage will be scaled to `50%` of the current amount.
+- It has another field `spec.storage.shard.expansionMode` to set the opsRequest volumeExpansionMode, which supports two values: `Online` & `Offline`. Default value is `Online`.
+
+> Note: In this demo we are only setting up the storage autoscaling for the shard pods, that's why we only specified the shard section of the autoscaler. You can enable autoscaling for the configServer pods in the same YAML by specifying the `spec.storage.configServer` section, similar to the `spec.storage.shard` section we have configured in this demo.
+
+
+Let's create the `RabbitMQAutoscaler` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/RabbitMQ/autoscaling/storage/mg-as-sh.yaml
+RabbitMQautoscaler.autoscaling.kubedb.com/mg-as-sh created
+```
+
+#### Storage Autoscaling is set up successfully
+
+Let's check that the `RabbitMQautoscaler` resource is created successfully,
+
+```bash
+$ kubectl get RabbitMQautoscaler -n demo
+NAME       AGE
+mg-as-sh   20s
+
+$ kubectl describe RabbitMQautoscaler mg-as-sh -n demo
+Name:         mg-as-sh
+Namespace:    demo
+Labels:       <none>
+Annotations:  <none>
+API Version:  autoscaling.kubedb.com/v1alpha1
+Kind:         RabbitMQAutoscaler
+Metadata:
+  Creation Timestamp:  2021-03-08T14:26:06Z
+  Generation:          1
+  Managed Fields:
+    API Version:  autoscaling.kubedb.com/v1alpha1
+    Fields Type:  FieldsV1
+    fieldsV1:
+      f:metadata:
+        f:annotations:
+          .:
+          f:kubectl.kubernetes.io/last-applied-configuration:
+      f:spec:
+        .:
+        f:databaseRef:
+          .:
+          f:name:
+        f:storage:
+          .:
+          f:shard:
+            .:
+            f:scalingThreshold:
+            f:trigger:
+            f:usageThreshold:
+    Manager:         kubectl-client-side-apply
+    Operation:       Update
+    Time:            2021-03-08T14:26:06Z
+  Resource Version:  156292
+  Self Link:         /apis/autoscaling.kubedb.com/v1alpha1/namespaces/demo/RabbitMQautoscalers/mg-as-sh
+  UID:               203e332f-bdfe-470f-a429-a7b60c7be2ee
+Spec:
+  Database Ref:
+    Name:  mg-sh
+  Storage:
+    Shard:
+      Scaling Threshold:  50
+      Trigger:            On
+      Usage Threshold:    60
+Events:                   <none>
+```
+So, the `RabbitMQautoscaler` resource is created successfully.
+
+Now, for this demo, we are going to manually fill up one of the persistent volumes to exceed the `usageThreshold` using the `dd` command to see if storage autoscaling is working or not.
+ +Let's exec into the database pod and fill the database volume using the following commands: + +```bash +$ kubectl exec -it -n demo mg-sh-shard0-0 -- bash +root@mg-sh-shard0-0:/# df -h /data/db +Filesystem Size Used Avail Use% Mounted on +/dev/topolvm/ad11042f-f4cc-4dfc-9680-2afbbb199d48 1014M 335M 680M 34% /data/db +root@mg-sh-shard0-0:/# dd if=/dev/zero of=/data/db/file.img bs=500M count=1 +1+0 records in +1+0 records out +524288000 bytes (524 MB, 500 MiB) copied, 0.595358 s, 881 MB/s +root@mg-sh-shard0-0:/# df -h /data/db +Filesystem Size Used Avail Use% Mounted on +/dev/topolvm/ad11042f-f4cc-4dfc-9680-2afbbb199d48 1014M 837M 178M 83% /data/db +``` + +So, from the above output we can see that the storage usage is 83%, which exceeded the `usageThreshold` 60%. + +Let's watch the `RabbitMQopsrequest` in the demo namespace to see if any `RabbitMQopsrequest` object is created. After some time you'll see that a `RabbitMQopsrequest` of type `VolumeExpansion` will be created based on the `scalingThreshold`. + +```bash +$ watch kubectl get RabbitMQopsrequest -n demo +Every 2.0s: kubectl get RabbitMQopsrequest -n demo +NAME TYPE STATUS AGE +mops-mg-sh-ba5ikn VolumeExpansion Progressing 41s +``` + +Let's wait for the ops request to become successful. + +```bash +$ watch kubectl get RabbitMQopsrequest -n demo +Every 2.0s: kubectl get RabbitMQopsrequest -n demo +NAME TYPE STATUS AGE +mops-mg-sh-ba5ikn VolumeExpansion Successful 2m54s +``` + +We can see from the above output that the `RabbitMQOpsRequest` has succeeded. If we describe the `RabbitMQOpsRequest` we will get an overview of the steps that were followed to expand the volume of the database. + +```bash +$ kubectl describe RabbitMQopsrequest -n demo mops-mg-sh-ba5ikn +Name: mops-mg-sh-ba5ikn +Namespace: demo +Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=mg-sh + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=RabbitMQs.kubedb.com +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: RabbitMQOpsRequest +Metadata: + Creation Timestamp: 2021-03-08T14:31:52Z + Generation: 1 + Managed Fields: + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:labels: + .: + f:app.kubernetes.io/component: + f:app.kubernetes.io/instance: + f:app.kubernetes.io/managed-by: + f:app.kubernetes.io/name: + f:ownerReferences: + f:spec: + .: + f:databaseRef: + .: + f:name: + f:type: + f:volumeExpansion: + .: + f:shard: + Manager: kubedb-autoscaler + Operation: Update + Time: 2021-03-08T14:31:52Z + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:status: + .: + f:conditions: + f:observedGeneration: + f:phase: + Manager: kubedb-enterprise + Operation: Update + Time: 2021-03-08T14:31:52Z + Owner References: + API Version: autoscaling.kubedb.com/v1alpha1 + Block Owner Deletion: true + Controller: true + Kind: RabbitMQAutoscaler + Name: mg-as-sh + UID: 203e332f-bdfe-470f-a429-a7b60c7be2ee + Resource Version: 158488 + Self Link: /apis/ops.kubedb.com/v1alpha1/namespaces/demo/RabbitMQopsrequests/mops-mg-sh-ba5ikn + UID: c56236c2-5b64-4775-ba5a-35727b96a414 +Spec: + Database Ref: + Name: mg-sh + Type: VolumeExpansion + Volume Expansion: + Shard: 1594884096 +Status: + Conditions: + Last Transition Time: 2021-03-08T14:31:52Z + Message: RabbitMQ ops request is expanding volume of database + Observed Generation: 1 + Reason: VolumeExpansion + Status: True + Type: VolumeExpansion + Last Transition Time: 2021-03-08T14:34:32Z + Message: Successfully Expanded Volume + 
Observed Generation: 1 + Reason: ShardVolumeExpansion + Status: True + Type: ShardVolumeExpansion + Last Transition Time: 2021-03-08T14:34:37Z + Message: Successfully Expanded Volume + Observed Generation: 1 + Reason: + Status: True + Type: + Last Transition Time: 2021-03-08T14:34:42Z + Message: StatefulSet is recreated + Observed Generation: 1 + Reason: ReadyStatefulSets + Status: True + Type: ReadyStatefulSets + Last Transition Time: 2021-03-08T14:34:42Z + Message: Successfully Expanded Volume + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal PauseDatabase 3m21s KubeDB Ops-manager operator Pausing RabbitMQ demo/mg-sh + Normal PauseDatabase 3m21s KubeDB Ops-manager operator Successfully paused RabbitMQ demo/mg-sh + Normal ShardVolumeExpansion 41s KubeDB Ops-manager operator Successfully Expanded Volume + Normal 36s KubeDB Ops-manager operator Successfully Expanded Volume + Normal ResumeDatabase 36s KubeDB Ops-manager operator Resuming RabbitMQ demo/mg-sh + Normal ResumeDatabase 36s KubeDB Ops-manager operator Successfully resumed RabbitMQ demo/mg-sh + Normal ReadyStatefulSets 31s KubeDB Ops-manager operator StatefulSet is recreated + Normal Successful 31s KubeDB Ops-manager operator Successfully Expanded Volume +``` + +Now, we are going to verify from the `Statefulset`, and the `Persistent Volume` whether the volume of the shard nodes of the database has expanded to meet the desired state, Let's check, + +```bash +$ kubectl get sts -n demo mg-sh-shard0 -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage' +"1594884096" +$ kubectl get pv -n demo +NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE +pvc-031836c6-95ae-4015-938c-da183c205828 1Gi RWO Delete Bound demo/datadir-mg-sh-configsvr-0 topolvm-provisioner 13m +pvc-2515233f-0f7d-4d0d-8b45-97a3cb9d4488 2Gi RWO Delete Bound demo/datadir-mg-sh-shard0-2 topolvm-provisioner 11m +pvc-35f73708-3c11-4ead-a60b-e1679a294b81 2Gi RWO Delete Bound demo/datadir-mg-sh-shard0-0 topolvm-provisioner 13m +pvc-4b329feb-8c92-4605-a37e-c02b3499e311 1Gi RWO Delete Bound demo/datadir-mg-sh-configsvr-2 topolvm-provisioner 11m +pvc-52490270-1355-4045-b2a1-872a671ab006 1Gi RWO Delete Bound demo/datadir-mg-sh-configsvr-1 topolvm-provisioner 12m +pvc-80dc91d3-f56f-4037-b6e1-f69e13fb434c 2Gi RWO Delete Bound demo/datadir-mg-sh-shard1-1 topolvm-provisioner 12m +pvc-c1965a32-7471-4885-ac52-f9eab056d48e 2Gi RWO Delete Bound demo/datadir-mg-sh-shard1-2 topolvm-provisioner 11m +pvc-c838a27d-c75d-4caa-9c1d-456af3bfaba0 2Gi RWO Delete Bound demo/datadir-mg-sh-shard1-0 topolvm-provisioner 12m +pvc-d47f19be-f206-41c5-a0b1-5022776fea2f 2Gi RWO Delete Bound demo/datadir-mg-sh-shard0-1 topolvm-provisioner 12m +``` + +The above output verifies that we have successfully autoscaled the volume of the shard nodes of this RabbitMQ database. 
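+
+As a final sanity check, you can re-run `df` inside one of the shard pods to confirm that usage has dropped back below the `usageThreshold`. The output below is illustrative, not captured from the demo cluster:
+
+```bash
+$ kubectl exec -it -n demo mg-sh-shard0-0 -- df -h /data/db
+# Expected shape after expansion (illustrative values):
+# Filesystem                                          Size  Used Avail Use% Mounted on
+# /dev/topolvm/ad11042f-f4cc-4dfc-9680-2afbbb199d48   2.0G  837M  1.2G  42% /data/db
+```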
+ +## Cleaning Up + +To clean up the Kubernetes resources created by this tutorial, run: + +```bash +kubectl delete mg -n demo mg-sh +kubectl delete RabbitMQautoscaler -n demo mg-as-sh +``` diff --git a/docs/guides/rabbitmq/autoscaler/storage/standalone.md b/docs/guides/rabbitmq/autoscaler/storage/standalone.md new file mode 100644 index 0000000000..fd375b288d --- /dev/null +++ b/docs/guides/rabbitmq/autoscaler/storage/standalone.md @@ -0,0 +1,380 @@ +--- +title: RabbitMQ Standalone Autoscaling +menu: + docs_{{ .version }}: + identifier: mg-storage-auto-scaling-standalone + name: Standalone + parent: mg-storage-auto-scaling + weight: 15 +menu_name: docs_{{ .version }} +section_menu_id: guides +--- + +> New to KubeDB? Please start [here](/docs/README.md). + +# Storage Autoscaling of a RabbitMQ Standalone Database + +This guide will show you how to use `KubeDB` to autoscale the storage of a RabbitMQ standalone database. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. + +- Install `KubeDB` Provisioner, Ops-manager and Autoscaler operator in your cluster following the steps [here](/docs/setup/README.md). + +- Install `Metrics Server` from [here](https://github.com/kubernetes-sigs/metrics-server#installation) + +- Install Prometheus from [here](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack) + +- You must have a `StorageClass` that supports volume expansion. + +- You should be familiar with the following `KubeDB` concepts: + - [RabbitMQ](/docs/guides/RabbitMQ/concepts/RabbitMQ.md) + - [RabbitMQAutoscaler](/docs/guides/RabbitMQ/concepts/autoscaler.md) + - [RabbitMQOpsRequest](/docs/guides/RabbitMQ/concepts/opsrequest.md) + - [Storage Autoscaling Overview](/docs/guides/RabbitMQ/autoscaler/storage/overview.md) + +To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial. + +```bash +$ kubectl create ns demo +namespace/demo created +``` + +> **Note:** YAML files used in this tutorial are stored in [docs/examples/RabbitMQ](/docs/examples/RabbitMQ) directory of [kubedb/docs](https://github.com/kubedb/docs) repository. + +## Storage Autoscaling of Standalone Database + +At first verify that your cluster has a storage class, that supports volume expansion. Let's check, + +```bash +$ kubectl get storageclass +NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE +standard (default) rancher.io/local-path Delete WaitForFirstConsumer false 9h +topolvm-provisioner topolvm.cybozu.com Delete WaitForFirstConsumer true 9h +``` + +We can see from the output the `topolvm-provisioner` storage class has `ALLOWVOLUMEEXPANSION` field as true. So, this storage class supports volume expansion. We can use it. You can install topolvm from [here](https://github.com/topolvm/topolvm) + +Now, we are going to deploy a `RabbitMQ` standalone using a supported version by `KubeDB` operator. Then we are going to apply `RabbitMQAutoscaler` to set up autoscaling. + +#### Deploy RabbitMQ standalone + +In this section, we are going to deploy a RabbitMQ standalone database with version `4.4.26`. Then, in the next section we will set up autoscaling for this database using `RabbitMQAutoscaler` CRD. 
Below is the YAML of the `RabbitMQ` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: RabbitMQ
+metadata:
+  name: mg-standalone
+  namespace: demo
+spec:
+  version: "4.4.26"
+  storageType: Durable
+  storage:
+    storageClassName: topolvm-provisioner
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
+```
+
+Let's create the `RabbitMQ` CRO we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/RabbitMQ/autoscaling/storage/mg-standalone.yaml
+RabbitMQ.kubedb.com/mg-standalone created
+```
+
+Now, wait until `mg-standalone` has status `Ready`. i.e.,
+
+```bash
+$ kubectl get mg -n demo
+NAME            VERSION   STATUS   AGE
+mg-standalone   4.4.26    Ready    2m53s
+```
+
+Let's check the volume size from the statefulset and from the persistent volume,
+
+```bash
+$ kubectl get sts -n demo mg-standalone -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
+"1Gi"
+
+$ kubectl get pv -n demo
+NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                          STORAGECLASS          REASON   AGE
+pvc-cf469ed8-a89a-49ca-bf7c-8c76b7889428   1Gi        RWO            Delete           Bound    demo/datadir-mg-standalone-0   topolvm-provisioner            7m41s
+```
+
+You can see the statefulset has 1GB storage, and the capacity of the persistent volume is also 1GB.
+
+We are now ready to apply the `RabbitMQAutoscaler` CRO to set up storage autoscaling for this database.
+
+### Storage Autoscaling
+
+Here, we are going to set up storage autoscaling using a RabbitMQAutoscaler Object.
+
+#### Create RabbitMQAutoscaler Object
+
+In order to set up storage autoscaling for this standalone database, we have to create a `RabbitMQAutoscaler` CRO with our desired configuration. Below is the YAML of the `RabbitMQAutoscaler` object that we are going to create,
+
+```yaml
+apiVersion: autoscaling.kubedb.com/v1alpha1
+kind: RabbitMQAutoscaler
+metadata:
+  name: mg-as
+  namespace: demo
+spec:
+  databaseRef:
+    name: mg-standalone
+  storage:
+    standalone:
+      expansionMode: "Online"
+      trigger: "On"
+      usageThreshold: 60
+      scalingThreshold: 50
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing the storage autoscaling operation on the `mg-standalone` database.
+- `spec.storage.standalone.trigger` specifies that storage autoscaling is enabled for this database.
+- `spec.storage.standalone.usageThreshold` specifies the storage usage threshold; if storage usage exceeds `60%` then storage autoscaling will be triggered.
+- `spec.storage.standalone.scalingThreshold` specifies the scaling threshold. Storage will be scaled to `50%` of the current amount.
+- It has another field `spec.storage.standalone.expansionMode` to set the opsRequest volumeExpansionMode, which supports two values: `Online` & `Offline`. Default value is `Online`.
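+
+For reference, if your storage class cannot expand a volume while it is mounted, you can switch to offline expansion. Below is a minimal sketch of the same autoscaler with only `expansionMode` changed; note that offline expansion typically involves recreating the pod so the volume can be expanded while unmounted:
+
+```yaml
+apiVersion: autoscaling.kubedb.com/v1alpha1
+kind: RabbitMQAutoscaler
+metadata:
+  name: mg-as
+  namespace: demo
+spec:
+  databaseRef:
+    name: mg-standalone
+  storage:
+    standalone:
+      expansionMode: "Offline"  # pods are typically restarted during offline expansion
+      trigger: "On"
+      usageThreshold: 60
+      scalingThreshold: 50
+```
+
+For this demo, we will stick with the `Online` variant shown earlier.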
+
+Let's create the `RabbitMQAutoscaler` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/RabbitMQ/autoscaling/storage/mg-as-standalone.yaml
+RabbitMQautoscaler.autoscaling.kubedb.com/mg-as created
+```
+
+#### Storage Autoscaling is set up successfully
+
+Let's check that the `RabbitMQautoscaler` resource is created successfully,
+
+```bash
+$ kubectl get RabbitMQautoscaler -n demo
+NAME    AGE
+mg-as   102s
+
+$ kubectl describe RabbitMQautoscaler mg-as -n demo
+Name:         mg-as
+Namespace:    demo
+Labels:       <none>
+Annotations:  <none>
+API Version:  autoscaling.kubedb.com/v1alpha1
+Kind:         RabbitMQAutoscaler
+Metadata:
+  Creation Timestamp:  2021-03-08T12:58:01Z
+  Generation:          1
+  Managed Fields:
+    API Version:  autoscaling.kubedb.com/v1alpha1
+    Fields Type:  FieldsV1
+    fieldsV1:
+      f:metadata:
+        f:annotations:
+          .:
+          f:kubectl.kubernetes.io/last-applied-configuration:
+      f:spec:
+        .:
+        f:databaseRef:
+          .:
+          f:name:
+        f:storage:
+          .:
+          f:standalone:
+            .:
+            f:scalingThreshold:
+            f:trigger:
+            f:usageThreshold:
+    Manager:         kubectl-client-side-apply
+    Operation:       Update
+    Time:            2021-03-08T12:58:01Z
+  Resource Version:  134423
+  Self Link:         /apis/autoscaling.kubedb.com/v1alpha1/namespaces/demo/RabbitMQautoscalers/mg-as
+  UID:               999a2dc9-7eb7-4ed2-9e90-d3f8b21c091a
+Spec:
+  Database Ref:
+    Name:  mg-standalone
+  Storage:
+    Standalone:
+      Scaling Threshold:  50
+      Trigger:            On
+      Usage Threshold:    60
+Events:                   <none>
+```
+So, the `RabbitMQautoscaler` resource is created successfully.
+
+Now, for this demo, we are going to manually fill up the persistent volume to exceed the `usageThreshold` using the `dd` command to see if storage autoscaling is working or not.
+
+Let's exec into the database pod and fill the database volume using the following commands:
+
+```bash
+$ kubectl exec -it -n demo mg-standalone-0 -- bash
+root@mg-standalone-0:/# df -h /data/db
+Filesystem                                         Size  Used Avail Use% Mounted on
+/dev/topolvm/1df4ee9e-b900-4c0f-9d2c-8493fb30bdc0 1014M  334M  681M  33% /data/db
+root@mg-standalone-0:/# dd if=/dev/zero of=/data/db/file.img bs=500M count=1
+1+0 records in
+1+0 records out
+524288000 bytes (524 MB, 500 MiB) copied, 0.359202 s, 1.5 GB/s
+root@mg-standalone-0:/# df -h /data/db
+Filesystem                                         Size  Used Avail Use% Mounted on
+/dev/topolvm/1df4ee9e-b900-4c0f-9d2c-8493fb30bdc0 1014M  835M  180M  83% /data/db
+```
+
+So, from the above output we can see that the storage usage is 83%, which exceeds the `usageThreshold` of 60%.
+
+Let's watch the `RabbitMQopsrequest` in the demo namespace to see if any `RabbitMQopsrequest` object is created. After some time you'll see that a `RabbitMQopsrequest` of type `VolumeExpansion` will be created based on the `scalingThreshold`.
+
+```bash
+$ watch kubectl get RabbitMQopsrequest -n demo
+Every 2.0s: kubectl get RabbitMQopsrequest -n demo
+NAME                        TYPE              STATUS        AGE
+mops-mg-standalone-p27c11   VolumeExpansion   Progressing   26s
+```
+
+Let's wait for the ops request to become successful.
+
+```bash
+$ watch kubectl get RabbitMQopsrequest -n demo
+Every 2.0s: kubectl get RabbitMQopsrequest -n demo
+NAME                        TYPE              STATUS       AGE
+mops-mg-standalone-p27c11   VolumeExpansion   Successful   73s
+```
+
+We can see from the above output that the `RabbitMQOpsRequest` has succeeded. If we describe the `RabbitMQOpsRequest` we will get an overview of the steps that were followed to expand the volume of the database.
+ +```bash +$ kubectl describe RabbitMQopsrequest -n demo mops-mg-standalone-p27c11 +Name: mops-mg-standalone-p27c11 +Namespace: demo +Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=mg-standalone + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=RabbitMQs.kubedb.com +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: RabbitMQOpsRequest +Metadata: + Creation Timestamp: 2021-03-08T13:19:51Z + Generation: 1 + Managed Fields: + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:labels: + .: + f:app.kubernetes.io/component: + f:app.kubernetes.io/instance: + f:app.kubernetes.io/managed-by: + f:app.kubernetes.io/name: + f:ownerReferences: + f:spec: + .: + f:databaseRef: + .: + f:name: + f:type: + f:volumeExpansion: + .: + f:standalone: + Manager: kubedb-autoscaler + Operation: Update + Time: 2021-03-08T13:19:51Z + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:status: + .: + f:conditions: + f:observedGeneration: + f:phase: + Manager: kubedb-enterprise + Operation: Update + Time: 2021-03-08T13:19:52Z + Owner References: + API Version: autoscaling.kubedb.com/v1alpha1 + Block Owner Deletion: true + Controller: true + Kind: RabbitMQAutoscaler + Name: mg-as + UID: 999a2dc9-7eb7-4ed2-9e90-d3f8b21c091a + Resource Version: 139871 + Self Link: /apis/ops.kubedb.com/v1alpha1/namespaces/demo/RabbitMQopsrequests/mops-mg-standalone-p27c11 + UID: 9606485d-9dd8-4787-9c7c-61fc874c555e +Spec: + Database Ref: + Name: mg-standalone + Type: VolumeExpansion + Volume Expansion: + Standalone: 1594884096 +Status: + Conditions: + Last Transition Time: 2021-03-08T13:19:52Z + Message: RabbitMQ ops request is expanding volume of database + Observed Generation: 1 + Reason: VolumeExpansion + Status: True + Type: VolumeExpansion + Last Transition Time: 2021-03-08T13:20:47Z + Message: Successfully Expanded Volume + Observed Generation: 1 + Reason: StandaloneVolumeExpansion + Status: True + Type: StandaloneVolumeExpansion + Last Transition Time: 2021-03-08T13:20:52Z + Message: Successfully Expanded Volume + Observed Generation: 1 + Reason: + Status: True + Type: + Last Transition Time: 2021-03-08T13:20:57Z + Message: StatefulSet is recreated + Observed Generation: 1 + Reason: ReadyStatefulSets + Status: True + Type: ReadyStatefulSets + Last Transition Time: 2021-03-08T13:20:57Z + Message: Successfully Expanded Volume + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal PauseDatabase 110s KubeDB Ops-manager operator Pausing RabbitMQ demo/mg-standalone + Normal PauseDatabase 110s KubeDB Ops-manager operator Successfully paused RabbitMQ demo/mg-standalone + Normal StandaloneVolumeExpansion 55s KubeDB Ops-manager operator Successfully Expanded Volume + Normal 50s KubeDB Ops-manager operator Successfully Expanded Volume + Normal ResumeDatabase 50s KubeDB Ops-manager operator Resuming RabbitMQ demo/mg-standalone + Normal ResumeDatabase 50s KubeDB Ops-manager operator Successfully resumed RabbitMQ demo/mg-standalone + Normal ReadyStatefulSets 45s KubeDB Ops-manager operator StatefulSet is recreated + Normal Successful 45s KubeDB Ops-manager operator Successfully Expanded Volume +``` + +Now, we are going to verify from the `Statefulset`, and the `Persistent Volume` whether the volume of the standalone database has expanded to meet the desired state, Let's check, + 
+```bash +$ kubectl get sts -n demo mg-standalone -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage' +"1594884096" +$ kubectl get pv -n demo +NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE +pvc-cf469ed8-a89a-49ca-bf7c-8c76b7889428 2Gi RWO Delete Bound demo/datadir-mg-standalone-0 topolvm-provisioner 26m +``` + +The above output verifies that we have successfully autoscaled the volume of the RabbitMQ standalone database. + +## Cleaning Up + +To clean up the Kubernetes resources created by this tutorial, run: + +```bash +kubectl delete mg -n demo mg-standalone +kubectl delete RabbitMQautoscaler -n demo mg-as +``` diff --git a/docs/guides/rabbitmq/concepts/_index.md b/docs/guides/rabbitmq/concepts/_index.md new file mode 100755 index 0000000000..c7018e7ceb --- /dev/null +++ b/docs/guides/rabbitmq/concepts/_index.md @@ -0,0 +1,10 @@ +--- +title: RabbitMQ Concepts +menu: + docs_{{ .version }}: + identifier: rm-concepts-guides + name: Concepts + parent: rm-guides + weight: 20 +menu_name: docs_{{ .version }} +--- diff --git a/docs/guides/rabbitmq/concepts/appbinding.md b/docs/guides/rabbitmq/concepts/appbinding.md new file mode 100644 index 0000000000..ec49c19f59 --- /dev/null +++ b/docs/guides/rabbitmq/concepts/appbinding.md @@ -0,0 +1,161 @@ +--- +title: AppBinding CRD +menu: + docs_{{ .version }}: + identifier: rm-appbinding-concepts + name: AppBinding + parent: rm-concepts-guides + weight: 20 +menu_name: docs_{{ .version }} +section_menu_id: guides +--- + +> New to KubeDB? Please start [here](/docs/README.md). + +# AppBinding + +## What is AppBinding + +An `AppBinding` is a Kubernetes `CustomResourceDefinition`(CRD) which points to an application using either its URL (usually for a non-Kubernetes resident service instance) or a Kubernetes service object (if self-hosted in a Kubernetes cluster), some optional parameters and a credential secret. To learn more about AppBinding and the problems it solves, please go through this blog post: [The case for AppBinding](https://appscode.com/blog/post/the-case-for-appbinding). + +If you deploy a database using [KubeDB](https://kubedb.com/docs/latest/concepts/), `AppBinding` object will be created automatically for it. Otherwise, you have to create an `AppBinding` object manually pointing to your desired database. + +## AppBinding CRD Specification + +Like any official Kubernetes resource, an `AppBinding` has `TypeMeta`, `ObjectMeta` and `Spec` sections. However, unlike other Kubernetes resources, it does not have a `Status` section. 
+
+An `AppBinding` object created by `KubeDB` for RabbitMQ is shown below,
+
+```yaml
+apiVersion: appcatalog.appscode.com/v1alpha1
+kind: AppBinding
+metadata:
+  annotations:
+    kubectl.kubernetes.io/last-applied-configuration: |
+      {"apiVersion":"kubedb.com/v1alpha2","kind":"RabbitMQ","metadata":{"annotations":{},"name":"rabbitmq","namespace":"demo"},"spec":{"deletionPolicy":"WipeOut","replicas":3,"storage":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"standard"},"storageType":"Durable","version":"3.13.2"}}
+  creationTimestamp: "2024-07-09T10:16:01Z"
+  generation: 1
+  labels:
+    app.kubernetes.io/component: database
+    app.kubernetes.io/instance: rabbitmq
+    app.kubernetes.io/managed-by: kubedb.com
+    app.kubernetes.io/name: rabbitmqs.kubedb.com
+  name: rabbitmq
+  namespace: demo
+  ownerReferences:
+  - apiVersion: kubedb.com/v1alpha2
+    blockOwnerDeletion: true
+    controller: true
+    kind: RabbitMQ
+    name: rabbitmq
+    uid: a8c4e284-e9ec-41d1-ad5e-646e3209d3bf
+  resourceVersion: "289308"
+  uid: 55b11491-9c3b-4aaf-93a3-17deb62593f0
+spec:
+  appRef:
+    apiGroup: kubedb.com
+    kind: RabbitMQ
+    name: rabbitmq
+    namespace: demo
+  clientConfig:
+    service:
+      name: rabbitmq
+      port: 5672
+      scheme: http
+  secret:
+    name: rabbitmq-admin-cred
+  type: kubedb.com/rabbitmq
+  version: 3.13.2
+```
+
+Here, we are going to describe the sections of an `AppBinding` CRD.
+
+### AppBinding `Spec`
+
+An `AppBinding` object has the following fields in the `spec` section:
+
+#### spec.type
+
+`spec.type` is an optional field that indicates the type of the app that this `AppBinding` is pointing to.
+
+This field follows the format `<TARGET_APP_GROUP>/<TARGET_APP_RESOURCE>`. The above AppBinding is pointing to a `rabbitmq` resource under the `kubedb.com` group.
+
+Here, the variables are parsed as follows:
+
+| Variable              | Usage                                                                                                                              |
+| --------------------- |------------------------------------------------------------------------------------------------------------------------------------|
+| `TARGET_APP_GROUP`    | Represents the application group where the respective app belongs (i.e. `kubedb.com`).                                              |
+| `TARGET_APP_RESOURCE` | Represents the resource under that application group that this appbinding represents (i.e. `rabbitmq`).                             |
+| `TARGET_APP_TYPE`     | Represents the complete type of the application. It's simply `TARGET_APP_GROUP/TARGET_APP_RESOURCE` (i.e. `kubedb.com/rabbitmq`).   |
+
+#### spec.secret
+
+`spec.secret` specifies the name of the secret which contains the credentials that are required to access the database. This secret must be in the same namespace as the `AppBinding`.
+
+This secret must contain the following keys:
+
+RabbitMQ :
+
+| Key        | Usage                                          |
+| ---------- | ---------------------------------------------- |
+| `username` | Username of the target database.               |
+| `password` | Password for the user specified by `username`. |
+
+PostgreSQL :
+
+| Key                 | Usage                                                |
+| ------------------- | ---------------------------------------------------- |
+| `POSTGRES_USER`     | Username of the target database.                     |
+| `POSTGRES_PASSWORD` | Password for the user specified by `POSTGRES_USER`.  |
+
+MySQL :
+
+| Key        | Usage                                          |
+| ---------- | ---------------------------------------------- |
+| `username` | Username of the target database.               |
+| `password` | Password for the user specified by `username`.
| + + +Elasticsearch: + +| Key | Usage | +| ---------------- | ----------------------- | +| `ADMIN_USERNAME` | Admin username | +| `ADMIN_PASSWORD` | Password for admin user | + + +#### spec.appRef +appRef refers to the underlying application. It has 4 fields named `apiGroup`, `kind`, `name` & `namespace`. + +#### spec.clientConfig + +`spec.clientConfig` defines how to communicate with the target database. You can use either an URL or a Kubernetes service to connect with the database. You don't have to specify both of them. + +You can configure following fields in `spec.clientConfig` section: + +- **spec.clientConfig.url** + + `spec.clientConfig.url` gives the location of the database, in standard URL form (i.e. `[scheme://]host:port/[path]`). This is particularly useful when the target database is running outside of the Kubernetes cluster. If your database is running inside the cluster, use `spec.clientConfig.service` section instead. + +> Note that, attempting to use a user or basic auth (e.g. `user:password@host:port`) is not allowed. Stash will insert them automatically from the respective secret. Fragments ("#...") and query parameters ("?...") are not allowed either. + +- **spec.clientConfig.service** + + If you are running the database inside the Kubernetes cluster, you can use Kubernetes service to connect with the database. You have to specify the following fields in `spec.clientConfig.service` section if you manually create an `AppBinding` object. + + - **name :** `name` indicates the name of the service that connects with the target database. + - **scheme :** `scheme` specifies the scheme (i.e. http, https) to use to connect with the database. + - **port :** `port` specifies the port where the target database is running. + +- **spec.clientConfig.insecureSkipTLSVerify** + + `spec.clientConfig.insecureSkipTLSVerify` is used to disable TLS certificate verification while connecting with the database. We strongly discourage to disable TLS verification during backup. You should provide the respective CA bundle through `spec.clientConfig.caBundle` field instead. + +- **spec.clientConfig.caBundle** + + `spec.clientConfig.caBundle` is a PEM encoded CA bundle which will be used to validate the serving certificate of the database. + +## Next Steps + +- Learn how to use KubeDB to manage various databases [here](/docs/guides/README.md). +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md). diff --git a/docs/guides/rabbitmq/concepts/autoscaler.md b/docs/guides/rabbitmq/concepts/autoscaler.md new file mode 100644 index 0000000000..a854479d73 --- /dev/null +++ b/docs/guides/rabbitmq/concepts/autoscaler.md @@ -0,0 +1,222 @@ +--- +title: RabbitMQAutoscaler CRD +menu: + docs_{{ .version }}: + identifier: rm-autoscaler-concepts + name: RabbitMQAutoscaler + parent: rm-concepts-guides + weight: 26 +menu_name: docs_{{ .version }} +section_menu_id: guides +--- + +> New to KubeDB? Please start [here](/docs/README.md). + +# RabbitMQAutoscaler + +## What is RabbitMQAutoscaler + +`RabbitMQAutoscaler` is a Kubernetes `Custom Resource Definitions` (CRD). It provides a declarative configuration for autoscaling [RabbitMQ](https://www.rabbitmq.com/) compute resources and storage of database components in a Kubernetes native way. + +## RabbitMQAutoscaler CRD Specifications + +Like any official Kubernetes resource, a `RabbitMQAutoscaler` has `TypeMeta`, `ObjectMeta`, `Spec` and `Status` sections. 
+ +Here, some sample `RabbitMQAutoscaler` CROs for autoscaling different components of database is given below: + +**Sample `RabbitMQAutoscaler` for standalone database:** + +```yaml +apiVersion: autoscaling.kubedb.com/v1alpha1 +kind: RabbitMQAutoscaler +metadata: + name: mg-as + namespace: demo +spec: + databaseRef: + name: mg-standalone + opsRequestOptions: + readinessCriteria: + oplogMaxLagSeconds: 20 + objectsCountDiffPercentage: 10 + timeout: 3m + apply: IfReady + compute: + standalone: + trigger: "On" + podLifeTimeThreshold: 24h + minAllowed: + cpu: 250m + memory: 350Mi + maxAllowed: + cpu: 1 + memory: 1Gi + controlledResources: ["cpu", "memory"] + containerControlledValues: "RequestsAndLimits" + resourceDiffPercentage: 10 + storage: + standalone: + expansionMode: "Online" + trigger: "On" + usageThreshold: 60 + scalingThreshold: 50 +``` + +**Sample `RabbitMQAutoscaler` for replicaset database:** + +```yaml +apiVersion: autoscaling.kubedb.com/v1alpha1 +kind: RabbitMQAutoscaler +metadata: + name: mg-as-rs + namespace: demo +spec: + databaseRef: + name: mg-rs + opsRequestOptions: + readinessCriteria: + oplogMaxLagSeconds: 20 + objectsCountDiffPercentage: 10 + timeout: 3m + apply: IfReady + compute: + replicaSet: + trigger: "On" + podLifeTimeThreshold: 24h + minAllowed: + cpu: 200m + memory: 300Mi + maxAllowed: + cpu: 1 + memory: 1Gi + controlledResources: ["cpu", "memory"] + containerControlledValues: "RequestsAndLimits" + resourceDiffPercentage: 10 + storage: + replicaSet: + expansionMode: "Online" + trigger: "On" + usageThreshold: 60 + scalingThreshold: 50 +``` + +**Sample `RabbitMQAutoscaler` for sharded database:** + +```yaml +apiVersion: autoscaling.kubedb.com/v1alpha1 +kind: RabbitMQAutoscaler +metadata: + name: mg-as-sh + namespace: demo +spec: + databaseRef: + name: mg-sh + opsRequestOptions: + readinessCriteria: + oplogMaxLagSeconds: 20 + objectsCountDiffPercentage: 10 + timeout: 3m + apply: IfReady + compute: + shard: + trigger: "On" + podLifeTimeThreshold: 24h + minAllowed: + cpu: 250m + memory: 350Mi + maxAllowed: + cpu: 1 + memory: 1Gi + controlledResources: ["cpu", "memory"] + containerControlledValues: "RequestsAndLimits" + resourceDiffPercentage: 10 + configServer: + trigger: "On" + podLifeTimeThreshold: 24h + minAllowed: + cpu: 250m + memory: 350Mi + maxAllowed: + cpu: 1 + memory: 1Gi + controlledResources: ["cpu", "memory"] + containerControlledValues: "RequestsAndLimits" + resourceDiffPercentage: 10 + mongos: + trigger: "On" + podLifeTimeThreshold: 24h + minAllowed: + cpu: 250m + memory: 350Mi + maxAllowed: + cpu: 1 + memory: 1Gi + controlledResources: ["cpu", "memory"] + containerControlledValues: "RequestsAndLimits" + resourceDiffPercentage: 10 + storage: + shard: + expansionMode: "Online" + trigger: "On" + usageThreshold: 60 + scalingThreshold: 50 + configServer: + expansionMode: "Online" + trigger: "On" + usageThreshold: 60 + scalingThreshold: 50 +``` + +Here, we are going to describe the various sections of a `RabbitMQAutoscaler` crd. + +A `RabbitMQAutoscaler` object has the following fields in the `spec` section. + +### spec.databaseRef + +`spec.databaseRef` is a required field that point to the [RabbitMQ](/docs/guides/RabbitMQ/concepts/RabbitMQ.md) object for which the autoscaling will be performed. This field consists of the following sub-field: + +- **spec.databaseRef.name :** specifies the name of the [RabbitMQ](/docs/guides/RabbitMQ/concepts/RabbitMQ.md) object. 
### spec.opsRequestOptions

These are the options to pass to the internally created opsRequest CRO. `opsRequestOptions` has three fields. They have been described in detail [here](/docs/guides/rabbitmq/concepts/opsrequest.md#specreadinesscriteria).

### spec.compute

`spec.compute` specifies the autoscaling configuration for the compute resources, i.e. cpu and memory, of the database components. This field consists of the following sub-fields:

- `spec.compute.standalone` indicates the desired compute autoscaling configuration for a standalone RabbitMQ database.
- `spec.compute.replicaSet` indicates the desired compute autoscaling configuration for the replicaSet of a RabbitMQ database.
- `spec.compute.configServer` indicates the desired compute autoscaling configuration for the config servers of a sharded RabbitMQ database.
- `spec.compute.mongos` indicates the desired compute autoscaling configuration for the mongos nodes of a sharded RabbitMQ database.
- `spec.compute.shard` indicates the desired compute autoscaling configuration for the shard nodes of a sharded RabbitMQ database.
- `spec.compute.arbiter` indicates the desired compute autoscaling configuration for the arbiter node.

All of them have the following sub-fields:

- `trigger` indicates if compute autoscaling is enabled for this component of the database. If "On", compute autoscaling is enabled; if "Off", it is disabled.
- `minAllowed` specifies the minimal amount of resources that will be recommended; the default is no minimum.
- `maxAllowed` specifies the maximum amount of resources that will be recommended; the default is no maximum.
- `controlledResources` specifies which types of compute resources are allowed for autoscaling. Allowed values are "cpu" and "memory".
- `containerControlledValues` specifies which resource values should be controlled. Allowed values are "RequestsAndLimits" and "RequestsOnly".
- `resourceDiffPercentage` specifies the minimum difference between the recommended value and the current value, in percentage. If the difference percentage is greater than this value, then autoscaling will be triggered.
- `podLifeTimeThreshold` specifies the minimum pod lifetime of at least one of the pods before triggering autoscaling.

There are two more fields, which are only applicable to the percona variant inMemory databases:
- `inMemoryStorage.UsageThresholdPercentage`: if the db uses more than usageThresholdPercentage of the total memory, memoryStorage should be increased.
- `inMemoryStorage.ScalingFactorPercentage`: if the db uses more than usageThresholdPercentage of the total memory, memoryStorage should be increased by this given scaling percentage.

### spec.storage

`spec.storage` specifies the autoscaling configuration for the storage resources of the database components. This field consists of the following sub-fields:

- `spec.storage.standalone` indicates the desired storage autoscaling configuration for a standalone RabbitMQ database.
- `spec.storage.replicaSet` indicates the desired storage autoscaling configuration for the replicaSet of a RabbitMQ database.
- `spec.storage.configServer` indicates the desired storage autoscaling configuration for the config servers of a sharded RabbitMQ database.
- `spec.storage.shard` indicates the desired storage autoscaling configuration for the shard nodes of a sharded RabbitMQ database.

All of them have the following sub-fields:

- `trigger` indicates if storage autoscaling is enabled for this component of the database. If "On", storage autoscaling is enabled; if "Off", it is disabled.
- `usageThreshold` indicates the usage percentage threshold; if the current storage usage exceeds it, storage autoscaling will be triggered.
- `scalingThreshold` indicates the percentage of the current storage size by which the volume will be expanded.
- `expansionMode` indicates the volume expansion mode.
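For instance, using the threshold values from the samples above, the effect of these fields can be sketched as follows (the expansion arithmetic is illustrative):

```yaml
spec:
  storage:
    replicaSet:
      trigger: "On"
      expansionMode: "Online"
      usageThreshold: 60    # trigger expansion once usage crosses 60% of capacity
      scalingThreshold: 50  # expand by 50% of the current size, e.g. 10Gi -> 15Gi
```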
diff --git a/docs/guides/rabbitmq/concepts/catalog.md b/docs/guides/rabbitmq/concepts/catalog.md
new file mode 100644
index 0000000000..0d78ff221c
--- /dev/null
+++ b/docs/guides/rabbitmq/concepts/catalog.md
@@ -0,0 +1,92 @@
---
title: RabbitMQVersion CRD
menu:
  docs_{{ .version }}:
    identifier: mg-catalog-concepts
    name: RabbitMQVersion
    parent: mg-concepts-RabbitMQ
    weight: 15
menu_name: docs_{{ .version }}
section_menu_id: guides
---

> New to KubeDB? Please start [here](/docs/README.md).

# RabbitMQVersion

## What is RabbitMQVersion

`RabbitMQVersion` is a Kubernetes `Custom Resource Definition` (CRD). It provides a declarative configuration to specify the docker images to be used for a [RabbitMQ](https://www.rabbitmq.com/) database deployed with KubeDB in a Kubernetes native way.

When you install KubeDB, a `RabbitMQVersion` custom resource will be created automatically for every supported RabbitMQ version. You have to specify the name of the `RabbitMQVersion` crd in the `spec.version` field of the [RabbitMQ](/docs/guides/rabbitmq/concepts/rabbitmq.md) crd. Then, KubeDB will use the docker images specified in the `RabbitMQVersion` crd to create your expected database.

Using a separate crd for specifying the respective docker images and pod security policy names allows us to modify the images and policies independently of the KubeDB operator. It also allows users to use a custom image for the database.

## RabbitMQVersion Spec

As with all other Kubernetes objects, a RabbitMQVersion needs `apiVersion`, `kind`, and `metadata` fields. It also needs a `.spec` section. You can get a `RabbitMQVersion` CR with a simple kubectl command:

```bash
$ kubectl get rmversion 3.13.2 -oyaml
```

```yaml
apiVersion: catalog.kubedb.com/v1alpha1
kind: RabbitMQVersion
metadata:
  annotations:
    meta.helm.sh/release-name: kubedb-catalog
    meta.helm.sh/release-namespace: kubedb
  creationTimestamp: "2024-08-22T12:37:56Z"
  generation: 2
  labels:
    app.kubernetes.io/instance: kubedb-catalog
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kubedb-catalog
    app.kubernetes.io/version: v2024.8.21
    helm.sh/chart: kubedb-catalog-v2024.8.21
  name: 3.13.2
  resourceVersion: "262093"
  uid: 8cc7b931-a22a-41eb-a9ba-9c3247436326
spec:
  db:
    image: ghcr.io/appscode-images/rabbitmq:3.13.2-management-alpine
  initContainer:
    image: raihankhanraka/rabbitmq-init:3.13.2
  securityContext:
    runAsUser: 999
  version: 3.13.2
```

### metadata.name

`metadata.name` is a required field that specifies the name of the `RabbitMQVersion` crd. You have to specify this name in the `spec.version` field of the [RabbitMQ](/docs/guides/rabbitmq/concepts/rabbitmq.md) crd.

We follow this convention for naming RabbitMQVersion crds:

- Name format: `{Original RabbitMQ image version}-{modification tag}`

We modify the original RabbitMQ docker image to support RabbitMQ clustering and re-tag the image with a v1, v2 etc. modification tag. An image with a higher modification tag will have more features than an image with a lower modification tag. Hence, it is recommended to use the RabbitMQVersion crd with the highest modification tag to enjoy the latest features.
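For example, a RabbitMQ object would reference the `3.13.2` catalog entry shown above like this (a minimal sketch; the object name is illustrative):

```yaml
apiVersion: kubedb.com/v1alpha2
kind: RabbitMQ
metadata:
  name: rm-demo
  namespace: demo
spec:
  version: "3.13.2"  # must match the metadata.name of a RabbitMQVersion CR
  replicas: 3
```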
### spec.version

`spec.version` is a required field that specifies the original version of the RabbitMQ database that has been used to build the docker image specified in the `spec.db.image` field.

### spec.deprecated

`spec.deprecated` is an optional field that specifies whether the docker images specified here are supported by the current KubeDB operator.

The default value of this field is `false`. If `spec.deprecated` is set to `true`, the KubeDB operator will skip processing this CRD object and will add an event to the CRD object specifying that the DB version is deprecated.

### spec.db.image

`spec.db.image` is a required field that specifies the docker image that the KubeDB operator will use when creating the PetSet for the expected RabbitMQ database.

### spec.initContainer.image

`spec.initContainer.image` is a required field that specifies the image for the init container.

## Next Steps

- Learn about the RabbitMQ crd [here](/docs/guides/rabbitmq/concepts/rabbitmq.md).
- Deploy your first RabbitMQ database with KubeDB by following the guide [here](/docs/guides/rabbitmq/concepts/rabbitmq.md).
diff --git a/docs/guides/rabbitmq/concepts/opsrequest.md b/docs/guides/rabbitmq/concepts/opsrequest.md
new file mode 100644
index 0000000000..69739e286d
--- /dev/null
+++ b/docs/guides/rabbitmq/concepts/opsrequest.md
@@ -0,0 +1,783 @@
---
title: RabbitMQOpsRequests CRD
menu:
  docs_{{ .version }}:
    identifier: mg-opsrequest-concepts
    name: RabbitMQOpsRequest
    parent: mg-concepts-RabbitMQ
    weight: 25
menu_name: docs_{{ .version }}
section_menu_id: guides
---

> New to KubeDB? Please start [here](/docs/README.md).

# RabbitMQOpsRequest

## What is RabbitMQOpsRequest

`RabbitMQOpsRequest` is a Kubernetes `Custom Resource Definition` (CRD). It provides a declarative configuration for [RabbitMQ](https://www.rabbitmq.com/) administrative operations like database version updating, horizontal scaling, vertical scaling etc. in a Kubernetes native way.

## RabbitMQOpsRequest CRD Specifications

Like any official Kubernetes resource, a `RabbitMQOpsRequest` has `TypeMeta`, `ObjectMeta`, `Spec` and `Status` sections.
+ +Here, some sample `RabbitMQOpsRequest` CRs for different administrative operations is given below: + +**Sample `RabbitMQOpsRequest` for updating database:** + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: RabbitMQOpsRequest +metadata: + name: mops-update + namespace: demo +spec: + type: UpdateVersion + databaseRef: + name: mg-standalone + updateVersion: + targetVersion: 4.4.26 +status: + conditions: + - lastTransitionTime: "2020-08-25T18:22:38Z" + message: Successfully completed the modification process + observedGeneration: 1 + reason: Successful + status: "True" + type: Successful + observedGeneration: 1 + phase: Successful +``` + +**Sample `RabbitMQOpsRequest` Objects for Horizontal Scaling of different component of the database:** + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: RabbitMQOpsRequest +metadata: + name: mops-hscale-configserver + namespace: demo +spec: + type: HorizontalScaling + databaseRef: + name: mg-sharding + horizontalScaling: + shard: + shards: 3 + replicas: 3 + configServer: + replicas: 3 + mongos: + replicas: 2 +status: + conditions: + - lastTransitionTime: "2020-08-25T18:22:38Z" + message: Successfully completed the modification process + observedGeneration: 1 + reason: Successful + status: "True" + type: Successful + observedGeneration: 1 + phase: Successful +``` + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: RabbitMQOpsRequest +metadata: + name: mops-hscale-down-replicaset + namespace: demo +spec: + type: HorizontalScaling + databaseRef: + name: mg-replicaset + horizontalScaling: + replicas: 3 +status: + conditions: + - lastTransitionTime: "2020-08-25T18:22:38Z" + message: Successfully completed the modification process + observedGeneration: 1 + reason: Successful + status: "True" + type: Successful + observedGeneration: 1 + phase: Successful +``` + +**Sample `RabbitMQOpsRequest` Objects for Vertical Scaling of different component of the database:** + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: RabbitMQOpsRequest +metadata: + name: mops-vscale-configserver + namespace: demo +spec: + type: VerticalScaling + databaseRef: + name: mg-sharding + verticalScaling: + configServer: + resources: + requests: + memory: "150Mi" + cpu: "0.1" + limits: + memory: "250Mi" + cpu: "0.2" + mongos: + resources: + requests: + memory: "150Mi" + cpu: "0.1" + limits: + memory: "250Mi" + cpu: "0.2" + shard: + resources: + requests: + memory: "150Mi" + cpu: "0.1" + limits: + memory: "250Mi" + cpu: "0.2" +status: + conditions: + - lastTransitionTime: "2020-08-25T18:22:38Z" + message: Successfully completed the modification process + observedGeneration: 1 + reason: Successful + status: "True" + type: Successful + observedGeneration: 1 + phase: Successful +``` + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: RabbitMQOpsRequest +metadata: + name: mops-vscale-standalone + namespace: demo +spec: + type: VerticalScaling + databaseRef: + name: mg-standalone + verticalScaling: + standalone: + resources: + requests: + memory: "150Mi" + cpu: "0.1" + limits: + memory: "250Mi" + cpu: "0.2" +status: + conditions: + - lastTransitionTime: "2020-08-25T18:22:38Z" + message: Successfully completed the modification process + observedGeneration: 1 + reason: Successful + status: "True" + type: Successful + observedGeneration: 1 + phase: Successful +``` + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: RabbitMQOpsRequest +metadata: + name: mops-vscale-replicaset + namespace: demo +spec: + type: VerticalScaling + databaseRef: + name: mg-replicaset + verticalScaling: + 
replicaSet: + resources: + requests: + memory: "150Mi" + cpu: "0.1" + limits: + memory: "250Mi" + cpu: "0.2" +status: + conditions: + - lastTransitionTime: "2020-08-25T18:22:38Z" + message: Successfully completed the modification process + observedGeneration: 1 + reason: Successful + status: "True" + type: Successful + observedGeneration: 1 + phase: Successful +``` + +**Sample `RabbitMQOpsRequest` Objects for Reconfiguring different database components:** + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: RabbitMQOpsRequest +metadata: + name: mops-reconfiugre-data-replicaset + namespace: demo +spec: + type: Reconfigure + databaseRef: + name: mg-replicaset + configuration: + replicaSet: + applyConfig: + mongod.conf: |- + net: + maxIncomingConnections: 30000 +status: + conditions: + - lastTransitionTime: "2020-08-25T18:22:38Z" + message: Successfully completed the modification process + observedGeneration: 1 + reason: Successful + status: "True" + type: Successful + observedGeneration: 1 + phase: Successful +``` + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: RabbitMQOpsRequest +metadata: + name: mops-reconfiugre-data-shard + namespace: demo +spec: + type: Reconfigure + databaseRef: + name: mg-sharding + configuration: + shard: + applyConfig: + mongod.conf: |- + net: + maxIncomingConnections: 30000 + configServer: + applyConfig: + mongod.conf: |- + net: + maxIncomingConnections: 30000 + mongos: + applyConfig: + mongod.conf: |- + net: + maxIncomingConnections: 30000 +status: + conditions: + - lastTransitionTime: "2020-08-25T18:22:38Z" + message: Successfully completed the modification process + observedGeneration: 1 + reason: Successful + status: "True" + type: Successful + observedGeneration: 1 + phase: Successful +``` + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: RabbitMQOpsRequest +metadata: + name: mops-reconfiugre-data-standalone + namespace: demo +spec: + type: Reconfigure + databaseRef: + name: mg-standalone + configuration: + standalone: + applyConfig: + mongod.conf: |- + net: + maxIncomingConnections: 30000 +status: + conditions: + - lastTransitionTime: "2020-08-25T18:22:38Z" + message: Successfully completed the modification process + observedGeneration: 1 + reason: Successful + status: "True" + type: Successful + observedGeneration: 1 + phase: Successful +``` + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: RabbitMQOpsRequest +metadata: + name: mops-reconfiugre-replicaset + namespace: demo +spec: + type: Reconfigure + databaseRef: + name: mg-replicaset + configuration: + replicaSet: + configSecret: + name: new-custom-config +status: + conditions: + - lastTransitionTime: "2020-08-25T18:22:38Z" + message: Successfully completed the modification process + observedGeneration: 1 + reason: Successful + status: "True" + type: Successful + observedGeneration: 1 + phase: Successful +``` + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: RabbitMQOpsRequest +metadata: + name: mops-reconfiugre-shard + namespace: demo +spec: + type: Reconfigure + databaseRef: + name: mg-sharding + configuration: + shard: + configSecret: + name: new-custom-config + configServer: + configSecret: + name: new-custom-config + mongos: + configSecret: + name: new-custom-config +status: + conditions: + - lastTransitionTime: "2020-08-25T18:22:38Z" + message: Successfully completed the modification process + observedGeneration: 1 + reason: Successful + status: "True" + type: Successful + observedGeneration: 1 + phase: Successful +``` + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: 
RabbitMQOpsRequest +metadata: + name: mops-reconfiugre-standalone + namespace: demo +spec: + type: Reconfigure + databaseRef: + name: mg-standalone + configuration: + standalone: + configSecret: + name: new-custom-config +status: + conditions: + - lastTransitionTime: "2020-08-25T18:22:38Z" + message: Successfully completed the modification process + observedGeneration: 1 + reason: Successful + status: "True" + type: Successful + observedGeneration: 1 + phase: Successful +``` + +**Sample `RabbitMQOpsRequest` Objects for Volume Expansion of different database components:** + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: RabbitMQOpsRequest +metadata: + name: mops-volume-exp-replicaset + namespace: demo +spec: + type: VolumeExpansion + databaseRef: + name: mg-replicaset + volumeExpansion: + mode: "Online" + replicaSet: 2Gi +status: + conditions: + - lastTransitionTime: "2020-08-25T18:22:38Z" + message: Successfully completed the modification process + observedGeneration: 1 + reason: Successful + status: "True" + type: Successful + observedGeneration: 1 + phase: Successful +``` + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: RabbitMQOpsRequest +metadata: + name: mops-volume-exp-shard + namespace: demo +spec: + type: VolumeExpansion + databaseRef: + name: mg-sharding + volumeExpansion: + mode: "Online" + shard: 2Gi + configServer: 2Gi +status: + conditions: + - lastTransitionTime: "2020-08-25T18:22:38Z" + message: Successfully completed the modification process + observedGeneration: 1 + reason: Successful + status: "True" + type: Successful + observedGeneration: 1 + phase: Successful +``` + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: RabbitMQOpsRequest +metadata: + name: mops-volume-exp-standalone + namespace: demo +spec: + type: VolumeExpansion + databaseRef: + name: mg-standalone + volumeExpansion: + mode: "Online" + standalone: 2Gi +status: + conditions: + - lastTransitionTime: "2020-08-25T18:22:38Z" + message: Successfully completed the modification process + observedGeneration: 1 + reason: Successful + status: "True" + type: Successful + observedGeneration: 1 + phase: Successful +``` + +**Sample `RabbitMQOpsRequest` Objects for Reconfiguring TLS of the database:** + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: RabbitMQOpsRequest +metadata: + name: mops-add-tls + namespace: demo +spec: + type: ReconfigureTLS + databaseRef: + name: mg-rs + tls: + issuerRef: + name: mg-issuer + kind: Issuer + apiGroup: "cert-manager.io" + certificates: + - alias: client + emailAddresses: + - abc@appscode.com +``` + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: RabbitMQOpsRequest +metadata: + name: mops-rotate + namespace: demo +spec: + type: ReconfigureTLS + databaseRef: + name: mg-rs + tls: + rotateCertificates: true +``` + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: RabbitMQOpsRequest +metadata: + name: mops-change-issuer + namespace: demo +spec: + type: ReconfigureTLS + databaseRef: + name: mg-rs + tls: + issuerRef: + name: mg-new-issuer + kind: Issuer + apiGroup: "cert-manager.io" +``` + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: RabbitMQOpsRequest +metadata: + name: mops-remove + namespace: demo +spec: + type: ReconfigureTLS + databaseRef: + name: mg-rs + tls: + remove: true +``` + +Here, we are going to describe the various sections of a `RabbitMQOpsRequest` crd. + +A `RabbitMQOpsRequest` object has the following fields in the `spec` section. 
### spec.databaseRef

`spec.databaseRef` is a required field that points to the [RabbitMQ](/docs/guides/rabbitmq/concepts/rabbitmq.md) object for which the administrative operations will be performed. This field consists of the following sub-field:

- **spec.databaseRef.name :** specifies the name of the [RabbitMQ](/docs/guides/rabbitmq/concepts/rabbitmq.md) object.

### spec.type

`spec.type` specifies the kind of operation that will be applied to the database. Currently, the following types of operations are allowed in `RabbitMQOpsRequest`:

- `Upgrade` / `UpdateVersion`
- `HorizontalScaling`
- `VerticalScaling`
- `VolumeExpansion`
- `Reconfigure`
- `ReconfigureTLS`
- `Restart`

> You can perform only one type of operation with a single `RabbitMQOpsRequest` CR. For example, if you want to update your database and scale up its replicas, then you have to create two separate `RabbitMQOpsRequest` CRs. At first, you have to create a `RabbitMQOpsRequest` for updating. Once it is completed, you can create another `RabbitMQOpsRequest` for scaling.

> Note: There is an exception to the above statement. It is possible to specify both `spec.configuration` & `spec.verticalScaling` in an OpsRequest of type `VerticalScaling`.

### spec.updateVersion

If you want to update your RabbitMQ version, you have to specify the `spec.updateVersion` section that specifies the desired version information. This field consists of the following sub-field:

- `spec.updateVersion.targetVersion` refers to a [RabbitMQVersion](/docs/guides/rabbitmq/concepts/catalog.md) CR that contains the RabbitMQ version information to which you want to update.

Have a look at the [`updateConstraints`](/docs/guides/rabbitmq/concepts/catalog.md#specupdateconstraints) of the RabbitMQVersion spec to know which versions are supported for updating from the current version.

```bash
kubectl get mgversion -o=jsonpath='{.spec.updateConstraints}' | jq
```

> You can only update between RabbitMQ versions. KubeDB does not support downgrades for RabbitMQ.

### spec.horizontalScaling

If you want to scale up or scale down your RabbitMQ cluster or different components of it, you have to specify the `spec.horizontalScaling` section. This field consists of the following sub-fields:

- `spec.horizontalScaling.replicas` indicates the desired number of nodes for the RabbitMQ replicaset cluster after scaling. For example, if your cluster currently has 4 replicaset nodes and you want to add an additional 2 nodes, then you have to specify 6 in the `spec.horizontalScaling.replicas` field. Similarly, if you want to remove one node from the cluster, you have to specify 3 in the `spec.horizontalScaling.replicas` field.
- `spec.horizontalScaling.configServer.replicas` indicates the desired number of ConfigServer nodes for a sharded RabbitMQ cluster after scaling.
- `spec.horizontalScaling.mongos.replicas` indicates the desired number of Mongos nodes for a sharded RabbitMQ cluster after scaling.
- `spec.horizontalScaling.shard` indicates the configuration of shard nodes for a sharded RabbitMQ cluster after scaling. This field consists of the following sub-fields:
  - `spec.horizontalScaling.shard.replicas` indicates the number of replicas each shard will have after scaling.
  - `spec.horizontalScaling.shard.shards` indicates the number of shards after scaling.
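Following the example above, an ops request that grows a 4-member replicaset to 6 members might look like this sketch (the object name is illustrative):

```yaml
apiVersion: ops.kubedb.com/v1alpha1
kind: RabbitMQOpsRequest
metadata:
  name: rm-hscale-up
  namespace: demo
spec:
  type: HorizontalScaling
  databaseRef:
    name: mg-replicaset
  horizontalScaling:
    replicas: 6  # the desired total, not the number of nodes to add
```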
### spec.verticalScaling

`spec.verticalScaling` is a required field specifying the information of `RabbitMQ` resources like `cpu`, `memory` etc. that will be scaled. This field consists of the following sub-fields:

- `spec.verticalScaling.standalone` indicates the desired resources for a standalone RabbitMQ database after scaling.
- `spec.verticalScaling.replicaSet` indicates the desired resources for the replicaSet of a RabbitMQ database after scaling.
- `spec.verticalScaling.mongos` indicates the desired resources for the Mongos nodes of a sharded RabbitMQ database after scaling.
- `spec.verticalScaling.configServer` indicates the desired resources for the ConfigServer nodes of a sharded RabbitMQ database after scaling.
- `spec.verticalScaling.shard` indicates the desired resources for the Shard nodes of a sharded RabbitMQ database after scaling.
- `spec.verticalScaling.exporter` indicates the desired resources for the `exporter` container.
- `spec.verticalScaling.arbiter` indicates the desired resources for the arbiter node of a RabbitMQ database after scaling.
- `spec.verticalScaling.coordinator` indicates the desired resources for the coordinator container.

All of them have the below structure:

```yaml
requests:
  memory: "200Mi"
  cpu: "0.1"
limits:
  memory: "300Mi"
  cpu: "0.2"
```

Here, when you specify the resource request, the scheduler uses this information to decide which node to place the container of the Pod on. When you specify a resource limit for the container, the `kubelet` enforces those limits so that the running container is not allowed to use more of that resource than the limit you set. You can find more details [here](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/).

### spec.volumeExpansion

> To use the volume expansion feature, the storage class must support volume expansion.

If you want to expand the volume of your RabbitMQ cluster or different components of it, you have to specify the `spec.volumeExpansion` section. This field consists of the following sub-fields:

- `spec.volumeExpansion.mode` specifies the volume expansion mode. Supported values are `Online` & `Offline`. The default is `Online`.
- `spec.volumeExpansion.standalone` indicates the desired size for the persistent volume of a standalone RabbitMQ database.
- `spec.volumeExpansion.replicaSet` indicates the desired size for the persistent volumes of the replicaSet of a RabbitMQ database.
- `spec.volumeExpansion.configServer` indicates the desired size for the persistent volumes of the config server of a sharded RabbitMQ database.
- `spec.volumeExpansion.shard` indicates the desired size for the persistent volumes of the shards of a sharded RabbitMQ database.

All of them refer to the [Quantity](https://v1-22.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.22/#quantity-resource-core) type of Kubernetes.

Example usage of this field is given below:

```yaml
spec:
  volumeExpansion:
    shard: "2Gi"
```

This will expand the volume size of all the shard nodes to 2Gi.

### spec.configuration

If you want to reconfigure your running RabbitMQ cluster or different components of it with a new custom configuration, you have to specify the `spec.configuration` section. This field consists of the following sub-fields:

- `spec.configuration.standalone` indicates the desired new custom configuration for a standalone RabbitMQ database.
- `spec.configuration.replicaSet` indicates the desired new custom configuration for the replicaSet of a RabbitMQ database.
- `spec.configuration.configServer` indicates the desired new custom configuration for the config servers of a sharded RabbitMQ database.
- `spec.configuration.mongos` indicates the desired new custom configuration for the mongos nodes of a sharded RabbitMQ database.
- `spec.configuration.shard` indicates the desired new custom configuration for the shard nodes of a sharded RabbitMQ database.
- `spec.configuration.arbiter` indicates the desired new custom configuration for the arbiter node of a RabbitMQ database.

All of them have the following sub-fields:

- `configSecret` points to a secret in the same namespace as the RabbitMQ resource, which contains the new custom configurations. If any configSecret was set before in the database, this secret will replace it.
- `applyConfig` contains the new custom config as a string, which will be merged with the previous configuration. It is a map whose key supports 3 values, namely `mongod.conf`, `replicaset.json` and `configuration.js`, and whose value represents the corresponding configuration. For your information, `replicaset.json` is used to modify replica set configurations, which we see in the output of `rs.config()`, and `configuration.js` is used to apply a js script to configure the database at runtime. The KubeDB provisioner operator applies these two directly while reconciling.

```yaml
  applyConfig:
    configuration.js: |
      print("hello world!!!!")
    replicaset.json: |
      {
        "settings" : {
          "electionTimeoutMillis" : 4000
        }
      }
    mongod.conf: |
      net:
        maxIncomingConnections: 30000
```

- `removeCustomConfig` is a boolean field. Set this field to true if you want to remove all the custom configuration from the deployed RabbitMQ server.

### spec.tls

If you want to reconfigure the TLS configuration of your database, i.e. add TLS, remove TLS, update the issuer/cluster issuer or certificates, or rotate the certificates, you have to specify the `spec.tls` section. This field consists of the following sub-fields:

- `spec.tls.issuerRef` specifies the issuer name, kind and api group.
- `spec.tls.certificates` specifies the certificates. You can learn more about this field from [here](/docs/guides/rabbitmq/concepts/rabbitmq.md#spectls).
- `spec.tls.rotateCertificates` specifies that we want to rotate the certificates of this database.
- `spec.tls.remove` specifies that we want to remove TLS from this database.

### spec.readinessCriteria

`spec.readinessCriteria` is the criteria for checking the readiness of a RabbitMQ pod after restarting it. It has two fields:
- `spec.readinessCriteria.oplogMaxLagSeconds` defines the maximum allowed lag between the primary & secondary.
- `spec.readinessCriteria.objectsCountDiffPercentage` denotes the maximum allowed object-count difference between the primary & secondary.

```yaml
...
spec:
  readinessCriteria:
    oplogMaxLagSeconds: 20
    objectsCountDiffPercentage: 10
...
```

Exceeding these thresholds results in opsRequest failure. Note that the readinessCriteria field only has an effect if the opsRequest type involves restarting pods.

### spec.timeout

As the operator internally retries the ops request steps multiple times, this `timeout` field lets users specify a timeout for those steps of the ops request (e.g. `3m`). If a step doesn't finish within the specified timeout, the ops request will fail.

### spec.apply

This field controls the execution of the opsRequest depending on the database state. It has two supported values: `Always` & `IfReady`. Use `IfReady` if you want to process the opsRequest only when the database is Ready, and use `Always` if you want to execute the opsRequest irrespective of the database state.
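As a sketch, these two fields sit at the top level of the ops request spec, as in the autoscaler-generated requests shown earlier:

```yaml
spec:
  # fail the request if any retried step runs longer than this
  timeout: 3m
  # only start executing while the referenced database is Ready
  apply: IfReady
```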
### RabbitMQOpsRequest `Status`

`.status` describes the current state and progress of a `RabbitMQOpsRequest` operation. It has the following fields:

### status.phase

`status.phase` indicates the overall phase of the operation for this `RabbitMQOpsRequest`. It can have the following values:

| Phase       | Meaning                                                                              |
|-------------|--------------------------------------------------------------------------------------|
| Successful  | KubeDB has successfully performed the operation requested in the RabbitMQOpsRequest |
| Progressing | KubeDB has started the execution of the applied RabbitMQOpsRequest                  |
| Failed      | KubeDB has failed the operation requested in the RabbitMQOpsRequest                 |
| Denied      | KubeDB has denied the operation requested in the RabbitMQOpsRequest                 |
| Skipped     | KubeDB has skipped the operation requested in the RabbitMQOpsRequest                |

Important: The ops-manager operator can skip an opsRequest only if its execution has not started yet and a newer opsRequest of the same `spec.type` has been applied in the cluster.

### status.observedGeneration

`status.observedGeneration` shows the most recent generation observed by the `RabbitMQOpsRequest` controller.

### status.conditions

`status.conditions` is an array that specifies the conditions of different steps of `RabbitMQOpsRequest` processing. Each condition entry has the following fields:

- `type` specifies the type of the condition. RabbitMQOpsRequest has the following types of conditions:

| Type                          | Meaning                                                                    |
| ----------------------------- | -------------------------------------------------------------------------- |
| `Progressing`                 | Specifies that the operation is now in the progressing state               |
| `Successful`                  | Specifies such a state that the operation on the database was successful   |
| `HaltDatabase`                | Specifies such a state that the database is halted by the operator         |
| `ResumeDatabase`              | Specifies such a state that the database is resumed by the operator        |
| `Failed`                      | Specifies such a state that the operation on the database failed           |
| `StartingBalancer`            | Specifies such a state that the balancer has successfully started          |
| `StoppingBalancer`            | Specifies such a state that the balancer has successfully stopped          |
| `UpdateShardImage`            | Specifies such a state that the Shard images have been updated             |
| `UpdateReplicaSetImage`       | Specifies such a state that the Replicaset image has been updated          |
| `UpdateConfigServerImage`     | Specifies such a state that the ConfigServer image has been updated        |
| `UpdateMongosImage`           | Specifies such a state that the Mongos image has been updated              |
| `UpdateStatefulSetResources`  | Specifies such a state that the Statefulset resources have been updated    |
| `UpdateShardResources`        | Specifies such a state that the Shard resources have been updated          |
| `UpdateReplicaSetResources`   | Specifies such a state that the Replicaset resources have been updated     |
| `UpdateConfigServerResources` | Specifies such a state that the ConfigServer resources have been updated   |
| `UpdateMongosResources`       | Specifies such a state that the Mongos resources have been updated         |
| `ScaleDownReplicaSet`         | Specifies the scale down operation of the replicaset                       |
| `ScaleUpReplicaSet`           | Specifies the scale up operation of the replicaset                         |
| `ScaleUpShardReplicas`        | Specifies the scale up operation of the shard replicas                     |
| `ScaleDownShardReplicas`      | Specifies the scale down operation of the shard replicas                   |
| `ScaleDownConfigServer`       | Specifies the scale down operation of the config server                    |
| `ScaleUpConfigServer`         | Specifies the scale up operation of the config server                      |
| `ScaleMongos`                 | Specifies the scale operation of the mongos nodes                          |
| `VolumeExpansion`             | Specifies the volume expansion operation of the database                   |
| `ReconfigureReplicaset`       | Specifies the reconfiguration of the replicaset nodes                      |
| `ReconfigureMongos`           | Specifies the reconfiguration of the mongos nodes                          |
| `ReconfigureShard`            | Specifies the reconfiguration of the shard nodes                           |
| `ReconfigureConfigServer`     | Specifies the reconfiguration of the config server nodes                   |

- The `status` field is a string, with possible values `True`, `False`, and `Unknown`.
  - `status` will be `True` if the current transition succeeded.
  - `status` will be `False` if the current transition failed.
  - `status` will be `Unknown` if the current transition was denied.
- The `message` field is a human-readable message indicating details about the condition.
- The `reason` field is a unique, one-word, CamelCase reason for the condition's last transition.
- The `lastTransitionTime` field provides a timestamp for when the operation last transitioned from one state to another.
- The `observedGeneration` shows the most recent condition transition generation observed by the controller.
diff --git a/docs/guides/rabbitmq/concepts/rabbitmq.md b/docs/guides/rabbitmq/concepts/rabbitmq.md
new file mode 100644
index 0000000000..0b138f9d37
--- /dev/null
+++ b/docs/guides/rabbitmq/concepts/rabbitmq.md
@@ -0,0 +1,639 @@
---
title: RabbitMQ CRD
menu:
  docs_{{ .version }}:
    identifier: mg-RabbitMQ-concepts
    name: RabbitMQ
    parent: mg-concepts-RabbitMQ
    weight: 10
menu_name: docs_{{ .version }}
section_menu_id: guides
---

> New to KubeDB? Please start [here](/docs/README.md).

# RabbitMQ

## What is RabbitMQ

`RabbitMQ` is a Kubernetes `Custom Resource Definition` (CRD).
It provides declarative configuration for [RabbitMQ](https://www.RabbitMQ.com/) in a Kubernetes native way. You only need to describe the desired database configuration in a RabbitMQ object, and the KubeDB operator will create Kubernetes objects in the desired state for you. + +## RabbitMQ Spec + +As with all other Kubernetes objects, a RabbitMQ needs `apiVersion`, `kind`, and `metadata` fields. It also needs a `.spec` section. Below is an example RabbitMQ object. + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: RabbitMQ +metadata: + name: mgo1 + namespace: demo +spec: + autoOps: + disabled: true + version: "4.4.26" + replicas: 3 + authSecret: + name: mgo1-auth + externallyManaged: false + replicaSet: + name: rs0 + shardTopology: + configServer: + podTemplate: {} + replicas: 3 + storage: + resources: + requests: + storage: 1Gi + storageClassName: standard + mongos: + podTemplate: {} + replicas: 2 + shard: + podTemplate: {} + replicas: 3 + shards: 3 + storage: + resources: + requests: + storage: 1Gi + storageClassName: standard + sslMode: requireSSL + tls: + issuerRef: + name: mongo-ca-issuer + kind: Issuer + apiGroup: "cert-manager.io" + certificates: + - alias: client + subject: + organizations: + - kubedb + emailAddresses: + - abc@appscode.com + - alias: server + subject: + organizations: + - kubedb + emailAddresses: + - abc@appscode.com + clusterAuthMode: x509 + storageType: "Durable" + storageEngine: wiredTiger + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + ephemeralStorage: + medium: "Memory" + sizeLimit: 500Mi + init: + script: + configMap: + name: mg-init-script + monitor: + agent: prometheus.io/operator + prometheus: + serviceMonitor: + labels: + app: kubedb + interval: 10s + configSecret: + name: mg-custom-config + podTemplate: + metadata: + annotations: + passMe: ToDatabasePod + labels: + thisLabel: willGoToPod + controller: + annotations: + passMe: ToStatefulSet + labels: + thisLabel: willGoToSts + spec: + serviceAccountName: my-service-account + schedulerName: my-scheduler + nodeSelector: + disktype: ssd + imagePullSecrets: + - name: myregistrykey + args: + - --maxConns=100 + env: + - name: MONGO_INITDB_DATABASE + value: myDB + resources: + requests: + memory: "64Mi" + cpu: "250m" + limits: + memory: "128Mi" + cpu: "500m" + serviceTemplates: + - alias: primary + spec: + type: NodePort + ports: + - name: primary + port: 27017 + nodePort: 300006 + terminationPolicy: Halt + halted: false + arbiter: + podTemplate: + spec: + resources: + requests: + cpu: "200m" + memory: "200Mi" + configSecret: + name: another-config + allowedSchemas: + namespaces: + from: Selector + selector: + matchExpressions: + - {key: kubernetes.io/metadata.name, operator: In, values: [dev]} + selector: + matchLabels: + "schema.kubedb.com": "mongo" + coordinator: + resources: + requests: + cpu: "300m" + memory: 500Mi + securityContext: + runAsUser: 1001 + healthChecker: + periodSeconds: 15 + timeoutSeconds: 10 + failureThreshold: 2 + disableWriteCheck: false +``` + +### spec.autoOps +AutoOps is an optional field to control the generation of versionUpdate & TLS-related recommendations. + +### spec.version + +`spec.version` is a required field specifying the name of the [RabbitMQVersion](/docs/guides/RabbitMQ/concepts/catalog.md) crd where the docker images are specified. 
Currently, when you install KubeDB, it creates the following `RabbitMQVersion` resources, + +- `3.4.17-v1`, `3.4.22-v1` +- `3.6.13-v1`, `4.4.26`, +- `4.0.3-v1`, `4.4.26`, `4.0.11-v1`, +- `4.1.4-v1`, `4.1.7-v3`, `4.4.26` +- `4.4.26`, `4.4.26` +- `5.0.2`, `5.0.3` +- `percona-3.6.18` +- `percona-4.0.10`, `percona-4.2.7`, `percona-4.4.10` + +### spec.replicas + +`spec.replicas` the number of members(primary & secondary) in RabbitMQ replicaset. + +If `spec.shardTopology` is set, then `spec.replicas` needs to be empty. Instead use `spec.shardTopology..replicas` + +If both `spec.replicaset` and `spec.shardTopology` is not set, then `spec.replicas` can be value `1`. + +KubeDB uses `PodDisruptionBudget` to ensure that majority of these replicas are available during [voluntary disruptions](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/#voluntary-and-involuntary-disruptions) so that quorum is maintained. + +### spec.authSecret + +`spec.authSecret` is an optional field that points to a Secret used to hold credentials for `RabbitMQ` superuser. If not set, KubeDB operator creates a new Secret `{RabbitMQ-object-name}-auth` for storing the password for `RabbitMQ` superuser for each RabbitMQ object. + +We can use this field in 3 mode. +1. Using an external secret. In this case, You need to create an auth secret first with required fields, then specify the secret name when creating the RabbitMQ object using `spec.authSecret.name` & set `spec.authSecret.externallyManaged` to true. +```yaml +authSecret: + name: + externallyManaged: true +``` + +2. Specifying the secret name only. In this case, You need to specify the secret name when creating the RabbitMQ object using `spec.authSecret.name`. `externallyManaged` is by default false. +```yaml +authSecret: + name: +``` + +3. Let KubeDB do everything for you. In this case, no work for you. + +AuthSecret contains a `user` key and a `password` key which contains the `username` and `password` respectively for `RabbitMQ` superuser. + +Example: + +```bash +$ kubectl create secret generic mgo1-auth -n demo \ +--from-literal=username=jhon-doe \ +--from-literal=password=6q8u_2jMOW-OOZXk +secret "mgo1-auth" created +``` + +```yaml +apiVersion: v1 +data: + password: NnE4dV8yak1PVy1PT1pYaw== + username: amhvbi1kb2U= +kind: Secret +metadata: + name: mgo1-auth + namespace: demo +type: Opaque +``` + +Secrets provided by users are not managed by KubeDB, and therefore, won't be modified or garbage collected by the KubeDB operator (version 0.13.0 and higher). + +### spec.replicaSet + +`spec.replicaSet` represents the configuration for replicaset. When `spec.replicaSet` is set, KubeDB will deploy a RabbitMQ replicaset where number of replicaset member is spec.replicas. + +- `name` denotes the name of RabbitMQ replicaset. +NB. If `spec.shardTopology` is set, then `spec.replicaset` needs to be empty. + +### spec.keyFileSecret +`keyFileSecret.name` denotes the name of the secret that contains the `key.txt`, which provides the security between replicaset members using internal authentication. See [Keyfile Authentication](https://docs.RabbitMQ.com/manual/tutorial/enforce-keyfile-access-control-in-existing-replica-set/) for more information. +It will make impact only if the ClusterAuthMode is `keyFile` or `sendKeyFile`. + +### spec.shardTopology + +`spec.shardTopology` represents the topology configuration for sharding. 
### spec.keyFileSecret

`keyFileSecret.name` denotes the name of the secret that contains the `key.txt`, which provides security between replicaset members using internal authentication. See [Keyfile Authentication](https://docs.RabbitMQ.com/manual/tutorial/enforce-keyfile-access-control-in-existing-replica-set/) for more information.
It takes effect only if the ClusterAuthMode is `keyFile` or `sendKeyFile`.

### spec.shardTopology

`spec.shardTopology` represents the topology configuration for sharding.

Available configurable fields:

- shard
- configServer
- mongos

When `spec.shardTopology` is set, the following fields need to be empty; otherwise the validating webhook will throw an error:

- `spec.replicas`
- `spec.podTemplate`
- `spec.configSecret`
- `spec.storage`
- `spec.ephemeralStorage`

KubeDB uses `PodDisruptionBudget` to ensure that the majority of the replicas of these shard components are available during [voluntary disruptions](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/#voluntary-and-involuntary-disruptions) so that quorum and data integrity are maintained.

#### spec.shardTopology.shard

`shard` represents the configuration for the Shard component of RabbitMQ.

Available configurable fields:

- `shards` represents the number of shards for a RabbitMQ deployment. Each shard is deployed as a [replicaset](/docs/guides/rabbitmq/clustering/replication_concept.md).
- `replicas` represents the number of replicas of each shard replicaset.
- `prefix` represents the prefix of each shard node.
- `configSecret` is an optional field to provide a custom configuration file for the shards (i.e. mongod.cnf). If specified, this file will be used as the configuration file; otherwise a default configuration file will be used. See [spec.configSecret](/docs/guides/rabbitmq/concepts/rabbitmq.md#specconfigsecret) below for details.
- `podTemplate` is an optional configuration for pods. See [spec.podTemplate](/docs/guides/rabbitmq/concepts/rabbitmq.md#specpodtemplate) below for details.
- `storage` to specify the pvc spec for each node of the shards. You can specify any StorageClass available in your cluster with appropriate resource requests. See [spec.storage](/docs/guides/rabbitmq/concepts/rabbitmq.md#specstorage) below for details.
- `ephemeralStorage` to specify the configuration of the ephemeral storage type, if you want to use volatile temporary storage attached to your instances which is only present during the running lifetime of the instance.

#### spec.shardTopology.configServer

`configServer` represents the configuration for the ConfigServer component of RabbitMQ.

Available configurable fields:

- `replicas` represents the number of replicas for the configServer replicaset. Here, the configServer is deployed as a replicaset of RabbitMQ.
- `prefix` represents the prefix of the configServer nodes.
- `configSecret` is an optional field to provide a custom configuration file for the config server (i.e. mongod.cnf). If specified, this file will be used as the configuration file; otherwise a default configuration file will be used. See [spec.configSecret](/docs/guides/rabbitmq/concepts/rabbitmq.md#specconfigsecret) below for details.
- `podTemplate` is an optional configuration for pods. See [spec.podTemplate](/docs/guides/rabbitmq/concepts/rabbitmq.md#specpodtemplate) below for details.
- `storage` to specify the pvc spec for each node of the configServer. You can specify any StorageClass available in your cluster with appropriate resource requests. See [spec.storage](/docs/guides/rabbitmq/concepts/rabbitmq.md#specstorage) below for details.
- `ephemeralStorage` to specify the configuration of the ephemeral storage type, if you want to use volatile temporary storage attached to your instances which is only present during the running lifetime of the instance.

#### spec.shardTopology.mongos

`mongos` represents the configuration for the Mongos component of RabbitMQ.

Available configurable fields:

- `replicas` represents the number of replicas of the `Mongos` instance. Here, Mongos is deployed as a stateless (Deployment) instance.
- `prefix` represents the prefix of the mongos nodes.
- `configSecret` is an optional field to provide a custom configuration file for mongos (i.e. mongod.cnf). If specified, this file will be used as the configuration file; otherwise a default configuration file will be used. See [spec.configSecret](/docs/guides/rabbitmq/concepts/rabbitmq.md#specconfigsecret) below for details.
- `podTemplate` is an optional configuration for pods. See [spec.podTemplate](/docs/guides/rabbitmq/concepts/rabbitmq.md#specpodtemplate) below for details.
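Condensed from the full example above, a sharded topology section looks like:

```yaml
spec:
  shardTopology:
    shard:
      shards: 3
      replicas: 3
      storage:
        resources:
          requests:
            storage: 1Gi
        storageClassName: standard
    configServer:
      replicas: 3
      storage:
        resources:
          requests:
            storage: 1Gi
        storageClassName: standard
    mongos:
      replicas: 2
```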
### spec.sslMode

Enables TLS/SSL or mixed TLS/SSL used for all network connections. The value of the [`sslMode`](https://docs.RabbitMQ.com/manual/reference/program/mongod/#cmdoption-mongod-sslmode) field can be one of the following:

| Value        | Description                                                                                                                     |
| :----------: | :------------------------------------------------------------------------------------------------------------------------------ |
| `disabled`   | The server does not use TLS/SSL.                                                                                                |
| `allowSSL`   | Connections between servers do not use TLS/SSL. For incoming connections, the server accepts both TLS/SSL and non-TLS/non-SSL.  |
| `preferSSL`  | Connections between servers use TLS/SSL. For incoming connections, the server accepts both TLS/SSL and non-TLS/non-SSL.         |
| `requireSSL` | The server uses and accepts only TLS/SSL encrypted connections.                                                                 |

### spec.tls

`spec.tls` specifies the TLS/SSL configurations for RabbitMQ. KubeDB uses the [cert-manager](https://cert-manager.io/) v1 api to provision and manage TLS certificates.

The following fields are configurable in the `spec.tls` section:

- `issuerRef` is a reference to the `Issuer` or `ClusterIssuer` CR of [cert-manager](https://cert-manager.io/docs/concepts/issuer/) that will be used by `KubeDB` to generate necessary certificates.

  - `apiGroup` is the group name of the resource being referenced. Currently, the only supported value is `cert-manager.io`.
  - `kind` is the type of resource being referenced. KubeDB supports both `Issuer` and `ClusterIssuer` as values for this field.
  - `name` is the name of the resource (`Issuer` or `ClusterIssuer`) being referenced.

- `certificates` (optional) are a list of certificates used to configure the server and/or client certificates. It has the following fields:
  - `alias` represents the identifier of the certificate. It has the following possible values:
    - `server` is used for server certificate identification.
    - `client` is used for client certificate identification.
    - `metrics-exporter` is used for metrics exporter certificate identification.
  - `secretName` (optional) specifies the k8s secret name that holds the certificates.
    > This field is optional. If the user does not specify this field, a default secret name will be created in the following format: `<database-name>-<alias>-cert`.

  - `subject` (optional) specifies an `X.509` distinguished name. It has the following possible fields:
    - `organizations` (optional) are the list of different organization names to be used on the Certificate.
    - `organizationalUnits` (optional) are the list of different organizational unit names to be used on the Certificate.
    - `countries` (optional) are the list of country names to be used on the Certificate.
    - `localities` (optional) are the list of locality names to be used on the Certificate.
    - `provinces` (optional) are the list of province names to be used on the Certificate.
    - `streetAddresses` (optional) are the list of street addresses to be used on the Certificate.
    - `postalCodes` (optional) are the list of postal codes to be used on the Certificate.
    - `serialNumber` (optional) is a serial number to be used on the Certificate.
    You can find more details [here](https://golang.org/pkg/crypto/x509/pkix/#Name).
  - `duration` (optional) is the period during which the certificate is valid.
  - `renewBefore` (optional) specifies how long before expiry a certificate should be renewed.
  - `dnsNames` (optional) is a list of subject alt names to be used in the Certificate.
  - `ipAddresses` (optional) is a list of IP addresses to be used in the Certificate.
  - `uris` (optional) is a list of URI Subject Alternative Names to be set in the Certificate.
  - `emailAddresses` (optional) is a list of email Subject Alternative Names to be set in the Certificate.
  - `privateKey` (optional) specifies options to control private keys used for the Certificate.
    - `encoding` (optional) is the private key cryptography standards (PKCS) encoding for this certificate's private key to be encoded in. If provided, allowed values are "pkcs1" and "pkcs8", standing for PKCS#1 and PKCS#8 respectively. It defaults to PKCS#1 if not specified.
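Putting these fields together, a TLS section based on the full example above might look like the following sketch (the issuer name and subject values are illustrative):

```yaml
spec:
  sslMode: requireSSL
  tls:
    issuerRef:
      apiGroup: cert-manager.io
      kind: Issuer
      name: mongo-ca-issuer
    certificates:
      - alias: server
        subject:
          organizations:
            - kubedb
        emailAddresses:
          - abc@appscode.com
```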
### spec.clusterAuthMode

The authentication mode used for cluster authentication. This option can have one of the following values:

| Value         | Description                                                                                                                        |
| :-----------: | :----------------------------------------------------------------------------------------------------------------------------------|
| `keyFile`     | Use a keyfile for authentication. Accept only keyfiles.                                                                           |
| `sendKeyFile` | For rolling update purposes. Send a keyfile for authentication but can accept both keyfiles and x.509 certificates.               |
| `sendX509`    | For rolling update purposes. Send the x.509 certificate for authentication but can accept both keyfiles and x.509 certificates.   |
| `x509`        | Recommended. Send the x.509 certificate for authentication and accept only x.509 certificates.                                    |

### spec.storageType

`spec.storageType` is an optional field that specifies the type of storage to use for the database. It can be either `Durable` or `Ephemeral`. The default value of this field is `Durable`. If `Ephemeral` is used, then KubeDB will create the RabbitMQ database using an [emptyDir](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir) volume. In this case, you don't have to specify the `spec.storage` field; specify the `spec.ephemeralStorage` spec instead.

### spec.storageEngine

`spec.storageEngine` is an optional field that specifies the type of storage engine to be used by RabbitMQ. There are two types of storage engines, `wiredTiger` and `inMemory`. The default storage engine is `wiredTiger`. The `inMemory` storage engine is only supported by the percona variant of RabbitMQ, i.e. the version that has the `percona-` prefix in the RabbitMQ-version name.

### spec.storage

Since 0.9.0-rc.0, if you set `spec.storageType` to `Durable`, then `spec.storage` is a required field that specifies the StorageClass of PVCs dynamically allocated to store data for the database. This storage spec will be passed to the StatefulSet created by the KubeDB operator to run database pods. You can specify any StorageClass available in your cluster with appropriate resource requests.

- `spec.storage.storageClassName` is the name of the StorageClass used to provision PVCs. PVCs don't necessarily have to request a class. A PVC with its storageClassName set equal to "" is always interpreted to be requesting a PV with no class, so it can only be bound to PVs with no class (no annotation or one set equal to ""). A PVC with no storageClassName is not quite the same and is treated differently by the cluster depending on whether the DefaultStorageClass admission plugin is turned on.
- `spec.storage.accessModes` uses the same conventions as Kubernetes PVCs when requesting storage with specific access modes.
- `spec.storage.resources` can be used to request specific quantities of storage. This follows the same resource model used by PVCs.

To learn how to configure `spec.storage`, please visit the links below:

- https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims

NB. If `spec.shardTopology` is set, then `spec.storage` needs to be empty. Instead use `spec.shardTopology.<component>.storage`.
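A typical durable storage section, mirroring the full example above:

```yaml
spec:
  storageType: "Durable"
  storage:
    storageClassName: "standard"
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi
```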
### spec.ephemeralStorage

Use this field to specify the configuration of the ephemeral storage type, if you want to use volatile temporary storage attached to your instances which is only present during the running lifetime of the instance.
- `spec.ephemeralStorage.medium` refers to the name of the storage medium.
- `spec.ephemeralStorage.sizeLimit` specifies the sizeLimit of the emptyDir volume.

For more details on these two fields, see the [EmptyDir struct](https://github.com/kubernetes/api/blob/ed22bb34e3bbae9e2fafba51d66ee3f68ee304b2/core/v1/types.go#L700-L715).

### spec.init

`spec.init` is an optional section that can be used to initialize a newly created RabbitMQ database. RabbitMQ databases can be initialized by a script.

`Initialize from Snapshot` is still not supported.

#### Initialize via Script

To initialize a RabbitMQ database using a script (shell script, js script), set the `spec.init.script` section when creating a RabbitMQ object. It will execute files alphabetically with extensions `.sh` and `.js` that are found in the repository. The script must have the following information:

- [VolumeSource](https://kubernetes.io/docs/concepts/storage/volumes/#types-of-volumes): where your script is loaded from.

Below is an example showing how a script from a configMap can be used to initialize a RabbitMQ database.

```yaml
apiVersion: kubedb.com/v1alpha2
kind: RabbitMQ
metadata:
  name: mgo1
  namespace: demo
spec:
  version: "3.13.2"
  init:
    script:
      configMap:
        name: rabbitmq-init-script
```

In the above example, the KubeDB operator will launch a Job to execute all js scripts of `rabbitmq-init-script` in alphabetical order once the StatefulSet pods are running. For a more detailed tutorial on how to initialize from a script, please visit [here](/docs/guides/rabbitmq/initialization/using-script.md).

These are the fields of `spec.init` which you can make use of:
- `spec.init.initialized` indicates whether this database has been initialized or not; `false` by default.
- `spec.init.script.scriptPath` to specify where all the init scripts should be mounted.
- `spec.init.script.<volumeSource>` as described in the above example. To see all the volumeSource options, go to [VolumeSource](https://github.com/kubernetes/api/blob/ed22bb34e3bbae9e2fafba51d66ee3f68ee304b2/core/v1/types.go#L49).
- `spec.init.waitForInitialRestore` to tell the operator whether it should wait for the initial restore process or not.

### spec.monitor

RabbitMQ managed by KubeDB can be monitored with builtin Prometheus and Prometheus operator out-of-the-box.
+
+### spec.monitor
+
+RabbitMQ managed by KubeDB can be monitored with builtin-Prometheus and Prometheus operator out-of-the-box. To learn more,
+
+- [Monitor RabbitMQ with builtin Prometheus](/docs/guides/RabbitMQ/monitoring/using-builtin-prometheus.md)
+- [Monitor RabbitMQ with Prometheus operator](/docs/guides/RabbitMQ/monitoring/using-prometheus-operator.md)
+
+### spec.configSecret
+
+`spec.configSecret` is an optional field that allows users to provide custom configuration for RabbitMQ. You can provide the custom configuration in a secret, then specify the secret name in `spec.configSecret.name`.
+
+> Please note that the secret key needs to be `mongod.conf`.
+
+To learn more about how to use a custom configuration file, see [here](/docs/guides/RabbitMQ/configuration/using-config-file.md).
+
+NB. If `spec.shardTopology` is set, then `spec.configSecret` needs to be empty. Instead use `spec.shardTopology..configSecret`
+
+### spec.podTemplate
+
+KubeDB allows providing a template for the database pods through `spec.podTemplate`. The KubeDB operator will pass the information provided in `spec.podTemplate` to the StatefulSet created for the RabbitMQ database.
+
+KubeDB accepts the following fields to set in `spec.podTemplate`:
+
+- metadata:
+  - annotations (pod's annotation)
+  - labels (pod's labels)
+- controller:
+  - annotations (statefulset's annotation)
+  - labels (statefulset's labels)
+- spec:
+  - args
+  - env
+  - resources
+  - initContainers
+  - imagePullSecrets
+  - nodeSelector
+  - affinity
+  - serviceAccountName
+  - schedulerName
+  - tolerations
+  - priorityClassName
+  - priority
+  - securityContext
+  - livenessProbe
+  - readinessProbe
+  - lifecycle
+
+You can check out the full list [here](https://github.com/kmodules/offshoot-api/blob/ea366935d5bad69d7643906c7556923271592513/api/v1/types.go#L42-L259). Uses of some fields of `spec.podTemplate` are described below.
+
+NB. If `spec.shardTopology` is set, then `spec.podTemplate` needs to be empty. Instead use `spec.shardTopology..podTemplate`
+
+#### spec.podTemplate.spec.args
+
+`spec.podTemplate.spec.args` is an optional field. This can be used to provide additional arguments to the database installation. To learn about available args of `mongod`, visit [here](https://docs.RabbitMQ.com/manual/reference/program/mongod/).
+
+#### spec.podTemplate.spec.env
+
+`spec.podTemplate.spec.env` is an optional field that specifies the environment variables to pass to the RabbitMQ docker image. To know about supported environment variables, please visit [here](https://hub.docker.com/r/_/mongo/).
+
+Note that KubeDB does not allow the `MONGO_INITDB_ROOT_USERNAME` and `MONGO_INITDB_ROOT_PASSWORD` environment variables to be set in `spec.podTemplate.spec.env`. If you want to use a custom superuser and password, please use `spec.authSecret` as described earlier.
+
+If you try to set the `MONGO_INITDB_ROOT_USERNAME` or `MONGO_INITDB_ROOT_PASSWORD` environment variable in the RabbitMQ crd, the KubeDB operator will reject the request with the following error,
+
+```ini
+Error from server (Forbidden): error when creating "./RabbitMQ.yaml": admission webhook "RabbitMQ.validators.kubedb.com" denied the request: environment variable MONGO_INITDB_ROOT_USERNAME is forbidden to use in RabbitMQ spec
+```
+
+Also, note that KubeDB does not allow updating the environment variables, as updating them does not have any effect once the database is created. If you try to update environment variables, the KubeDB operator will reject the request with the following error,
+
+```ini
+Error from server (BadRequest): error when applying patch:
+...
+for: "./RabbitMQ.yaml": admission webhook "RabbitMQ.validators.kubedb.com" denied the request: precondition failed for:
+...At least one of the following was changed:
+    apiVersion
+    kind
+    name
+    namespace
+    spec.ReplicaSet
+    spec.authSecret
+    spec.init
+    spec.storageType
+    spec.storage
+    spec.podTemplate.spec.nodeSelector
+    spec.podTemplate.spec.env
+```
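+As a quick illustration of the fields above, a `spec.podTemplate` section that sets an allowed environment variable along with resource requests might look like this sketch. The variable name and all values are assumptions for illustration, not KubeDB-defined settings:
+
+```yaml
+spec:
+  podTemplate:
+    spec:
+      env:
+        - name: LOG_LEVEL   # hypothetical variable, shown only as an allowed (non-forbidden) example
+          value: "info"
+      resources:
+        requests:
+          cpu: 250m
+          memory: 512Mi
+```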
+
+#### spec.podTemplate.spec.imagePullSecret
+
+`KubeDB` provides the flexibility of deploying a RabbitMQ database from a private Docker registry. `spec.podTemplate.spec.imagePullSecrets` is an optional field that points to secrets to be used for pulling docker images if you are using a private docker registry. To learn how to deploy RabbitMQ from a private registry, please visit [here](/docs/guides/RabbitMQ/private-registry/using-private-registry.md).
+
+#### spec.podTemplate.spec.nodeSelector
+
+`spec.podTemplate.spec.nodeSelector` is an optional field that specifies a map of key-value pairs. For the pod to be eligible to run on a node, the node must have each of the indicated key-value pairs as labels (it can have additional labels as well). To learn more, see [here](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector).
+
+#### spec.podTemplate.spec.serviceAccountName
+
+`serviceAccountName` is an optional field supported by KubeDB Operator (version 0.13.0 and higher) that can be used to specify a custom service account to fine-tune role based access control.
+
+If this field is left empty, the KubeDB operator will create a service account with a name matching the RabbitMQ crd name. A Role and RoleBinding that provide the necessary access permissions will also be generated automatically for this service account.
+
+If a service account name is given, but there's no existing service account by that name, the KubeDB operator will create one, and a Role and RoleBinding that provide the necessary access permissions will also be generated for this service account.
+
+If a service account name is given, and there's an existing service account by that name, the KubeDB operator will use that existing service account. Since this service account is not managed by KubeDB, users are responsible for providing the necessary access permissions manually. Follow the guide [here](/docs/guides/RabbitMQ/custom-rbac/using-custom-rbac.md) to grant necessary permissions in this scenario.
+
+#### spec.podTemplate.spec.resources
+
+`spec.podTemplate.spec.resources` is an optional field. This can be used to request compute resources required by the database pods. To learn more, visit [here](http://kubernetes.io/docs/user-guide/compute-resources/).
+
+### spec.serviceTemplates
+
+You can also provide a template for the services created by the KubeDB operator for the RabbitMQ database through `spec.serviceTemplates`. This will allow you to set the type and other properties of the services.
+
+KubeDB allows the following fields to set in `spec.serviceTemplates`:
+- `alias` represents the identifier of the service. It has the following possible values:
+  - `primary` is used for the primary service identification.
+  - `standby` is used for the secondary service identification.
+  - `stats` is used for the exporter service identification.
+- metadata:
+  - labels
+  - annotations
+- spec:
+  - type
+  - ports
+  - clusterIP
+  - externalIPs
+  - loadBalancerIP
+  - loadBalancerSourceRanges
+  - externalTrafficPolicy
+  - healthCheckNodePort
+  - sessionAffinityConfig
+
+See [here](https://github.com/kmodules/offshoot-api/blob/kubernetes-1.21.1/api/v1/types.go#L237) to understand these fields in detail.
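+As an illustration, a `spec.serviceTemplates` section that exposes the primary service as a LoadBalancer might look like the sketch below. The alias value comes from the list above; the service type, annotation, and port are assumptions for illustration:
+
+```yaml
+spec:
+  serviceTemplates:
+    - alias: primary
+      metadata:
+        annotations:
+          passMe: ToService   # hypothetical annotation
+      spec:
+        type: LoadBalancer
+        ports:
+          - name: http
+            port: 27017       # port used elsewhere in these examples
+```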
+
+### spec.terminationPolicy
+
+`terminationPolicy` gives flexibility whether to `nullify` (reject) the delete operation of the `RabbitMQ` crd or which resources KubeDB should keep or delete when you delete the `RabbitMQ` crd. KubeDB provides the following four termination policies:
+
+- DoNotTerminate
+- Halt
+- Delete (`Default`)
+- WipeOut
+
+When `terminationPolicy` is `DoNotTerminate`, KubeDB takes advantage of the `ValidationWebhook` feature in Kubernetes 1.9.0 or later clusters to implement the `DoNotTerminate` feature. If the admission webhook is enabled, `DoNotTerminate` prevents users from deleting the database as long as `spec.terminationPolicy` is set to `DoNotTerminate`.
+
+The following table shows what KubeDB does when you delete the RabbitMQ crd for different termination policies,
+
+| Behavior                            | DoNotTerminate |   Halt   |  Delete  | WipeOut  |
+| ----------------------------------- | :------------: | :------: | :------: | :------: |
+| 1. Block Delete operation           |       ✓        |    ✗     |    ✗     |    ✗     |
+| 2. Delete StatefulSet               |       ✗        |    ✓     |    ✓     |    ✓     |
+| 3. Delete Services                  |       ✗        |    ✓     |    ✓     |    ✓     |
+| 4. Delete PVCs                      |       ✗        |    ✗     |    ✓     |    ✓     |
+| 5. Delete Secrets                   |       ✗        |    ✗     |    ✗     |    ✓     |
+| 6. Delete Snapshots                 |       ✗        |    ✗     |    ✗     |    ✓     |
+| 7. Delete Snapshot data from bucket |       ✗        |    ✗     |    ✗     |    ✓     |
+
+If you don't specify `spec.terminationPolicy`, KubeDB uses the `Delete` termination policy by default.
+
+### spec.halted
+Indicates that the database is halted and all offshoot Kubernetes resources except PVCs are deleted.
+
+### spec.arbiter
+If `spec.arbiter` is not null, there will be one arbiter pod in each replica set, including shards. It has two fields.
+- `spec.arbiter.podTemplate` defines the arbiter pod's template. See the [spec.podTemplate](/docs/guides/RabbitMQ/configuration/using-config-file.md) part for more details.
+- `spec.arbiter.configSecret` is an optional field that allows users to provide custom configurations for RabbitMQ arbiters. You just need to refer to the configuration secret in the `spec.arbiter.configSecret.name` field.
+> Please note that the secret key needs to be `mongod.conf`.
+
+N.B. If `spec.replicaset` & `spec.shardTopology` are both empty, `spec.arbiter` has to be empty too.
+
+### spec.allowedSchemas
+It defines which consumers may refer to a database instance. We implemented a double opt-in feature between a database instance and a schema-manager using this field.
+- `spec.allowedSchemas.namespace.from` indicates how you want to filter the namespaces from which a schema-manager will be able to communicate with this db instance.
+Possible values are: i) `All` to allow all namespaces, ii) `Same` to allow only schema-managers deployed in the same namespace as the RabbitMQ instance & iii) `Selector` to select some namespaces through labels.
+- `spec.allowedSchemas.namespace.selector`. You need to set this field only if `spec.allowedSchemas.namespace.from` is set to `Selector`. Here you will give the labels of the namespaces to allow.
+- `spec.allowedSchemas.selector` denotes the labels of the schema-manager instances that you want to allow to use this database, as shown in the sketch below.
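+A minimal sketch of this field, following the structure described above, might look like the following. The label keys and values are assumptions for illustration:
+
+```yaml
+spec:
+  allowedSchemas:
+    namespace:
+      from: Selector
+      selector:
+        matchLabels:
+          kubernetes.io/metadata.name: dev   # assumed namespace label
+    selector:
+      matchLabels:
+        app: schema-manager                  # assumed labels on the allowed schema-manager instances
+```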
+
+### spec.coordinator
+We use a dedicated container, named `replication-mode-detector`, to continuously detect the primary pod and label it as primary. By specifying `spec.coordinator.resources` & `spec.coordinator.securityContext`, you can set the resources and securityContext of that mode-detector container.
+
+### spec.healthChecker
+It defines the attributes for the health checker.
+- `spec.healthChecker.periodSeconds` specifies how often to perform the health check.
+- `spec.healthChecker.timeoutSeconds` specifies the number of seconds after which the probe times out.
+- `spec.healthChecker.failureThreshold` specifies the minimum consecutive failures for the healthChecker to be considered failed.
+- `spec.healthChecker.disableWriteCheck` specifies whether to disable the writeCheck or not.
+
+Know details about KubeDB Health checking from this [blog post](https://appscode.com/blog/post/kubedb-health-checker/).
+
+## Next Steps
+
+- Learn how to use KubeDB to run a RabbitMQ database [here](/docs/guides/RabbitMQ/README.md).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).
diff --git a/docs/guides/rabbitmq/monitoring/_index.md b/docs/guides/rabbitmq/monitoring/_index.md
new file mode 100755
index 0000000000..990a7b893d
--- /dev/null
+++ b/docs/guides/rabbitmq/monitoring/_index.md
@@ -0,0 +1,10 @@
+---
+title: Monitoring RabbitMQ
+menu:
+  docs_{{ .version }}:
+    identifier: mg-monitoring-RabbitMQ
+    name: Monitoring
+    parent: mg-RabbitMQ-guides
+    weight: 50
+menu_name: docs_{{ .version }}
+---
diff --git a/docs/guides/rabbitmq/monitoring/overview.md b/docs/guides/rabbitmq/monitoring/overview.md
new file mode 100644
index 0000000000..4fe47e119b
--- /dev/null
+++ b/docs/guides/rabbitmq/monitoring/overview.md
@@ -0,0 +1,105 @@
+---
+title: RabbitMQ Monitoring Overview
+description: RabbitMQ Monitoring Overview
+menu:
+  docs_{{ .version }}:
+    identifier: mg-monitoring-overview
+    name: Overview
+    parent: mg-monitoring-RabbitMQ
+    weight: 10
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Monitoring RabbitMQ with KubeDB
+
+KubeDB has native support for monitoring via [Prometheus](https://prometheus.io/). You can use the builtin [Prometheus](https://github.com/prometheus/prometheus) scraper or the [Prometheus operator](https://github.com/prometheus-operator/prometheus-operator) to monitor KubeDB managed databases. This tutorial will show you how database monitoring works with KubeDB and how to configure the database crd to enable monitoring.
+
+## Overview
+
+KubeDB uses Prometheus [exporter](https://prometheus.io/docs/instrumenting/exporters/#databases) images to export Prometheus metrics for respective databases. The following diagram shows the logical flow of database monitoring with KubeDB.
+
+Fig: Database Monitoring Flow
+
+When a user creates a database crd with the `spec.monitor` section configured, the KubeDB operator provisions the respective database and injects an exporter image as a sidecar to the database pod. It also creates a dedicated stats service with the name `{database-crd-name}-stats` for monitoring. The Prometheus server can scrape metrics using this stats service.
+
+## Configure Monitoring
+
+In order to enable monitoring for a database, you have to configure the `spec.monitor` section. KubeDB provides the following options to configure the `spec.monitor` section:
+
+| Field                                              | Type       | Uses                                                                                                                                     |
+| -------------------------------------------------- | ---------- | ---------------------------------------------------------------------------------------------------------------------------------------- |
+| `spec.monitor.agent`                               | `Required` | Type of the monitoring agent that will be used to monitor this database. It can be `prometheus.io/builtin` or `prometheus.io/operator`. |
+| `spec.monitor.prometheus.exporter.port`            | `Optional` | Port number where the exporter sidecar will serve metrics.                                                                               |
+| `spec.monitor.prometheus.exporter.args`            | `Optional` | Arguments to pass to the exporter sidecar.                                                                                               |
+| `spec.monitor.prometheus.exporter.env`             | `Optional` | List of environment variables to set in the exporter sidecar container.                                                                  |
+| `spec.monitor.prometheus.exporter.resources`       | `Optional` | Resources required by the exporter sidecar container.                                                                                    |
+| `spec.monitor.prometheus.exporter.securityContext` | `Optional` | Security options the exporter should run with.                                                                                           |
+| `spec.monitor.prometheus.serviceMonitor.labels`    | `Optional` | Labels for the `ServiceMonitor` crd.                                                                                                     |
+| `spec.monitor.prometheus.serviceMonitor.interval`  | `Optional` | Interval at which metrics should be scraped.                                                                                             |
+
+## Sample Configuration
+
+A sample YAML for a RabbitMQ crd with the `spec.monitor` section configured to enable monitoring with [Prometheus operator](https://github.com/prometheus-operator/prometheus-operator) is shown below.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: RabbitMQ
+metadata:
+  name: sample-mongo
+  namespace: databases
+spec:
+  version: "4.4.26"
+  terminationPolicy: WipeOut
+  configSecret:
+    name: config
+  storageType: Durable
+  storage:
+    storageClassName: default
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 5Gi
+  monitor:
+    agent: prometheus.io/operator
+    prometheus:
+      serviceMonitor:
+        labels:
+          release: prometheus
+      exporter:
+        args:
+          - --collect.database
+        env:
+          - name: ENV_VARIABLE
+            valueFrom:
+              secretKeyRef:
+                name: env_name
+                key: env_value
+        resources:
+          requests:
+            memory: 512Mi
+            cpu: 200m
+          limits:
+            memory: 512Mi
+            cpu: 250m
+        securityContext:
+          runAsUser: 2000
+          allowPrivilegeEscalation: false
+```
+
+Here, we have specified that we are going to monitor this server using the Prometheus operator through `spec.monitor.agent: prometheus.io/operator`. KubeDB will create a `ServiceMonitor` crd in the `databases` namespace and this `ServiceMonitor` will have the `release: prometheus` label.
+
+One thing to note: we internally use the `--collect-all` arg if the RabbitMQ exporter version is >= v0.31.0. You can check the exporter version by getting the mgversion object, like this:
+`kubectl get mgversion -o=jsonpath='{.spec.exporter.image}' 4.4.26`
+In that case, specifying args to collect something (as we used `--collect.database` above) will not have any effect.
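+For comparison, a minimal sketch of the same section using the builtin Prometheus scraper would only need the agent type; the exporter port shown here is an assumption for illustration (it matches the stats port used elsewhere in these docs):
+
+```yaml
+monitor:
+  agent: prometheus.io/builtin
+  prometheus:
+    exporter:
+      port: 56790   # assumed; the default stats port used in these docs
+```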
+ +## Next Steps + +- Learn how to monitor RabbitMQ database with KubeDB using [builtin-Prometheus](/docs/guides/RabbitMQ/monitoring/using-builtin-prometheus.md) +- Learn how to monitor RabbitMQ database with KubeDB using [Prometheus operator](/docs/guides/RabbitMQ/monitoring/using-prometheus-operator.md). + diff --git a/docs/guides/rabbitmq/monitoring/using-builtin-prometheus.md b/docs/guides/rabbitmq/monitoring/using-builtin-prometheus.md new file mode 100644 index 0000000000..d9e48c54d9 --- /dev/null +++ b/docs/guides/rabbitmq/monitoring/using-builtin-prometheus.md @@ -0,0 +1,359 @@ +--- +title: Monitor RabbitMQ using Builtin Prometheus Discovery +menu: + docs_{{ .version }}: + identifier: mg-using-builtin-prometheus-monitoring + name: Builtin Prometheus + parent: mg-monitoring-RabbitMQ + weight: 20 +menu_name: docs_{{ .version }} +section_menu_id: guides +--- + +> New to KubeDB? Please start [here](/docs/README.md). + +# Monitoring RabbitMQ with builtin Prometheus + +This tutorial will show you how to monitor RabbitMQ database using builtin [Prometheus](https://github.com/prometheus/prometheus) scraper. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +- Install KubeDB operator in your cluster following the steps [here](/docs/setup/README.md). + +- If you are not familiar with how to configure Prometheus to scrape metrics from various Kubernetes resources, please read the tutorial from [here](https://github.com/appscode/third-party-tools/tree/master/monitoring/prometheus/builtin). + +- To learn how Prometheus monitoring works with KubeDB in general, please visit [here](/docs/guides/RabbitMQ/monitoring/overview.md). + +- To keep Prometheus resources isolated, we are going to use a separate namespace called `monitoring` to deploy respective monitoring resources. We are going to deploy database in `demo` namespace. + + ```bash + $ kubectl create ns monitoring + namespace/monitoring created + + $ kubectl create ns demo + namespace/demo created + ``` + +> Note: YAML files used in this tutorial are stored in [docs/examples/RabbitMQ](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/RabbitMQ) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs). + +## Deploy RabbitMQ with Monitoring Enabled + +At first, let's deploy an RabbitMQ database with monitoring enabled. Below is the RabbitMQ object that we are going to create. + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: RabbitMQ +metadata: + name: builtin-prom-mgo + namespace: demo +spec: + version: "4.4.26" + terminationPolicy: WipeOut + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + monitor: + agent: prometheus.io/builtin +``` + +Here, + +- `spec.monitor.agent: prometheus.io/builtin` specifies that we are going to monitor this server using builtin Prometheus scraper. + +Let's create the RabbitMQ crd we have shown above. + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/RabbitMQ/monitoring/builtin-prom-mgo.yaml +RabbitMQ.kubedb.com/builtin-prom-mgo created +``` + +Now, wait for the database to go into `Running` state. 
+ +```bash +$ kubectl get mg -n demo builtin-prom-mgo +NAME VERSION STATUS AGE +builtin-prom-mgo 4.4.26 Ready 2m34s +``` + +KubeDB will create a separate stats service with name `{RabbitMQ crd name}-stats` for monitoring purpose. + +```bash +$ kubectl get svc -n demo --selector="app.kubernetes.io/instance=builtin-prom-mgo" +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +builtin-prom-mgo ClusterIP 10.99.28.40 27017/TCP 55s +builtin-prom-mgo-pods ClusterIP None 27017/TCP 55s +builtin-prom-mgo-stats ClusterIP 10.98.202.26 56790/TCP 36s +``` + +Here, `builtin-prom-mgo-stats` service has been created for monitoring purpose. Let's describe the service. + +```bash +$ kubectl describe svc -n demo builtin-prom-mgo-stats +Name: builtin-prom-mgo-stats +Namespace: demo +Labels: app.kubernetes.io/name=RabbitMQs.kubedb.com + app.kubernetes.io/instance=builtin-prom-mgo +Annotations: monitoring.appscode.com/agent: prometheus.io/builtin + prometheus.io/path: /metrics + prometheus.io/port: 56790 + prometheus.io/scrape: true +Selector: app.kubernetes.io/name=RabbitMQs.kubedb.com,app.kubernetes.io/instance=builtin-prom-mgo +Type: ClusterIP +IP: 10.98.202.26 +Port: prom-http 56790/TCP +TargetPort: prom-http/TCP +Endpoints: 172.17.0.7:56790 +Session Affinity: None +Events: +``` + +You can see that the service contains following annotations. + +```bash +prometheus.io/path: /metrics +prometheus.io/port: 56790 +prometheus.io/scrape: true +``` + +The Prometheus server will discover the service endpoint using these specifications and will scrape metrics from the exporter. + +## Configure Prometheus Server + +Now, we have to configure a Prometheus scraping job to scrape the metrics using this service. We are going to configure scraping job similar to this [kubernetes-service-endpoints](https://github.com/appscode/third-party-tools/tree/master/monitoring/prometheus/builtin#kubernetes-service-endpoints) job that scrapes metrics from endpoints of a service. + +Let's configure a Prometheus scraping job to collect metrics from this service. + +```yaml +- job_name: 'kubedb-databases' + honor_labels: true + scheme: http + kubernetes_sd_configs: + - role: endpoints + # by default Prometheus server select all Kubernetes services as possible target. + # relabel_config is used to filter only desired endpoints + relabel_configs: + # keep only those services that has "prometheus.io/scrape","prometheus.io/path" and "prometheus.io/port" anootations + - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape, __meta_kubernetes_service_annotation_prometheus_io_port] + separator: ; + regex: true;(.*) + action: keep + # currently KubeDB supported databases uses only "http" scheme to export metrics. so, drop any service that uses "https" scheme. + - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme] + action: drop + regex: https + # only keep the stats services created by KubeDB for monitoring purpose which has "-stats" suffix + - source_labels: [__meta_kubernetes_service_name] + separator: ; + regex: (.*-stats) + action: keep + # service created by KubeDB will have "app.kubernetes.io/name" and "app.kubernetes.io/instance" annotations. keep only those services that have these annotations. 
+ - source_labels: [__meta_kubernetes_service_label_app_kubernetes_io_name] + separator: ; + regex: (.*) + action: keep + # read the metric path from "prometheus.io/path: " annotation + - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path] + action: replace + target_label: __metrics_path__ + regex: (.+) + # read the port from "prometheus.io/port: " annotation and update scraping address accordingly + - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port] + action: replace + target_label: __address__ + regex: ([^:]+)(?::\d+)?;(\d+) + replacement: $1:$2 + # add service namespace as label to the scraped metrics + - source_labels: [__meta_kubernetes_namespace] + separator: ; + regex: (.*) + target_label: namespace + replacement: $1 + action: replace + # add service name as a label to the scraped metrics + - source_labels: [__meta_kubernetes_service_name] + separator: ; + regex: (.*) + target_label: service + replacement: $1 + action: replace + # add stats service's labels to the scraped metrics + - action: labelmap + regex: __meta_kubernetes_service_label_(.+) +``` + +### Configure Existing Prometheus Server + +If you already have a Prometheus server running, you have to add above scraping job in the `ConfigMap` used to configure the Prometheus server. Then, you have to restart it for the updated configuration to take effect. + +>If you don't use a persistent volume for Prometheus storage, you will lose your previously scraped data on restart. + +### Deploy New Prometheus Server + +If you don't have any existing Prometheus server running, you have to deploy one. In this section, we are going to deploy a Prometheus server in `monitoring` namespace to collect metrics using this stats service. + +**Create ConfigMap:** + +At first, create a ConfigMap with the scraping configuration. Bellow, the YAML of ConfigMap that we are going to create in this tutorial. + +```yaml +apiVersion: v1 +kind: ConfigMap +metadata: + name: prometheus-config + labels: + app: prometheus-demo + namespace: monitoring +data: + prometheus.yml: |- + global: + scrape_interval: 5s + evaluation_interval: 5s + scrape_configs: + - job_name: 'kubedb-databases' + honor_labels: true + scheme: http + kubernetes_sd_configs: + - role: endpoints + # by default Prometheus server select all Kubernetes services as possible target. + # relabel_config is used to filter only desired endpoints + relabel_configs: + # keep only those services that has "prometheus.io/scrape","prometheus.io/path" and "prometheus.io/port" anootations + - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape, __meta_kubernetes_service_annotation_prometheus_io_port] + separator: ; + regex: true;(.*) + action: keep + # currently KubeDB supported databases uses only "http" scheme to export metrics. so, drop any service that uses "https" scheme. + - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme] + action: drop + regex: https + # only keep the stats services created by KubeDB for monitoring purpose which has "-stats" suffix + - source_labels: [__meta_kubernetes_service_name] + separator: ; + regex: (.*-stats) + action: keep + # service created by KubeDB will have "app.kubernetes.io/name" and "app.kubernetes.io/instance" annotations. keep only those services that have these annotations. 
+ - source_labels: [__meta_kubernetes_service_label_app_kubernetes_io_name] + separator: ; + regex: (.*) + action: keep + # read the metric path from "prometheus.io/path: " annotation + - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path] + action: replace + target_label: __metrics_path__ + regex: (.+) + # read the port from "prometheus.io/port: " annotation and update scraping address accordingly + - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port] + action: replace + target_label: __address__ + regex: ([^:]+)(?::\d+)?;(\d+) + replacement: $1:$2 + # add service namespace as label to the scraped metrics + - source_labels: [__meta_kubernetes_namespace] + separator: ; + regex: (.*) + target_label: namespace + replacement: $1 + action: replace + # add service name as a label to the scraped metrics + - source_labels: [__meta_kubernetes_service_name] + separator: ; + regex: (.*) + target_label: service + replacement: $1 + action: replace + # add stats service's labels to the scraped metrics + - action: labelmap + regex: __meta_kubernetes_service_label_(.+) +``` + +Let's create above `ConfigMap`, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/monitoring/builtin-prometheus/prom-config.yaml +configmap/prometheus-config created +``` + +**Create RBAC:** + +If you are using an RBAC enabled cluster, you have to give necessary RBAC permissions for Prometheus. Let's create necessary RBAC stuffs for Prometheus, + +```bash +$ kubectl apply -f https://github.com/appscode/third-party-tools/raw/master/monitoring/prometheus/builtin/artifacts/rbac.yaml +clusterrole.rbac.authorization.k8s.io/prometheus created +serviceaccount/prometheus created +clusterrolebinding.rbac.authorization.k8s.io/prometheus created +``` + +>YAML for the RBAC resources created above can be found [here](https://github.com/appscode/third-party-tools/blob/master/monitoring/prometheus/builtin/artifacts/rbac.yaml). + +**Deploy Prometheus:** + +Now, we are ready to deploy Prometheus server. We are going to use following [deployment](https://github.com/appscode/third-party-tools/blob/master/monitoring/prometheus/builtin/artifacts/deployment.yaml) to deploy Prometheus server. + +Let's deploy the Prometheus server. + +```bash +$ kubectl apply -f https://github.com/appscode/third-party-tools/raw/master/monitoring/prometheus/builtin/artifacts/deployment.yaml +deployment.apps/prometheus created +``` + +### Verify Monitoring Metrics + +Prometheus server is listening to port `9090`. We are going to use [port forwarding](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) to access Prometheus dashboard. + +At first, let's check if the Prometheus pod is in `Running` state. + +```bash +$ kubectl get pod -n monitoring -l=app=prometheus +NAME READY STATUS RESTARTS AGE +prometheus-7bd56c6865-8dlpv 1/1 Running 0 28s +``` + +Now, run following command on a separate terminal to forward 9090 port of `prometheus-7bd56c6865-8dlpv` pod, + +```bash +$ kubectl port-forward -n monitoring prometheus-7bd56c6865-8dlpv 9090 +Forwarding from 127.0.0.1:9090 -> 9090 +Forwarding from [::1]:9090 -> 9090 +``` + +Now, we can access the dashboard at `localhost:9090`. Open [http://localhost:9090](http://localhost:9090) in your browser. You should see the endpoint of `builtin-prom-mgo-stats` service as one of the targets. + +
+Fig: Prometheus Target
+ +Check the labels marked with red rectangle. These labels confirm that the metrics are coming from `RabbitMQ` database `builtin-prom-mgo` through stats service `builtin-prom-mgo-stats`. + +Now, you can view the collected metrics and create a graph from homepage of this Prometheus dashboard. You can also use this Prometheus server as data source for [Grafana](https://grafana.com/) and create beautiful dashboard with collected metrics. + +## Cleaning up + +To cleanup the Kubernetes resources created by this tutorial, run following commands + +```bash +kubectl delete -n demo mg/builtin-prom-mgo + +kubectl delete -n monitoring deployment.apps/prometheus + +kubectl delete -n monitoring clusterrole.rbac.authorization.k8s.io/prometheus +kubectl delete -n monitoring serviceaccount/prometheus +kubectl delete -n monitoring clusterrolebinding.rbac.authorization.k8s.io/prometheus + +kubectl delete ns demo +kubectl delete ns monitoring +``` + +## Next Steps + +- Learn about [backup and restore](/docs/guides/RabbitMQ/backup/overview/index.md) RabbitMQ database using Stash. +- Learn how to configure [RabbitMQ Topology](/docs/guides/RabbitMQ/clustering/sharding.md). +- Monitor your RabbitMQ database with KubeDB using [`out-of-the-box` Prometheus operator](/docs/guides/RabbitMQ/monitoring/using-prometheus-operator.md). +- Use [private Docker registry](/docs/guides/RabbitMQ/private-registry/using-private-registry.md) to deploy RabbitMQ with KubeDB. +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md). diff --git a/docs/guides/rabbitmq/monitoring/using-prometheus-operator.md b/docs/guides/rabbitmq/monitoring/using-prometheus-operator.md new file mode 100644 index 0000000000..9cc03afc5e --- /dev/null +++ b/docs/guides/rabbitmq/monitoring/using-prometheus-operator.md @@ -0,0 +1,323 @@ +--- +title: Monitor RabbitMQ using Prometheus Operator +menu: + docs_{{ .version }}: + identifier: mg-using-prometheus-operator-monitoring + name: Prometheus Operator + parent: mg-monitoring-RabbitMQ + weight: 15 +menu_name: docs_{{ .version }} +section_menu_id: guides +--- + +> New to KubeDB? Please start [here](/docs/README.md). + +# Monitoring RabbitMQ Using Prometheus operator + +[Prometheus operator](https://github.com/prometheus-operator/prometheus-operator) provides simple and Kubernetes native way to deploy and configure Prometheus server. This tutorial will show you how to use Prometheus operator to monitor RabbitMQ database deployed with KubeDB. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +- To learn how Prometheus monitoring works with KubeDB in general, please visit [here](/docs/guides/RabbitMQ/monitoring/overview.md). + +- We need a [Prometheus operator](https://github.com/prometheus-operator/prometheus-operator) instance running. If you don't already have a running instance, you can deploy one using this helm chart [here](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack). + +- To keep Prometheus resources isolated, we are going to use a separate namespace called `monitoring` to deploy the prometheus operator helm chart. We are going to deploy database in `demo` namespace. 
+ + ```bash + $ kubectl create ns monitoring + namespace/monitoring created + + $ kubectl create ns demo + namespace/demo created + ``` + + + +> Note: YAML files used in this tutorial are stored in [docs/examples/RabbitMQ](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/RabbitMQ) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs). + +## Find out required labels for ServiceMonitor + +We need to know the labels used to select `ServiceMonitor` by a `Prometheus` crd. We are going to provide these labels in `spec.monitor.prometheus.serviceMonitor.labels` field of RabbitMQ crd so that KubeDB creates `ServiceMonitor` object accordingly. + +At first, let's find out the available Prometheus server in our cluster. + +```bash +$ kubectl get prometheus --all-namespaces +NAMESPACE NAME VERSION REPLICAS AGE +monitoring prometheus-kube-prometheus-prometheus v2.39.0 1 13d +``` + +> If you don't have any Prometheus server running in your cluster, deploy one following the guide specified in **Before You Begin** section. + +Now, let's view the YAML of the available Prometheus server `prometheus` in `monitoring` namespace. + +```yaml +$ kubectl get prometheus -n monitoring prometheus-kube-prometheus-prometheus -o yaml +apiVersion: monitoring.coreos.com/v1 +kind: Prometheus +metadata: + annotations: + meta.helm.sh/release-name: prometheus + meta.helm.sh/release-namespace: monitoring + creationTimestamp: "2022-10-11T07:12:20Z" + generation: 1 + labels: + app: kube-prometheus-stack-prometheus + app.kubernetes.io/instance: prometheus + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/part-of: kube-prometheus-stack + app.kubernetes.io/version: 40.5.0 + chart: kube-prometheus-stack-40.5.0 + heritage: Helm + release: prometheus + name: prometheus-kube-prometheus-prometheus + namespace: monitoring + resourceVersion: "490475" + uid: 7e36caf3-228a-40f3-bff9-a1c0c78dedb0 +spec: + alerting: + alertmanagers: + - apiVersion: v2 + name: prometheus-kube-prometheus-alertmanager + namespace: monitoring + pathPrefix: / + port: http-web + enableAdminAPI: false + evaluationInterval: 30s + externalUrl: http://prometheus-kube-prometheus-prometheus.monitoring:9090 + image: quay.io/prometheus/prometheus:v2.39.0 + listenLocal: false + logFormat: logfmt + logLevel: info + paused: false + podMonitorNamespaceSelector: {} + podMonitorSelector: + matchLabels: + release: prometheus + portName: http-web + probeNamespaceSelector: {} + probeSelector: + matchLabels: + release: prometheus + replicas: 1 + retention: 10d + routePrefix: / + ruleNamespaceSelector: {} + ruleSelector: + matchLabels: + release: prometheus + scrapeInterval: 30s + securityContext: + fsGroup: 2000 + runAsGroup: 2000 + runAsNonRoot: true + runAsUser: 1000 + serviceAccountName: prometheus-kube-prometheus-prometheus + serviceMonitorNamespaceSelector: {} + serviceMonitorSelector: + matchLabels: + release: prometheus + shards: 1 + version: v2.39.0 + walCompression: true +``` + +Notice the `spec.serviceMonitorSelector` section. Here, `release: prometheus` label is used to select `ServiceMonitor` crd. So, we are going to use this label in `spec.monitor.prometheus.serviceMonitor.labels` field of RabbitMQ crd. + +## Deploy RabbitMQ with Monitoring Enabled + +At first, let's deploy an RabbitMQ database with monitoring enabled. Below is the RabbitMQ object that we are going to create. 
+ +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: RabbitMQ +metadata: + name: coreos-prom-mgo + namespace: demo +spec: + version: "4.4.26" + terminationPolicy: WipeOut + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + monitor: + agent: prometheus.io/operator + prometheus: + serviceMonitor: + labels: + release: prometheus + interval: 10s +``` + +Here, + +- `monitor.agent: prometheus.io/operator` indicates that we are going to monitor this server using Prometheus operator. +- `monitor.prometheus.serviceMonitor.labels` specifies that KubeDB should create `ServiceMonitor` with these labels. +- `monitor.prometheus.interval` indicates that the Prometheus server should scrape metrics from this database with 10 seconds interval. + +Let's create the RabbitMQ object that we have shown above, + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/RabbitMQ/monitoring/coreos-prom-mgo.yaml +RabbitMQ.kubedb.com/coreos-prom-mgo created +``` + +Now, wait for the database to go into `Running` state. + +```bash +$ kubectl get mg -n demo coreos-prom-mgo +NAME VERSION STATUS AGE +coreos-prom-mgo 4.4.26 Ready 34s +``` + +KubeDB will create a separate stats service with name `{RabbitMQ crd name}-stats` for monitoring purpose. + +```bash +$ kubectl get svc -n demo --selector="app.kubernetes.io/instance=coreos-prom-mgo" +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +coreos-prom-mgo ClusterIP 10.96.150.171 27017/TCP 84s +coreos-prom-mgo-pods ClusterIP None 27017/TCP 84s +coreos-prom-mgo-stats ClusterIP 10.96.218.41 56790/TCP 64s +``` + +Here, `coreos-prom-mgo-stats` service has been created for monitoring purpose. + +Let's describe this stats service. + +```yaml +$ kubectl describe svc -n demo coreos-prom-mgo-stats +Name: coreos-prom-mgo-stats +Namespace: demo +Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=coreos-prom-mgo + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=RabbitMQs.kubedb.com + kubedb.com/role=stats +Annotations: monitoring.appscode.com/agent: prometheus.io/operator +Selector: app.kubernetes.io/instance=coreos-prom-mgo,app.kubernetes.io/managed-by=kubedb.com,app.kubernetes.io/name=RabbitMQs.kubedb.com +Type: ClusterIP +IP Family Policy: SingleStack +IP Families: IPv4 +IP: 10.96.240.52 +IPs: 10.96.240.52 +Port: metrics 56790/TCP +TargetPort: metrics/TCP +Endpoints: 10.244.0.149:56790 +Session Affinity: None +Events: + +``` + +Notice the `Labels` and `Port` fields. `ServiceMonitor` will use this information to target its endpoints. + +KubeDB will also create a `ServiceMonitor` crd in `demo` namespace that select the endpoints of `coreos-prom-mgo-stats` service. Verify that the `ServiceMonitor` crd has been created. + +```bash +$ kubectl get servicemonitor -n demo +NAME AGE +coreos-prom-mgo-stats 2m40s +``` + +Let's verify that the `ServiceMonitor` has the label that we had specified in `spec.monitor` section of RabbitMQ crd. 
+
+```yaml
+$ kubectl get servicemonitor -n demo coreos-prom-mgo-stats -o yaml
+apiVersion: monitoring.coreos.com/v1
+kind: ServiceMonitor
+metadata:
+  creationTimestamp: "2022-10-24T11:51:08Z"
+  generation: 1
+  labels:
+    app.kubernetes.io/component: database
+    app.kubernetes.io/instance: coreos-prom-mgo
+    app.kubernetes.io/managed-by: kubedb.com
+    app.kubernetes.io/name: RabbitMQs.kubedb.com
+    release: prometheus
+  name: coreos-prom-mgo-stats
+  namespace: demo
+  ownerReferences:
+  - apiVersion: v1
+    blockOwnerDeletion: true
+    controller: true
+    kind: Service
+    name: coreos-prom-mgo-stats
+    uid: 68b0e8c4-cba4-4dcb-9016-4e1901ca1fd0
+  resourceVersion: "528373"
+  uid: 56eb596b-d2cf-4d2c-a204-c43dbe8fe896
+spec:
+  endpoints:
+  - bearerTokenSecret:
+      key: ""
+    honorLabels: true
+    interval: 10s
+    path: /metrics
+    port: metrics
+  namespaceSelector:
+    matchNames:
+    - demo
+  selector:
+    matchLabels:
+      app.kubernetes.io/component: database
+      app.kubernetes.io/instance: coreos-prom-mgo
+      app.kubernetes.io/managed-by: kubedb.com
+      app.kubernetes.io/name: RabbitMQs.kubedb.com
+      kubedb.com/role: stats
+```
+
+Notice that the `ServiceMonitor` has the label `release: prometheus` that we had specified in the RabbitMQ crd.
+
+Also notice that the `ServiceMonitor` has a selector that matches the labels we have seen in the `coreos-prom-mgo-stats` service. It also targets the `metrics` port that we have seen in the stats service.
+
+## Verify Monitoring Metrics
+
+At first, let's find out the respective Prometheus pod for the `prometheus` Prometheus server.
+
+```bash
+$ kubectl get pod -n monitoring -l=app.kubernetes.io/name=prometheus
+NAME                                                  READY   STATUS    RESTARTS   AGE
+prometheus-prometheus-kube-prometheus-prometheus-0    2/2     Running   1          13d
+```
+
+The Prometheus server is listening to port `9090` of the `prometheus-prometheus-kube-prometheus-prometheus-0` pod. We are going to use [port forwarding](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) to access the Prometheus dashboard.
+
+Run the following command on a separate terminal to forward the port 9090 of the `prometheus-prometheus-kube-prometheus-prometheus-0` pod,
+
+```bash
+$ kubectl port-forward -n monitoring prometheus-prometheus-kube-prometheus-prometheus-0 9090
+Forwarding from 127.0.0.1:9090 -> 9090
+Forwarding from [::1]:9090 -> 9090
+```
+
+Now, we can access the dashboard at `localhost:9090`. Open [http://localhost:9090](http://localhost:9090) in your browser. You should see the `metrics` endpoint of the `coreos-prom-mgo-stats` service as one of the targets.
+Fig: Prometheus Target
+ +Check the `endpoint` and `service` labels marked by the red rectangles. It verifies that the target is our expected database. Now, you can view the collected metrics and create a graph from homepage of this Prometheus dashboard. You can also use this Prometheus server as data source for [Grafana](https://grafana.com/) and create a beautiful dashboard with collected metrics. + +## Cleaning up + +To cleanup the Kubernetes resources created by this tutorial, run following commands + +```bash +kubectl delete -n demo mg/coreos-prom-mgo +kubectl delete ns demo +``` + +## Next Steps + +- Monitor your RabbitMQ database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/guides/RabbitMQ/monitoring/using-builtin-prometheus.md). +- Detail concepts of [RabbitMQ object](/docs/guides/RabbitMQ/concepts/RabbitMQ.md). +- Detail concepts of [RabbitMQVersion object](/docs/guides/RabbitMQ/concepts/catalog.md). +- [Backup and Restore](/docs/guides/RabbitMQ/backup/overview/index.md) process of RabbitMQ databases using Stash. +- Initialize [RabbitMQ with Script](/docs/guides/RabbitMQ/initialization/using-script.md). +- Use [private Docker registry](/docs/guides/RabbitMQ/private-registry/using-private-registry.md) to deploy RabbitMQ with KubeDB. +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md). diff --git a/docs/guides/rabbitmq/reconfigure-tls/_index.md b/docs/guides/rabbitmq/reconfigure-tls/_index.md new file mode 100644 index 0000000000..3e9d2e14a1 --- /dev/null +++ b/docs/guides/rabbitmq/reconfigure-tls/_index.md @@ -0,0 +1,10 @@ +--- +title: Reconfigure RabbitMQ TLS/SSL +menu: + docs_{{ .version }}: + identifier: mg-reconfigure-tls + name: Reconfigure TLS/SSL + parent: mg-RabbitMQ-guides + weight: 46 +menu_name: docs_{{ .version }} +--- diff --git a/docs/guides/rabbitmq/reconfigure-tls/overview.md b/docs/guides/rabbitmq/reconfigure-tls/overview.md new file mode 100644 index 0000000000..d9736e40da --- /dev/null +++ b/docs/guides/rabbitmq/reconfigure-tls/overview.md @@ -0,0 +1,54 @@ +--- +title: Reconfiguring TLS of RabbitMQ Database +menu: + docs_{{ .version }}: + identifier: mg-reconfigure-tls-overview + name: Overview + parent: mg-reconfigure-tls + weight: 10 +menu_name: docs_{{ .version }} +section_menu_id: guides +--- + +> New to KubeDB? Please start [here](/docs/README.md). + +# Reconfiguring TLS of RabbitMQ Database + +This guide will give an overview on how KubeDB Ops-manager operator reconfigures TLS configuration i.e. add TLS, remove TLS, update issuer/cluster issuer or Certificates and rotate the certificates of a `RabbitMQ` database. + +## Before You Begin + +- You should be familiar with the following `KubeDB` concepts: + - [RabbitMQ](/docs/guides/RabbitMQ/concepts/RabbitMQ.md) + - [RabbitMQOpsRequest](/docs/guides/RabbitMQ/concepts/opsrequest.md) + +## How Reconfiguring RabbitMQ TLS Configuration Process Works + +The following diagram shows how KubeDB Ops-manager operator reconfigures TLS of a `RabbitMQ` database. Open the image in a new tab to see the enlarged version. + +
+Fig: Reconfiguring TLS process of RabbitMQ
+ +The Reconfiguring RabbitMQ TLS process consists of the following steps: + +1. At first, a user creates a `RabbitMQ` Custom Resource Object (CRO). + +2. `KubeDB` Provisioner operator watches the `RabbitMQ` CRO. + +3. When the operator finds a `RabbitMQ` CR, it creates required number of `StatefulSets` and related necessary stuff like secrets, services, etc. + +4. Then, in order to reconfigure the TLS configuration of the `RabbitMQ` database the user creates a `RabbitMQOpsRequest` CR with desired information. + +5. `KubeDB` Ops-manager operator watches the `RabbitMQOpsRequest` CR. + +6. When it finds a `RabbitMQOpsRequest` CR, it pauses the `RabbitMQ` object which is referred from the `RabbitMQOpsRequest`. So, the `KubeDB` Provisioner operator doesn't perform any operations on the `RabbitMQ` object during the reconfiguring TLS process. + +7. Then the `KubeDB` Ops-manager operator will add, remove, update or rotate TLS configuration based on the Ops Request yaml. + +8. Then the `KubeDB` Ops-manager operator will restart all the Pods of the database so that they restart with the new TLS configuration defined in the `RabbitMQOpsRequest` CR. + +9. After the successful reconfiguring of the `RabbitMQ` TLS, the `KubeDB` Ops-manager operator resumes the `RabbitMQ` object so that the `KubeDB` Provisioner operator resumes its usual operations. + +In the next docs, we are going to show a step by step guide on reconfiguring TLS configuration of a RabbitMQ database using `RabbitMQOpsRequest` CRD. \ No newline at end of file diff --git a/docs/guides/rabbitmq/reconfigure-tls/reconfigure-tls.md b/docs/guides/rabbitmq/reconfigure-tls/reconfigure-tls.md new file mode 100644 index 0000000000..8bea83e026 --- /dev/null +++ b/docs/guides/rabbitmq/reconfigure-tls/reconfigure-tls.md @@ -0,0 +1,1006 @@ +--- +title: Reconfigure RabbitMQ TLS/SSL Encryption +menu: + docs_{{ .version }}: + identifier: mg-reconfigure-tls-rs + name: Reconfigure RabbitMQ TLS/SSL Encryption + parent: mg-reconfigure-tls + weight: 10 +menu_name: docs_{{ .version }} +section_menu_id: guides +--- + +> New to KubeDB? Please start [here](/docs/README.md). + +# Reconfigure RabbitMQ TLS/SSL (Transport Encryption) + +KubeDB supports reconfigure i.e. add, remove, update and rotation of TLS/SSL certificates for existing RabbitMQ database via a RabbitMQOpsRequest. This tutorial will show you how to use KubeDB to reconfigure TLS/SSL encryption. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +- Install [`cert-manger`](https://cert-manager.io/docs/installation/) v1.0.0 or later to your cluster to manage your SSL/TLS certificates. + +- Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/setup/README.md). + +- To keep things isolated, this tutorial uses a separate namespace called `demo` throughout this tutorial. + + ```bash + $ kubectl create ns demo + namespace/demo created + ``` + +> Note: YAML files used in this tutorial are stored in [docs/examples/RabbitMQ](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/RabbitMQ) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs). 
+ +## Add TLS to a RabbitMQ database + +Here, We are going to create a RabbitMQ database without TLS and then reconfigure the database to use TLS. + +### Deploy RabbitMQ without TLS + +In this section, we are going to deploy a RabbitMQ Replicaset database without TLS. In the next few sections we will reconfigure TLS using `RabbitMQOpsRequest` CRD. Below is the YAML of the `RabbitMQ` CR that we are going to create, + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: RabbitMQ +metadata: + name: mg-rs + namespace: demo +spec: + version: "4.4.26" + replicas: 3 + replicaSet: + name: rs0 + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi +``` + +Let's create the `RabbitMQ` CR we have shown above, + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/RabbitMQ/reconfigure-tls/mg-replicaset.yaml +RabbitMQ.kubedb.com/mg-rs created +``` + +Now, wait until `mg-replicaset` has status `Ready`. i.e, + +```bash +$ kubectl get mg -n demo +NAME VERSION STATUS AGE +mg-rs 4.4.26 Ready 10m + +$ kubectl dba describe RabbitMQ mg-rs -n demo +Name: mg-rs +Namespace: demo +CreationTimestamp: Thu, 11 Mar 2021 13:25:05 +0600 +Labels: +Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"kubedb.com/v1alpha2","kind":"RabbitMQ","metadata":{"annotations":{},"name":"mg-rs","namespace":"demo"},"spec":{"replicaSet":{"name":"rs0"... +Replicas: 3 total +Status: Ready +StorageType: Durable +Volume: + StorageClass: standard + Capacity: 1Gi + Access Modes: RWO +Paused: false +Halted: false +Termination Policy: Delete + +StatefulSet: + Name: mg-rs + CreationTimestamp: Thu, 11 Mar 2021 13:25:05 +0600 + Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=mg-rs + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=RabbitMQs.kubedb.com + Annotations: + Replicas: 824639275080 desired | 3 total + Pods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed + +Service: + Name: mg-rs + Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=mg-rs + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=RabbitMQs.kubedb.com + Annotations: + Type: ClusterIP + IP: 10.96.70.27 + Port: primary 27017/TCP + TargetPort: db/TCP + Endpoints: 10.244.0.63:27017 + +Service: + Name: mg-rs-pods + Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=mg-rs + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=RabbitMQs.kubedb.com + Annotations: + Type: ClusterIP + IP: None + Port: db 27017/TCP + TargetPort: db/TCP + Endpoints: 10.244.0.63:27017,10.244.0.65:27017,10.244.0.67:27017 + +Auth Secret: + Name: mg-rs-auth + Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=mg-rs + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=RabbitMQs.kubedb.com + Annotations: + Type: Opaque + Data: + password: 16 bytes + username: 4 bytes + +AppBinding: + Metadata: + Annotations: + kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"kubedb.com/v1alpha2","kind":"RabbitMQ","metadata":{"annotations":{},"name":"mg-rs","namespace":"demo"},"spec":{"replicaSet":{"name":"rs0"},"replicas":3,"storage":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"standard"},"version":"4.4.26"}} + + Creation Timestamp: 2021-03-11T07:26:44Z + Labels: + app.kubernetes.io/component: database + app.kubernetes.io/instance: mg-rs + app.kubernetes.io/managed-by: kubedb.com 
+ app.kubernetes.io/name: RabbitMQs.kubedb.com + Name: mg-rs + Namespace: demo + Spec: + Client Config: + Service: + Name: mg-rs + Port: 27017 + Scheme: RabbitMQ + Parameters: + API Version: config.kubedb.com/v1alpha1 + Kind: MongoConfiguration + Replica Sets: + host-0: rs0/mg-rs-0.mg-rs-pods.demo.svc,mg-rs-1.mg-rs-pods.demo.svc,mg-rs-2.mg-rs-pods.demo.svc + Stash: + Addon: + Backup Task: + Name: RabbitMQ-backup-4.4.6-v6 + Restore Task: + Name: RabbitMQ-restore-4.4.6-v6 + Secret: + Name: mg-rs-auth + Type: kubedb.com/RabbitMQ + Version: 4.4.26 + +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Successful 14m RabbitMQ operator Successfully created stats service + Normal Successful 14m RabbitMQ operator Successfully created Service + Normal Successful 14m RabbitMQ operator Successfully stats service + Normal Successful 14m RabbitMQ operator Successfully stats service + Normal Successful 13m RabbitMQ operator Successfully stats service + Normal Successful 13m RabbitMQ operator Successfully stats service + Normal Successful 13m RabbitMQ operator Successfully stats service + Normal Successful 13m RabbitMQ operator Successfully stats service + Normal Successful 13m RabbitMQ operator Successfully stats service + Normal Successful 12m RabbitMQ operator Successfully stats service + Normal Successful 12m RabbitMQ operator Successfully patched StatefulSet demo/mg-rs +``` + +Now, we can connect to this database through [mongo-shell](https://docs.RabbitMQ.com/v4.2/mongo/) and verify that the TLS is disabled. + + +```bash +$ kubectl get secrets -n demo mg-rs-auth -o jsonpath='{.data.\username}' | base64 -d +root + +$ kubectl get secrets -n demo mg-rs-auth -o jsonpath='{.data.\password}' | base64 -d +U6(h_pYrekLZ2OOd + +$ kubectl exec -it mg-rs-0 -n demo -- mongo admin -u root -p 'U6(h_pYrekLZ2OOd' +rs0:PRIMARY> db.adminCommand({ getParameter:1, sslMode:1 }) +{ + "sslMode" : "disabled", + "ok" : 1, + "$clusterTime" : { + "clusterTime" : Timestamp(1615468344, 1), + "signature" : { + "hash" : BinData(0,"Xdclj9Y67WKZ/oTDGT/E1XzOY28="), + "keyId" : NumberLong("6938294279689207810") + } + }, + "operationTime" : Timestamp(1615468344, 1) +} +``` + +We can verify from the above output that TLS is disabled for this database. + +### Create Issuer/ ClusterIssuer + +Now, We are going to create an example `Issuer` that will be used to enable SSL/TLS in RabbitMQ. Alternatively, you can follow this [cert-manager tutorial](https://cert-manager.io/docs/configuration/ca/) to create your own `Issuer`. + +- Start off by generating a ca certificates using openssl. + +```bash +$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ./ca.key -out ./ca.crt -subj "/CN=ca/O=kubedb" +Generating a RSA private key +................+++++ +........................+++++ +writing new private key to './ca.key' +----- +``` + +- Now we are going to create a ca-secret using the certificate files that we have just generated. + +```bash +$ kubectl create secret tls mongo-ca \ + --cert=ca.crt \ + --key=ca.key \ + --namespace=demo +secret/mongo-ca created +``` + +Now, Let's create an `Issuer` using the `mongo-ca` secret that we have just created. 
The `YAML` file looks like this: + +```yaml +apiVersion: cert-manager.io/v1 +kind: Issuer +metadata: + name: mg-issuer + namespace: demo +spec: + ca: + secretName: mongo-ca +``` + +Let's apply the `YAML` file: + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/RabbitMQ/reconfigure-tls/issuer.yaml +issuer.cert-manager.io/mg-issuer created +``` + +### Create RabbitMQOpsRequest + +In order to add TLS to the database, we have to create a `RabbitMQOpsRequest` CRO with our created issuer. Below is the YAML of the `RabbitMQOpsRequest` CRO that we are going to create, + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: RabbitMQOpsRequest +metadata: + name: mops-add-tls + namespace: demo +spec: + type: ReconfigureTLS + databaseRef: + name: mg-rs + tls: + issuerRef: + name: mg-issuer + kind: Issuer + apiGroup: "cert-manager.io" + certificates: + - alias: client + subject: + organizations: + - mongo + organizationalUnits: + - client + readinessCriteria: + oplogMaxLagSeconds: 20 + objectsCountDiffPercentage: 10 + timeout: 5m + apply: IfReady +``` + +Here, + +- `spec.databaseRef.name` specifies that we are performing reconfigure TLS operation on `mg-rs` database. +- `spec.type` specifies that we are performing `ReconfigureTLS` on our database. +- `spec.tls.issuerRef` specifies the issuer name, kind and api group. +- `spec.tls.certificates` specifies the certificates. You can learn more about this field from [here](/docs/guides/RabbitMQ/concepts/RabbitMQ.md#spectls). + +Let's create the `RabbitMQOpsRequest` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/RabbitMQ/reconfigure-tls/mops-add-tls.yaml +RabbitMQopsrequest.ops.kubedb.com/mops-add-tls created +``` + +#### Verify TLS Enabled Successfully + +Let's wait for `RabbitMQOpsRequest` to be `Successful`. Run the following command to watch `RabbitMQOpsRequest` CRO, + +```bash +$ kubectl get RabbitMQopsrequest -n demo +Every 2.0s: kubectl get RabbitMQopsrequest -n demo +NAME TYPE STATUS AGE +mops-add-tls ReconfigureTLS Successful 91s +``` + +We can see from the above output that the `RabbitMQOpsRequest` has succeeded. If we describe the `RabbitMQOpsRequest` we will get an overview of the steps that were followed. 
+ +```bash +$ kubectl describe RabbitMQopsrequest -n demo mops-add-tls +Name: mops-add-tls +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: RabbitMQOpsRequest +Metadata: + Creation Timestamp: 2021-03-11T13:32:18Z + Generation: 1 + Managed Fields: + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:annotations: + .: + f:kubectl.kubernetes.io/last-applied-configuration: + f:spec: + .: + f:databaseRef: + .: + f:name: + f:tls: + .: + f:certificates: + f:issuerRef: + .: + f:apiGroup: + f:kind: + f:name: + f:type: + Manager: kubectl-client-side-apply + Operation: Update + Time: 2021-03-11T13:32:18Z + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:status: + .: + f:conditions: + f:observedGeneration: + f:phase: + Manager: kubedb-enterprise + Operation: Update + Time: 2021-03-11T13:32:19Z + Resource Version: 488264 + Self Link: /apis/ops.kubedb.com/v1alpha1/namespaces/demo/RabbitMQopsrequests/mops-add-tls + UID: 0024ec16-0d43-4686-a2d7-1cdeb96e41a5 +Spec: + Database Ref: + Name: mg-rs + Tls: + Certificates: + Alias: client + Subject: + Organizational Units: + client + Organizations: + mongo + Issuer Ref: + API Group: cert-manager.io + Kind: Issuer + Name: mg-issuer + Type: ReconfigureTLS +Status: + Conditions: + Last Transition Time: 2021-03-11T13:32:19Z + Message: RabbitMQ ops request is reconfiguring TLS + Observed Generation: 1 + Reason: ReconfigureTLS + Status: True + Type: ReconfigureTLS + Last Transition Time: 2021-03-11T13:32:25Z + Message: Successfully Updated StatefulSets + Observed Generation: 1 + Reason: TLSAdded + Status: True + Type: TLSAdded + Last Transition Time: 2021-03-11T13:34:25Z + Message: Successfully Restarted ReplicaSet nodes + Observed Generation: 1 + Reason: RestartReplicaSet + Status: True + Type: RestartReplicaSet + Last Transition Time: 2021-03-11T13:34:25Z + Message: Successfully Reconfigured TLS + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal PauseDatabase 2m10s KubeDB Ops-manager operator Pausing RabbitMQ demo/mg-rs + Normal PauseDatabase 2m10s KubeDB Ops-manager operator Successfully paused RabbitMQ demo/mg-rs + Normal TLSAdded 2m10s KubeDB Ops-manager operator Successfully Updated StatefulSets + Normal RestartReplicaSet 10s KubeDB Ops-manager operator Successfully Restarted ReplicaSet nodes + Normal ResumeDatabase 10s KubeDB Ops-manager operator Resuming RabbitMQ demo/mg-rs + Normal ResumeDatabase 10s KubeDB Ops-manager operator Successfully resumed RabbitMQ demo/mg-rs + Normal Successful 10s KubeDB Ops-manager operator Successfully Reconfigured TLS +``` + +Now, Let's exec into a database primary node and find out the username to connect in a mongo shell, + +```bash +$ kubectl exec -it mg-rs-2 -n demo bash +root@mgo-rs-tls-2:/$ ls /var/run/RabbitMQ/tls +ca.crt client.pem mongo.pem +root@mgo-rs-tls-2:/$ openssl x509 -in /var/run/RabbitMQ/tls/client.pem -inform PEM -subject -nameopt RFC2253 -noout +subject=CN=root,OU=client,O=mongo +``` + +Now, we can connect using `CN=root,OU=client,O=mongo` as root to connect to the mongo shell of the master pod, + +```bash +root@mgo-rs-tls-2:/$ mongo --tls --tlsCAFile /var/run/RabbitMQ/tls/ca.crt --tlsCertificateKeyFile /var/run/RabbitMQ/tls/client.pem admin --host localhost --authenticationMechanism RabbitMQ-X509 --authenticationDatabase='$external' -u 
"CN=root,OU=client,O=mongo" --quiet +rs0:PRIMARY> +``` + +We are connected to the mongo shell. Let's run some command to verify the sslMode and the user, + +```bash +rs0:PRIMARY> db.adminCommand({ getParameter:1, sslMode:1 }) +{ + "sslMode" : "requireSSL", + "ok" : 1, + "$clusterTime" : { + "clusterTime" : Timestamp(1615472249, 1), + "signature" : { + "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), + "keyId" : NumberLong(0) + } + }, + "operationTime" : Timestamp(1615472249, 1) +} +``` + +We can see from the above output that, `sslMode` is set to `requireSSL`. So, database TLS is enabled successfully to this database. + +## Rotate Certificate + +Now we are going to rotate the certificate of this database. First let's check the current expiration date of the certificate. + +```bash +$ kubectl exec -it mg-rs-2 -n demo bash +root@mg-rs-2:/# openssl x509 -in /var/run/RabbitMQ/tls/client.pem -inform PEM -enddate -nameopt RFC2253 -noout +notAfter=Jun 9 13:32:20 2021 GMT +``` + +So, the certificate will expire on this time `Jun 9 13:32:20 2021 GMT`. + +### Create RabbitMQOpsRequest + +Now we are going to increase it using a RabbitMQOpsRequest. Below is the yaml of the ops request that we are going to create, + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: RabbitMQOpsRequest +metadata: + name: mops-rotate + namespace: demo +spec: + type: ReconfigureTLS + databaseRef: + name: mg-rs + tls: + rotateCertificates: true +``` + +Here, + +- `spec.databaseRef.name` specifies that we are performing reconfigure TLS operation on `mg-rs` database. +- `spec.type` specifies that we are performing `ReconfigureTLS` on our database. +- `spec.tls.rotateCertificates` specifies that we want to rotate the certificate of this database. + +Let's create the `RabbitMQOpsRequest` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/RabbitMQ/reconfigure-tls/mops-rotate.yaml +RabbitMQopsrequest.ops.kubedb.com/mops-rotate created +``` + +#### Verify Certificate Rotated Successfully + +Let's wait for `RabbitMQOpsRequest` to be `Successful`. Run the following command to watch `RabbitMQOpsRequest` CRO, + +```bash +$ kubectl get RabbitMQopsrequest -n demo +Every 2.0s: kubectl get RabbitMQopsrequest -n demo +NAME TYPE STATUS AGE +mops-rotate ReconfigureTLS Successful 112s +``` + +We can see from the above output that the `RabbitMQOpsRequest` has succeeded. If we describe the `RabbitMQOpsRequest` we will get an overview of the steps that were followed. 
+ +```bash +$ kubectl describe RabbitMQopsrequest -n demo mops-rotate +Name: mops-rotate +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: RabbitMQOpsRequest +Metadata: + Creation Timestamp: 2021-03-11T16:17:55Z + Generation: 1 + Managed Fields: + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:annotations: + .: + f:kubectl.kubernetes.io/last-applied-configuration: + f:spec: + .: + f:databaseRef: + .: + f:name: + f:tls: + .: + f:rotateCertificates: + f:type: + Manager: kubectl-client-side-apply + Operation: Update + Time: 2021-03-11T16:17:55Z + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:status: + .: + f:conditions: + f:observedGeneration: + f:phase: + Manager: kubedb-enterprise + Operation: Update + Time: 2021-03-11T16:17:55Z + Resource Version: 521643 + Self Link: /apis/ops.kubedb.com/v1alpha1/namespaces/demo/RabbitMQopsrequests/mops-rotate + UID: 6d96ead2-a868-47d8-85fb-77eecc9a96b4 +Spec: + Database Ref: + Name: mg-rs + Tls: + Rotate Certificates: true + Type: ReconfigureTLS +Status: + Conditions: + Last Transition Time: 2021-03-11T16:17:55Z + Message: RabbitMQ ops request is reconfiguring TLS + Observed Generation: 1 + Reason: ReconfigureTLS + Status: True + Type: ReconfigureTLS + Last Transition Time: 2021-03-11T16:17:55Z + Message: Successfully Added Issuing Condition in Certificates + Observed Generation: 1 + Reason: IssuingConditionUpdated + Status: True + Type: IssuingConditionUpdated + Last Transition Time: 2021-03-11T16:18:00Z + Message: Successfully Issued New Certificates + Observed Generation: 1 + Reason: CertificateIssuingSuccessful + Status: True + Type: CertificateIssuingSuccessful + Last Transition Time: 2021-03-11T16:19:45Z + Message: Successfully Restarted ReplicaSet nodes + Observed Generation: 1 + Reason: RestartReplicaSet + Status: True + Type: RestartReplicaSet + Last Transition Time: 2021-03-11T16:19:45Z + Message: Successfully Reconfigured TLS + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal CertificateIssuingSuccessful 2m10s KubeDB Ops-manager operator Successfully Issued New Certificates + Normal RestartReplicaSet 25s KubeDB Ops-manager operator Successfully Restarted ReplicaSet nodes + Normal Successful 25s KubeDB Ops-manager operator Successfully Reconfigured TLS +``` + +Now, let's check the expiration date of the certificate. + +```bash +$ kubectl exec -it mg-rs-2 -n demo bash +root@mg-rs-2:/# openssl x509 -in /var/run/RabbitMQ/tls/client.pem -inform PEM -enddate -nameopt RFC2253 -noout +notAfter=Jun 9 16:17:55 2021 GMT +``` + +As we can see from the above output, the certificate has been rotated successfully. + +## Change Issuer/ClusterIssuer + +Now, we are going to change the issuer of this database. + +- Let's create a new ca certificate and key using a different subject `CN=ca-update,O=kubedb-updated`. + +```bash +$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ./ca.key -out ./ca.crt -subj "/CN=ca-updated/O=kubedb-updated" +Generating a RSA private key +..............................................................+++++ +......................................................................................+++++ +writing new private key to './ca.key' +----- +``` + +- Now we are going to create a new ca-secret using the certificate files that we have just generated. 
+ +```bash +$ kubectl create secret tls mongo-new-ca \ + --cert=ca.crt \ + --key=ca.key \ + --namespace=demo +secret/mongo-new-ca created +``` + +Now, Let's create a new `Issuer` using the `mongo-new-ca` secret that we have just created. The `YAML` file looks like this: + +```yaml +apiVersion: cert-manager.io/v1 +kind: Issuer +metadata: + name: mg-new-issuer + namespace: demo +spec: + ca: + secretName: mongo-new-ca +``` + +Let's apply the `YAML` file: + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/RabbitMQ/reconfigure-tls/new-issuer.yaml +issuer.cert-manager.io/mg-new-issuer created +``` + +### Create RabbitMQOpsRequest + +In order to use the new issuer to issue new certificates, we have to create a `RabbitMQOpsRequest` CRO with the newly created issuer. Below is the YAML of the `RabbitMQOpsRequest` CRO that we are going to create, + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: RabbitMQOpsRequest +metadata: + name: mops-change-issuer + namespace: demo +spec: + type: ReconfigureTLS + databaseRef: + name: mg-rs + tls: + issuerRef: + name: mg-new-issuer + kind: Issuer + apiGroup: "cert-manager.io" +``` + +Here, + +- `spec.databaseRef.name` specifies that we are performing reconfigure TLS operation on `mg-rs` database. +- `spec.type` specifies that we are performing `ReconfigureTLS` on our database. +- `spec.tls.issuerRef` specifies the issuer name, kind and api group. + +Let's create the `RabbitMQOpsRequest` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/RabbitMQ/reconfigure-tls/mops-change-issuer.yaml +RabbitMQopsrequest.ops.kubedb.com/mops-change-issuer created +``` + +#### Verify Issuer is changed successfully + +Let's wait for `RabbitMQOpsRequest` to be `Successful`. Run the following command to watch `RabbitMQOpsRequest` CRO, + +```bash +$ kubectl get RabbitMQopsrequest -n demo +Every 2.0s: kubectl get RabbitMQopsrequest -n demo +NAME TYPE STATUS AGE +mops-change-issuer ReconfigureTLS Successful 105s +``` + +We can see from the above output that the `RabbitMQOpsRequest` has succeeded. If we describe the `RabbitMQOpsRequest` we will get an overview of the steps that were followed. 
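+
+It is also worth confirming that the new `Issuer` is ready to sign certificates (a sketch; the `READY` column comes from cert-manager's status conditions, and the output here is illustrative):
+
+```bash
+$ kubectl get issuer -n demo mg-new-issuer
+NAME            READY   AGE
+mg-new-issuer   True    1m
+```
+
+The full `describe` output of the ops request is given below: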
+ +```bash +$ kubectl describe RabbitMQopsrequest -n demo mops-change-issuer +Name: mops-change-issuer +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: RabbitMQOpsRequest +Metadata: + Creation Timestamp: 2021-03-11T16:27:47Z + Generation: 1 + Managed Fields: + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:annotations: + .: + f:kubectl.kubernetes.io/last-applied-configuration: + f:spec: + .: + f:databaseRef: + .: + f:name: + f:tls: + .: + f:issuerRef: + .: + f:apiGroup: + f:kind: + f:name: + f:type: + Manager: kubectl-client-side-apply + Operation: Update + Time: 2021-03-11T16:27:47Z + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:status: + .: + f:conditions: + f:observedGeneration: + f:phase: + Manager: kubedb-enterprise + Operation: Update + Time: 2021-03-11T16:27:47Z + Resource Version: 523903 + Self Link: /apis/ops.kubedb.com/v1alpha1/namespaces/demo/RabbitMQopsrequests/mops-change-issuer + UID: cdfe8a7d-52ef-466c-a5dd-97e74ad598ca +Spec: + Database Ref: + Name: mg-rs + Tls: + Issuer Ref: + API Group: cert-manager.io + Kind: Issuer + Name: mg-new-issuer + Type: ReconfigureTLS +Status: + Conditions: + Last Transition Time: 2021-03-11T16:27:47Z + Message: RabbitMQ ops request is reconfiguring TLS + Observed Generation: 1 + Reason: ReconfigureTLS + Status: True + Type: ReconfigureTLS + Last Transition Time: 2021-03-11T16:27:52Z + Message: Successfully Issued New Certificates + Observed Generation: 1 + Reason: CertificateIssuingSuccessful + Status: True + Type: CertificateIssuingSuccessful + Last Transition Time: 2021-03-11T16:29:37Z + Message: Successfully Restarted ReplicaSet nodes + Observed Generation: 1 + Reason: RestartReplicaSet + Status: True + Type: RestartReplicaSet + Last Transition Time: 2021-03-11T16:29:37Z + Message: Successfully Reconfigured TLS + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal CertificateIssuingSuccessful 2m27s KubeDB Ops-manager operator Successfully Issued New Certificates + Normal RestartReplicaSet 42s KubeDB Ops-manager operator Successfully Restarted ReplicaSet nodes + Normal Successful 42s KubeDB Ops-manager operator Successfully Reconfigured TLS +``` + +Now, Let's exec into a database node and find out the ca subject to see if it matches the one we have provided. + +```bash +$ kubectl exec -it mg-rs-2 -n demo bash +root@mgo-rs-tls-2:/$ openssl x509 -in /var/run/RabbitMQ/tls/ca.crt -inform PEM -subject -nameopt RFC2253 -noout +subject=O=kubedb-updated,CN=ca-updated +``` + +We can see from the above output that, the subject name matches the subject name of the new ca certificate that we have created. So, the issuer is changed successfully. + +## Remove TLS from the Database + +Now, we are going to remove TLS from this database using a RabbitMQOpsRequest. + +### Create RabbitMQOpsRequest + +Below is the YAML of the `RabbitMQOpsRequest` CRO that we are going to create, + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: RabbitMQOpsRequest +metadata: + name: mops-remove + namespace: demo +spec: + type: ReconfigureTLS + databaseRef: + name: mg-rs + tls: + remove: true +``` + +Here, + +- `spec.databaseRef.name` specifies that we are performing reconfigure TLS operation on `mg-rs` database. +- `spec.type` specifies that we are performing `ReconfigureTLS` on our database. 
+- `spec.tls.remove` specifies that we want to remove tls from this database. + +Let's create the `RabbitMQOpsRequest` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/RabbitMQ/reconfigure-tls/mops-remove.yaml +RabbitMQopsrequest.ops.kubedb.com/mops-remove created +``` + +#### Verify TLS Removed Successfully + +Let's wait for `RabbitMQOpsRequest` to be `Successful`. Run the following command to watch `RabbitMQOpsRequest` CRO, + +```bash +$ kubectl get RabbitMQopsrequest -n demo +Every 2.0s: kubectl get RabbitMQopsrequest -n demo +NAME TYPE STATUS AGE +mops-remove ReconfigureTLS Successful 105s +``` + +We can see from the above output that the `RabbitMQOpsRequest` has succeeded. If we describe the `RabbitMQOpsRequest` we will get an overview of the steps that were followed. + +```bash +$ kubectl describe RabbitMQopsrequest -n demo mops-remove +Name: mops-remove +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: RabbitMQOpsRequest +Metadata: + Creation Timestamp: 2021-03-11T16:35:32Z + Generation: 1 + Managed Fields: + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:annotations: + .: + f:kubectl.kubernetes.io/last-applied-configuration: + f:spec: + .: + f:databaseRef: + .: + f:name: + f:tls: + .: + f:remove: + f:type: + Manager: kubectl-client-side-apply + Operation: Update + Time: 2021-03-11T16:35:32Z + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:status: + .: + f:conditions: + f:observedGeneration: + f:phase: + Manager: kubedb-enterprise + Operation: Update + Time: 2021-03-11T16:35:32Z + Resource Version: 525550 + Self Link: /apis/ops.kubedb.com/v1alpha1/namespaces/demo/RabbitMQopsrequests/mops-remove + UID: 99184cc4-1595-4f0f-b8eb-b65c5d0e86a6 +Spec: + Database Ref: + Name: mg-rs + Tls: + Remove: true + Type: ReconfigureTLS +Status: + Conditions: + Last Transition Time: 2021-03-11T16:35:32Z + Message: RabbitMQ ops request is reconfiguring TLS + Observed Generation: 1 + Reason: ReconfigureTLS + Status: True + Type: ReconfigureTLS + Last Transition Time: 2021-03-11T16:35:37Z + Message: Successfully Updated StatefulSets + Observed Generation: 1 + Reason: TLSRemoved + Status: True + Type: TLSRemoved + Last Transition Time: 2021-03-11T16:37:07Z + Message: Successfully Restarted ReplicaSet nodes + Observed Generation: 1 + Reason: RestartReplicaSet + Status: True + Type: RestartReplicaSet + Last Transition Time: 2021-03-11T16:37:07Z + Message: Successfully Reconfigured TLS + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal PauseDatabase 2m5s KubeDB Ops-manager operator Pausing RabbitMQ demo/mg-rs + Normal PauseDatabase 2m5s KubeDB Ops-manager operator Successfully paused RabbitMQ demo/mg-rs + Normal TLSRemoved 2m5s KubeDB Ops-manager operator Successfully Updated StatefulSets + Normal RestartReplicaSet 35s KubeDB Ops-manager operator Successfully Restarted ReplicaSet nodes + Normal ResumeDatabase 35s KubeDB Ops-manager operator Resuming RabbitMQ demo/mg-rs + Normal ResumeDatabase 35s KubeDB Ops-manager operator Successfully resumed RabbitMQ demo/mg-rs + Normal Successful 35s KubeDB Ops-manager operator Successfully Reconfigured TLS +``` + +Now, Let's exec into the database primary node and find out that TLS is disabled or not. 
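+
+As a quick external check first, the StatefulSet arguments should no longer force TLS once the operator has patched it. A hedged sketch (the container index and argument layout may differ in your setup):
+
+```bash
+# Inspect the database container arguments; --tlsMode should read "disabled"
+$ kubectl get sts -n demo mg-rs -o jsonpath='{.spec.template.spec.containers[0].args}'
+```
+
+Now, connecting from the primary without any TLS flags: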
+
+```bash
+$ kubectl exec -it -n demo mg-rs-1 -- mongo admin -u root -p 'U6(h_pYrekLZ2OOd'
+rs0:PRIMARY> db.adminCommand({ getParameter:1, sslMode:1 })
+{
+    "sslMode" : "disabled",
+    "ok" : 1,
+    "$clusterTime" : {
+        "clusterTime" : Timestamp(1615480817, 1),
+        "signature" : {
+            "hash" : BinData(0,"CWJngDTQqDhKXyx7WMFJqqUfvhY="),
+            "keyId" : NumberLong("6938294279689207810")
+        }
+    },
+    "operationTime" : Timestamp(1615480817, 1)
+}
+```
+
+So, we can see from the above output that TLS has been disabled successfully for this database.
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete RabbitMQ -n demo mg-rs
+kubectl delete issuer -n demo mg-issuer mg-new-issuer
+kubectl delete RabbitMQopsrequest -n demo mops-add-tls mops-remove mops-rotate mops-change-issuer
+kubectl delete ns demo
+```
+
+## Next Steps
+
+- Detail concepts of [RabbitMQ object](/docs/guides/RabbitMQ/concepts/RabbitMQ.md).
+- Initialize [RabbitMQ with Script](/docs/guides/RabbitMQ/initialization/using-script.md).
+- Monitor your RabbitMQ database with KubeDB using [out-of-the-box Prometheus operator](/docs/guides/RabbitMQ/monitoring/using-prometheus-operator.md).
+- Monitor your RabbitMQ database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/guides/RabbitMQ/monitoring/using-builtin-prometheus.md).
+- Use [private Docker registry](/docs/guides/RabbitMQ/private-registry/using-private-registry.md) to deploy RabbitMQ with KubeDB.
+- Use [kubedb cli](/docs/guides/RabbitMQ/cli/cli.md) to manage databases like kubectl for Kubernetes.
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).
diff --git a/docs/guides/rabbitmq/reconfigure/_index.md b/docs/guides/rabbitmq/reconfigure/_index.md
new file mode 100644
index 0000000000..f37e874950
--- /dev/null
+++ b/docs/guides/rabbitmq/reconfigure/_index.md
@@ -0,0 +1,10 @@
+---
+title: Reconfigure
+menu:
+  docs_{{ .version }}:
+    identifier: mg-reconfigure
+    name: Reconfigure
+    parent: mg-RabbitMQ-guides
+    weight: 46
+menu_name: docs_{{ .version }}
+---
\ No newline at end of file
diff --git a/docs/guides/rabbitmq/reconfigure/overview.md b/docs/guides/rabbitmq/reconfigure/overview.md
new file mode 100644
index 0000000000..5d1062a4b7
--- /dev/null
+++ b/docs/guides/rabbitmq/reconfigure/overview.md
@@ -0,0 +1,54 @@
+---
+title: Reconfiguring RabbitMQ
+menu:
+  docs_{{ .version }}:
+    identifier: mg-reconfigure-overview
+    name: Overview
+    parent: mg-reconfigure
+    weight: 10
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Reconfiguring RabbitMQ
+
+This guide will give an overview of how the KubeDB Ops-manager operator reconfigures `RabbitMQ` database components such as ReplicaSet, Shard, ConfigServer, Mongos, etc.
+
+## Before You Begin
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [RabbitMQ](/docs/guides/RabbitMQ/concepts/RabbitMQ.md)
+  - [RabbitMQOpsRequest](/docs/guides/RabbitMQ/concepts/opsrequest.md)
+
+## How Reconfiguring RabbitMQ Process Works
+
+The following diagram shows how KubeDB Ops-manager operator reconfigures `RabbitMQ` database components. Open the image in a new tab to see the enlarged version.
+  Reconfiguring process of RabbitMQ +
Fig: Reconfiguring process of RabbitMQ
+
+
+The Reconfiguring RabbitMQ process consists of the following steps:
+
+1. At first, a user creates a `RabbitMQ` Custom Resource (CR).
+
+2. `KubeDB` Provisioner operator watches the `RabbitMQ` CR.
+
+3. When the operator finds a `RabbitMQ` CR, it creates the required number of `StatefulSets` and related resources such as secrets, services, etc.
+
+4. Then, in order to reconfigure the various components (i.e. ReplicaSet, Shard, ConfigServer, Mongos, etc.) of the `RabbitMQ` database, the user creates a `RabbitMQOpsRequest` CR with the desired information.
+
+5. `KubeDB` Ops-manager operator watches the `RabbitMQOpsRequest` CR.
+
+6. When it finds a `RabbitMQOpsRequest` CR, it halts the `RabbitMQ` object which is referred from the `RabbitMQOpsRequest`. So, the `KubeDB` Provisioner operator doesn't perform any operations on the `RabbitMQ` object during the reconfiguring process.
+
+7. Then the `KubeDB` Ops-manager operator will replace the existing configuration with the new configuration provided, or merge the new configuration with the existing configuration, according to the `RabbitMQOpsRequest` CR.
+
+8. Then the `KubeDB` Ops-manager operator will restart the related StatefulSet Pods so that they restart with the new configuration defined in the `RabbitMQOpsRequest` CR.
+
+9. After the successful reconfiguring of the `RabbitMQ` components, the `KubeDB` Ops-manager operator resumes the `RabbitMQ` object so that the `KubeDB` Provisioner operator resumes its usual operations.
+
+In the next docs, we are going to show a step-by-step guide on reconfiguring RabbitMQ database components using `RabbitMQOpsRequest` CRD.
\ No newline at end of file
diff --git a/docs/guides/rabbitmq/reconfigure/replicaset.md b/docs/guides/rabbitmq/reconfigure/replicaset.md
new file mode 100644
index 0000000000..00caf9a6e2
--- /dev/null
+++ b/docs/guides/rabbitmq/reconfigure/replicaset.md
@@ -0,0 +1,645 @@
+---
+title: Reconfigure RabbitMQ Replicaset
+menu:
+  docs_{{ .version }}:
+    identifier: mg-reconfigure-replicaset
+    name: Replicaset
+    parent: mg-reconfigure
+    weight: 30
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Reconfigure RabbitMQ Replicaset Database
+
+This guide will show you how to use `KubeDB` Ops-manager operator to reconfigure a RabbitMQ Replicaset.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster.
+
+- Install `KubeDB` Provisioner and Ops-manager operator in your cluster following the steps [here](/docs/setup/README.md).
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [RabbitMQ](/docs/guides/RabbitMQ/concepts/RabbitMQ.md)
+  - [ReplicaSet](/docs/guides/RabbitMQ/clustering/replicaset.md)
+  - [RabbitMQOpsRequest](/docs/guides/RabbitMQ/concepts/opsrequest.md)
+  - [Reconfigure Overview](/docs/guides/RabbitMQ/reconfigure/overview.md)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+> **Note:** YAML files used in this tutorial are stored in [docs/examples/RabbitMQ](/docs/examples/RabbitMQ) directory of [kubedb/docs](https://github.com/kubedb/docs) repository.
+
+Now, we are going to deploy a `RabbitMQ` Replicaset using a version supported by the `KubeDB` operator. Then we are going to apply `RabbitMQOpsRequest` to reconfigure its configuration.
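+
+Before moving on, you can quickly confirm that the KubeDB CRDs (and hence both operators) are registered in the cluster. A minimal sketch; the exact CRD list depends on your KubeDB version:
+
+```bash
+# The list should include the database and ops-request resources
+$ kubectl get crd | grep kubedb
+```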
+ +### Prepare RabbitMQ Replicaset + +Now, we are going to deploy a `RabbitMQ` Replicaset database with version `4.4.26`. + +### Deploy RabbitMQ + +At first, we will create `mongod.conf` file containing required configuration settings. + +```ini +$ cat mongod.conf +net: + maxIncomingConnections: 10000 +``` +Here, `maxIncomingConnections` is set to `10000`, whereas the default value is `65536`. + +Now, we will create a secret with this configuration file. + +```bash +$ kubectl create secret generic -n demo mg-custom-config --from-file=./mongod.conf +secret/mg-custom-config created +``` + +In this section, we are going to create a RabbitMQ object specifying `spec.configSecret` field to apply this custom configuration. Below is the YAML of the `RabbitMQ` CR that we are going to create, + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: RabbitMQ +metadata: + name: mg-replicaset + namespace: demo +spec: + version: "4.4.26" + replicas: 3 + replicaSet: + name: rs0 + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + configSecret: + name: mg-custom-config +``` + +Let's create the `RabbitMQ` CR we have shown above, + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/RabbitMQ/reconfigure/mg-replicaset-config.yaml +RabbitMQ.kubedb.com/mg-replicaset created +``` + +Now, wait until `mg-replicaset` has status `Ready`. i.e, + +```bash +$ kubectl get mg -n demo +NAME VERSION STATUS AGE +mg-replicaset 4.4.26 Ready 19m +``` + +Now, we will check if the database has started with the custom configuration we have provided. + +First we need to get the username and password to connect to a RabbitMQ instance, +```bash +$ kubectl get secrets -n demo mg-replicaset-auth -o jsonpath='{.data.\username}' | base64 -d +root + +$ kubectl get secrets -n demo mg-replicaset-auth -o jsonpath='{.data.\password}' | base64 -d +nrKuxni0wDSMrgwy +``` + +Now let's connect to a RabbitMQ instance and run a RabbitMQ internal command to check the configuration we have provided. + +```bash +$ kubectl exec -n demo mg-replicaset-0 -- mongo admin -u root -p nrKuxni0wDSMrgwy --eval "db._adminCommand( {getCmdLineOpts: 1})" --quiet +{ + "argv" : [ + "mongod", + "--dbpath=/data/db", + "--auth", + "--ipv6", + "--bind_ip_all", + "--port=27017", + "--tlsMode=disabled", + "--replSet=rs0", + "--keyFile=/data/configdb/key.txt", + "--clusterAuthMode=keyFile", + "--config=/data/configdb/mongod.conf" + ], + "parsed" : { + "config" : "/data/configdb/mongod.conf", + "net" : { + "bindIp" : "*", + "ipv6" : true, + "maxIncomingConnections" : 10000, + "port" : 27017, + "tls" : { + "mode" : "disabled" + } + }, + "replication" : { + "replSet" : "rs0" + }, + "security" : { + "authorization" : "enabled", + "clusterAuthMode" : "keyFile", + "keyFile" : "/data/configdb/key.txt" + }, + "storage" : { + "dbPath" : "/data/db" + } + }, + "ok" : 1, + "$clusterTime" : { + "clusterTime" : Timestamp(1614668500, 1), + "signature" : { + "hash" : BinData(0,"7sh886HhsNYajGxYGp5Jxi52IzA="), + "keyId" : NumberLong("6934943333319966722") + } + }, + "operationTime" : Timestamp(1614668500, 1) +} +``` + +As we can see from the configuration of ready RabbitMQ, the value of `maxIncomingConnections` has been set to `10000`. + +### Reconfigure using new config secret + +Now we will reconfigure this database to set `maxIncomingConnections` to `20000`. + +Now, we will edit the `mongod.conf` file containing required configuration settings. 
+
+```ini
+$ cat mongod.conf
+net:
+  maxIncomingConnections: 20000
+```
+
+Then, we will create a new secret with this configuration file.
+
+```bash
+$ kubectl create secret generic -n demo new-custom-config --from-file=./mongod.conf
+secret/new-custom-config created
+```
+
+#### Create RabbitMQOpsRequest
+
+Now, we will use this secret to replace the previous secret using a `RabbitMQOpsRequest` CR. The `RabbitMQOpsRequest` yaml is given below,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: RabbitMQOpsRequest
+metadata:
+  name: mops-reconfigure-replicaset
+  namespace: demo
+spec:
+  type: Reconfigure
+  databaseRef:
+    name: mg-replicaset
+  configuration:
+    replicaSet:
+      configSecret:
+        name: new-custom-config
+  readinessCriteria:
+    oplogMaxLagSeconds: 20
+    objectsCountDiffPercentage: 10
+  timeout: 5m
+  apply: IfReady
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are reconfiguring the `mg-replicaset` database.
+- `spec.type` specifies that we are performing `Reconfigure` on our database.
+- `spec.configuration.replicaSet.configSecret.name` specifies the name of the new secret.
+- `spec.configuration.arbiter.configSecret.name` could also be specified with a config secret.
+- Have a look [here](/docs/guides/RabbitMQ/concepts/opsrequest.md#specreadinesscriteria) on the respective sections to understand the `readinessCriteria`, `timeout` & `apply` fields.
+
+Let's create the `RabbitMQOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/RabbitMQ/reconfigure/mops-reconfigure-replicaset.yaml
+RabbitMQopsrequest.ops.kubedb.com/mops-reconfigure-replicaset created
+```
+
+#### Verify the new configuration is working
+
+If everything goes well, `KubeDB` Ops-manager operator will update the `configSecret` of the `RabbitMQ` object.
+
+Let's wait for `RabbitMQOpsRequest` to be `Successful`. Run the following command to watch `RabbitMQOpsRequest` CR,
+
+```bash
+$ watch kubectl get RabbitMQopsrequest -n demo
+Every 2.0s: kubectl get RabbitMQopsrequest -n demo
+NAME                          TYPE          STATUS       AGE
+mops-reconfigure-replicaset   Reconfigure   Successful   113s
+```
+
+We can see from the above output that the `RabbitMQOpsRequest` has succeeded. If we describe the `RabbitMQOpsRequest` we will get an overview of the steps that were followed to reconfigure the database.
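+
+Since the operator swaps the `configSecret` reference on the `RabbitMQ` object itself, you can confirm that it now points at the new secret (a small sketch using the `spec.configSecret` field from the CR; the output assumes the update has already been applied):
+
+```bash
+$ kubectl get mg -n demo mg-replicaset -o jsonpath='{.spec.configSecret.name}{"\n"}'
+new-custom-config
+```
+
+The full `describe` output is given below: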
+ +```bash +$ kubectl describe RabbitMQopsrequest -n demo mops-reconfigure-replicaset +Name: mops-reconfigure-replicaset +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: RabbitMQOpsRequest +Metadata: + Creation Timestamp: 2021-03-02T07:04:31Z + Generation: 1 + Managed Fields: + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:annotations: + .: + f:kubectl.kubernetes.io/last-applied-configuration: + f:spec: + .: + f:apply: + f:configuration: + .: + f:replicaSet: + .: + f:configSecret: + .: + f:name: + f:databaseRef: + .: + f:name: + f:readinessCriteria: + .: + f:objectsCountDiffPercentage: + f:oplogMaxLagSeconds: + f:timeout: + f:type: + Manager: kubectl-client-side-apply + Operation: Update + Time: 2021-03-02T07:04:31Z + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:spec: + f:configuration: + f:replicaSet: + f:podTemplate: + .: + f:controller: + f:metadata: + f:spec: + .: + f:resources: + f:status: + .: + f:conditions: + f:observedGeneration: + f:phase: + Manager: kubedb-enterprise + Operation: Update + Time: 2021-03-02T07:04:31Z + Resource Version: 29869 + Self Link: /apis/ops.kubedb.com/v1alpha1/namespaces/demo/RabbitMQopsrequests/mops-reconfigure-replicaset + UID: 064733d6-19db-4153-82f7-bc0580116ee6 +Spec: + Apply: IfReady + Configuration: + Replica Set: + Config Secret: + Name: new-custom-config + Database Ref: + Name: mg-replicaset + Readiness Criteria: + Objects Count Diff Percentage: 10 + Oplog Max Lag Seconds: 20 + Timeout: 5m + Type: Reconfigure +Status: + Conditions: + Last Transition Time: 2021-03-02T07:04:31Z + Message: RabbitMQ ops request is reconfiguring database + Observed Generation: 1 + Reason: Reconfigure + Status: True + Type: Reconfigure + Last Transition Time: 2021-03-02T07:06:21Z + Message: Successfully Reconfigured RabbitMQ + Observed Generation: 1 + Reason: ReconfigureReplicaset + Status: True + Type: ReconfigureReplicaset + Last Transition Time: 2021-03-02T07:06:21Z + Message: Successfully completed the modification process. + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal PauseDatabase 2m55s KubeDB Ops-manager operator Pausing RabbitMQ demo/mg-replicaset + Normal PauseDatabase 2m55s KubeDB Ops-manager operator Successfully paused RabbitMQ demo/mg-replicaset + Normal ReconfigureReplicaset 65s KubeDB Ops-manager operator Successfully Reconfigured RabbitMQ + Normal ResumeDatabase 65s KubeDB Ops-manager operator Resuming RabbitMQ demo/mg-replicaset + Normal ResumeDatabase 65s KubeDB Ops-manager operator Successfully resumed RabbitMQ demo/mg-replicaset + Normal Successful 65s KubeDB Ops-manager operator Successfully Reconfigured Database +``` + +Now let's connect to a RabbitMQ instance and run a RabbitMQ internal command to check the new configuration we have provided. 
+
+```bash
+$ kubectl exec -n demo mg-replicaset-0 -- mongo admin -u root -p nrKuxni0wDSMrgwy --eval "db._adminCommand( {getCmdLineOpts: 1})" --quiet
+{
+    "argv" : [
+        "mongod",
+        "--dbpath=/data/db",
+        "--auth",
+        "--ipv6",
+        "--bind_ip_all",
+        "--port=27017",
+        "--tlsMode=disabled",
+        "--replSet=rs0",
+        "--keyFile=/data/configdb/key.txt",
+        "--clusterAuthMode=keyFile",
+        "--config=/data/configdb/mongod.conf"
+    ],
+    "parsed" : {
+        "config" : "/data/configdb/mongod.conf",
+        "net" : {
+            "bindIp" : "*",
+            "ipv6" : true,
+            "maxIncomingConnections" : 20000,
+            "port" : 27017,
+            "tls" : {
+                "mode" : "disabled"
+            }
+        },
+        "replication" : {
+            "replSet" : "rs0"
+        },
+        "security" : {
+            "authorization" : "enabled",
+            "clusterAuthMode" : "keyFile",
+            "keyFile" : "/data/configdb/key.txt"
+        },
+        "storage" : {
+            "dbPath" : "/data/db"
+        }
+    },
+    "ok" : 1,
+    "$clusterTime" : {
+        "clusterTime" : Timestamp(1614668887, 1),
+        "signature" : {
+            "hash" : BinData(0,"5q35Y51+YpbVHFKoaU7lUWi38oY="),
+            "keyId" : NumberLong("6934943333319966722")
+        }
+    },
+    "operationTime" : Timestamp(1614668887, 1)
+}
+```
+
+As we can see from the configuration of ready RabbitMQ, the value of `maxIncomingConnections` has been changed from `10000` to `20000`. So the reconfiguration of the database is successful.
+
+
+### Reconfigure using apply config
+
+Now we will reconfigure this database again to set `maxIncomingConnections` to `30000`. This time we won't use a new secret. We will use the `applyConfig` field of the `RabbitMQOpsRequest`. This will merge the new config in the existing secret.
+
+#### Create RabbitMQOpsRequest
+
+Now, we will use the new configuration in the `applyConfig` field in the `RabbitMQOpsRequest` CR. The `RabbitMQOpsRequest` yaml is given below,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: RabbitMQOpsRequest
+metadata:
+  name: mops-reconfigure-apply-replicaset
+  namespace: demo
+spec:
+  type: Reconfigure
+  databaseRef:
+    name: mg-replicaset
+  configuration:
+    replicaSet:
+      applyConfig:
+        mongod.conf: |-
+          net:
+            maxIncomingConnections: 30000
+  readinessCriteria:
+    oplogMaxLagSeconds: 20
+    objectsCountDiffPercentage: 10
+  timeout: 5m
+  apply: IfReady
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are reconfiguring the `mg-replicaset` database.
+- `spec.type` specifies that we are performing `Reconfigure` on our database.
+- `spec.configuration.replicaSet.applyConfig` specifies the new configuration that will be merged in the existing secret.
+- `spec.configuration.arbiter.configSecret.name` could also be specified with a config secret.
+- Have a look [here](/docs/guides/RabbitMQ/concepts/opsrequest.md#specreadinesscriteria) on the respective sections to understand the `readinessCriteria`, `timeout` & `apply` fields.
+
+Let's create the `RabbitMQOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/RabbitMQ/reconfigure/mops-reconfigure-apply-replicaset.yaml
+RabbitMQopsrequest.ops.kubedb.com/mops-reconfigure-apply-replicaset created
+```
+
+#### Verify the new configuration is working
+
+If everything goes well, `KubeDB` Ops-manager operator will merge this new config with the existing configuration.
+
+Let's wait for `RabbitMQOpsRequest` to be `Successful`.
Run the following command to watch `RabbitMQOpsRequest` CR, + +```bash +$ watch kubectl get RabbitMQopsrequest -n demo +Every 2.0s: kubectl get RabbitMQopsrequest -n demo +NAME TYPE STATUS AGE +mops-reconfigure-apply-replicaset Reconfigure Successful 109s +``` + +We can see from the above output that the `RabbitMQOpsRequest` has succeeded. If we describe the `RabbitMQOpsRequest` we will get an overview of the steps that were followed to reconfigure the database. + +```bash +$ kubectl describe RabbitMQopsrequest -n demo mops-reconfigure-apply-replicaset +Name: mops-reconfigure-apply-replicaset +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: RabbitMQOpsRequest +Metadata: + Creation Timestamp: 2021-03-02T07:09:39Z + Generation: 1 + Managed Fields: + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:annotations: + .: + f:kubectl.kubernetes.io/last-applied-configuration: + f:spec: + .: + f:apply: + f:configuration: + .: + f:replicaSet: + .: + f:applyConfig: + f:databaseRef: + .: + f:name: + f:readinessCriteria: + .: + f:objectsCountDiffPercentage: + f:oplogMaxLagSeconds: + f:timeout: + f:type: + Manager: kubectl-client-side-apply + Operation: Update + Time: 2021-03-02T07:09:39Z + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:spec: + f:configuration: + f:replicaSet: + f:podTemplate: + .: + f:controller: + f:metadata: + f:spec: + .: + f:resources: + f:status: + .: + f:conditions: + f:observedGeneration: + f:phase: + Manager: kubedb-enterprise + Operation: Update + Time: 2021-03-02T07:09:39Z + Resource Version: 31005 + Self Link: /apis/ops.kubedb.com/v1alpha1/namespaces/demo/RabbitMQopsrequests/mops-reconfigure-apply-replicaset + UID: 0137442b-1b04-43ed-8de7-ecd913b44065 +Spec: + Apply: IfReady + Configuration: + Replica Set: + Apply Config: net: + maxIncomingConnections: 30000 + + Database Ref: + Name: mg-replicaset + Readiness Criteria: + Objects Count Diff Percentage: 10 + Oplog Max Lag Seconds: 20 + Timeout: 5m + Type: Reconfigure +Status: + Conditions: + Last Transition Time: 2021-03-02T07:09:39Z + Message: RabbitMQ ops request is reconfiguring database + Observed Generation: 1 + Reason: Reconfigure + Status: True + Type: Reconfigure + Last Transition Time: 2021-03-02T07:11:14Z + Message: Successfully Reconfigured RabbitMQ + Observed Generation: 1 + Reason: ReconfigureReplicaset + Status: True + Type: ReconfigureReplicaset + Last Transition Time: 2021-03-02T07:11:14Z + Message: Successfully completed the modification process. + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal PauseDatabase 9m20s KubeDB Ops-manager operator Pausing RabbitMQ demo/mg-replicaset + Normal PauseDatabase 9m20s KubeDB Ops-manager operator Successfully paused RabbitMQ demo/mg-replicaset + Normal ReconfigureReplicaset 7m45s KubeDB Ops-manager operator Successfully Reconfigured RabbitMQ + Normal ResumeDatabase 7m45s KubeDB Ops-manager operator Resuming RabbitMQ demo/mg-replicaset + Normal ResumeDatabase 7m45s KubeDB Ops-manager operator Successfully resumed RabbitMQ demo/mg-replicaset + Normal Successful 7m45s KubeDB Ops-manager operator Successfully Reconfigured Database +``` + +Now let's connect to a RabbitMQ instance and run a RabbitMQ internal command to check the new configuration we have provided. 
+ +```bash +$ kubectl exec -n demo mg-replicaset-0 -- mongo admin -u root -p nrKuxni0wDSMrgwy --eval "db._adminCommand( {getCmdLineOpts: 1})" --quiet +{ + "argv" : [ + "mongod", + "--dbpath=/data/db", + "--auth", + "--ipv6", + "--bind_ip_all", + "--port=27017", + "--tlsMode=disabled", + "--replSet=rs0", + "--keyFile=/data/configdb/key.txt", + "--clusterAuthMode=keyFile", + "--config=/data/configdb/mongod.conf" + ], + "parsed" : { + "config" : "/data/configdb/mongod.conf", + "net" : { + "bindIp" : "*", + "ipv6" : true, + "maxIncomingConnections" : 30000, + "port" : 27017, + "tls" : { + "mode" : "disabled" + } + }, + "replication" : { + "replSet" : "rs0" + }, + "security" : { + "authorization" : "enabled", + "clusterAuthMode" : "keyFile", + "keyFile" : "/data/configdb/key.txt" + }, + "storage" : { + "dbPath" : "/data/db" + } + }, + "ok" : 1, + "$clusterTime" : { + "clusterTime" : Timestamp(1614669580, 1), + "signature" : { + "hash" : BinData(0,"u/xTAa4aW/8bsRvBYPffwQCeTF0="), + "keyId" : NumberLong("6934943333319966722") + } + }, + "operationTime" : Timestamp(1614669580, 1) +} +``` + +As we can see from the configuration of ready RabbitMQ, the value of `maxIncomingConnections` has been changed from `20000` to `30000`. So the reconfiguration of the database using the `applyConfig` field is successful. + + +## Cleaning Up + +To clean up the Kubernetes resources created by this tutorial, run: + +```bash +kubectl delete mg -n demo mg-replicaset +kubectl delete RabbitMQopsrequest -n demo mops-reconfigure-replicaset mops-reconfigure-apply-replicaset +``` \ No newline at end of file diff --git a/docs/guides/rabbitmq/reconfigure/sharding.md b/docs/guides/rabbitmq/reconfigure/sharding.md new file mode 100644 index 0000000000..bf973afdec --- /dev/null +++ b/docs/guides/rabbitmq/reconfigure/sharding.md @@ -0,0 +1,571 @@ +--- +title: Reconfigure RabbitMQ Sharded Cluster +menu: + docs_{{ .version }}: + identifier: mg-reconfigure-shard + name: Sharding + parent: mg-reconfigure + weight: 40 +menu_name: docs_{{ .version }} +section_menu_id: guides +--- + +> New to KubeDB? Please start [here](/docs/README.md). + +# Reconfigure RabbitMQ Shard + +This guide will show you how to use `KubeDB` Ops-manager operator to reconfigure a RabbitMQ shard. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. + +- Install `KubeDB` Provisioner and Ops-manager operator in your cluster following the steps [here](/docs/setup/README.md). + +- You should be familiar with the following `KubeDB` concepts: + - [RabbitMQ](/docs/guides/RabbitMQ/concepts/RabbitMQ.md) + - [Sharding](/docs/guides/RabbitMQ/clustering/sharding.md) + - [RabbitMQOpsRequest](/docs/guides/RabbitMQ/concepts/opsrequest.md) + - [Reconfigure Overview](/docs/guides/RabbitMQ/reconfigure/overview.md) + +To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial. + +```bash +$ kubectl create ns demo +namespace/demo created +``` + +> **Note:** YAML files used in this tutorial are stored in [docs/examples/RabbitMQ](/docs/examples/RabbitMQ) directory of [kubedb/docs](https://github.com/kubedb/docs) repository. + +Now, we are going to deploy a `RabbitMQ` sharded database using a supported version by `KubeDB` operator. Then we are going to apply `RabbitMQOpsRequest` to reconfigure its configuration. + +### Prepare RabbitMQ Shard + +Now, we are going to deploy a `RabbitMQ` sharded database with version `4.4.26`. 
+ +### Deploy RabbitMQ database + +At first, we will create `mongod.conf` file containing required configuration settings. + +```ini +$ cat mongod.conf +net: + maxIncomingConnections: 10000 +``` +Here, `maxIncomingConnections` is set to `10000`, whereas the default value is `65536`. + +Now, we will create a secret with this configuration file. + +```bash +$ kubectl create secret generic -n demo mg-custom-config --from-file=./mongod.conf +secret/mg-custom-config created +``` + +In this section, we are going to create a RabbitMQ object specifying `spec.configSecret` field to apply this custom configuration. Below is the YAML of the `RabbitMQ` CR that we are going to create, + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: RabbitMQ +metadata: + name: mg-sharding + namespace: demo +spec: + version: 4.4.26 + shardTopology: + configServer: + replicas: 3 + configSecret: + name: mg-custom-config + storage: + resources: + requests: + storage: 1Gi + storageClassName: standard + mongos: + replicas: 2 + configSecret: + name: mg-custom-config + shard: + replicas: 3 + shards: 2 + configSecret: + name: mg-custom-config + storage: + resources: + requests: + storage: 1Gi + storageClassName: standard +``` + +Let's create the `RabbitMQ` CR we have shown above, + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/RabbitMQ/reconfigure/mg-shard-config.yaml +RabbitMQ.kubedb.com/mg-sharding created +``` + +Now, wait until `mg-sharding` has status `Ready`. i.e, + +```bash +$ kubectl get mg -n demo +NAME VERSION STATUS AGE +mg-sharding 4.4.26 Ready 3m23s +``` + +Now, we will check if the database has started with the custom configuration we have provided. + +First we need to get the username and password to connect to a RabbitMQ instance, +```bash +$ kubectl get secrets -n demo mg-sharding-auth -o jsonpath='{.data.\username}' | base64 -d +root + +$ kubectl get secrets -n demo mg-sharding-auth -o jsonpath='{.data.\password}' | base64 -d +Dv8F55zVNiEkhHM6 +``` + +Now let's connect to a RabbitMQ instance from each type of nodes and run a RabbitMQ internal command to check the configuration we have provided. + +```bash +$ kubectl exec -n demo mg-sharding-mongos-0 -- mongo admin -u root -p Dv8F55zVNiEkhHM6 --eval "db._adminCommand( {getCmdLineOpts: 1}).parsed.net" --quiet +{ + "bindIp" : "*", + "ipv6" : true, + "maxIncomingConnections" : 10000, + "port" : 27017, + "tls" : { + "mode" : "disabled" + } +} + +$ kubectl exec -n demo mg-sharding-configsvr-0 -- mongo admin -u root -p Dv8F55zVNiEkhHM6 --eval "db._adminCommand( {getCmdLineOpts: 1}).parsed.net" --quiet +{ + "bindIp" : "*", + "ipv6" : true, + "maxIncomingConnections" : 10000, + "port" : 27017, + "tls" : { + "mode" : "disabled" + } +} + +$ kubectl exec -n demo mg-sharding-shard0-0 -- mongo admin -u root -p Dv8F55zVNiEkhHM6 --eval "db._adminCommand( {getCmdLineOpts: 1}).parsed.net" --quiet +{ + "bindIp" : "*", + "ipv6" : true, + "maxIncomingConnections" : 10000, + "port" : 27017, + "tls" : { + "mode" : "disabled" + } +} +``` + +As we can see from the configuration of ready RabbitMQ, the value of `maxIncomingConnections` has been set to `10000` in all nodes. + +### Reconfigure using new secret + +Now we will reconfigure this database to set `maxIncomingConnections` to `20000`. + +Now, we will edit the `mongod.conf` file containing required configuration settings. + +```ini +$ cat mongod.conf +net: + maxIncomingConnections: 20000 +``` + +Then, we will create a new secret with this configuration file. 
+
+```bash
+$ kubectl create secret generic -n demo new-custom-config --from-file=./mongod.conf
+secret/new-custom-config created
+```
+
+#### Create RabbitMQOpsRequest
+
+Now, we will use this secret to replace the previous secret using a `RabbitMQOpsRequest` CR. The `RabbitMQOpsRequest` yaml is given below,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: RabbitMQOpsRequest
+metadata:
+  name: mops-reconfigure-shard
+  namespace: demo
+spec:
+  type: Reconfigure
+  databaseRef:
+    name: mg-sharding
+  configuration:
+    shard:
+      configSecret:
+        name: new-custom-config
+    configServer:
+      configSecret:
+        name: new-custom-config
+    mongos:
+      configSecret:
+        name: new-custom-config
+  readinessCriteria:
+    oplogMaxLagSeconds: 20
+    objectsCountDiffPercentage: 10
+  timeout: 5m
+  apply: IfReady
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are reconfiguring the `mg-sharding` database.
+- `spec.type` specifies that we are performing `Reconfigure` on our database.
+- `spec.configuration.shard.configSecret.name` specifies the name of the new secret for shard nodes.
+- `spec.configuration.configServer.configSecret.name` specifies the name of the new secret for configServer nodes.
+- `spec.configuration.mongos.configSecret.name` specifies the name of the new secret for mongos nodes.
+- `spec.configuration.arbiter.configSecret.name` could also be specified with a config secret.
+- Have a look [here](/docs/guides/RabbitMQ/concepts/opsrequest.md#specreadinesscriteria) on the respective sections to understand the `readinessCriteria`, `timeout` & `apply` fields.
+
+> **Note:** If you don't want to reconfigure all the components together, you can specify only the components (shard, configServer and mongos) that you want to reconfigure.
+
+Let's create the `RabbitMQOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/RabbitMQ/reconfigure/mops-reconfigure-shard.yaml
+RabbitMQopsrequest.ops.kubedb.com/mops-reconfigure-shard created
+```
+
+#### Verify the new configuration is working
+
+If everything goes well, `KubeDB` Ops-manager operator will update the `configSecret` of the `RabbitMQ` object.
+
+Let's wait for `RabbitMQOpsRequest` to be `Successful`. Run the following command to watch `RabbitMQOpsRequest` CR,
+
+```bash
+$ watch kubectl get RabbitMQopsrequest -n demo
+Every 2.0s: kubectl get RabbitMQopsrequest -n demo
+NAME                     TYPE          STATUS       AGE
+mops-reconfigure-shard   Reconfigure   Successful   3m8s
+```
+
+We can see from the above output that the `RabbitMQOpsRequest` has succeeded. If we describe the `RabbitMQOpsRequest` we will get an overview of the steps that were followed to reconfigure the database.
+
+```bash
+$ kubectl describe RabbitMQopsrequest -n demo mops-reconfigure-shard
+
+```
+
+Now let's connect to a RabbitMQ instance from each type of node and run a RabbitMQ internal command to check the new configuration we have provided.
+
+```bash
+$ kubectl exec -n demo mg-sharding-mongos-0 -- mongo admin -u root -p Dv8F55zVNiEkhHM6 --eval "db._adminCommand( {getCmdLineOpts: 1}).parsed.net" --quiet
+	{
+		"bindIp" : "0.0.0.0",
+		"maxIncomingConnections" : 20000,
+		"port" : 27017,
+		"ssl" : {
+			"mode" : "disabled"
+		}
+	}
+
+$ kubectl exec -n demo mg-sharding-configsvr-0 -- mongo admin -u root -p Dv8F55zVNiEkhHM6 --eval "db._adminCommand( {getCmdLineOpts: 1}).parsed.net" --quiet
+	{
+		"bindIp" : "0.0.0.0",
+		"maxIncomingConnections" : 20000,
+		"port" : 27017,
+		"ssl" : {
+			"mode" : "disabled"
+		}
+	}
+
+$ kubectl exec -n demo mg-sharding-shard0-0 -- mongo admin -u root -p Dv8F55zVNiEkhHM6 --eval "db._adminCommand( {getCmdLineOpts: 1}).parsed.net" --quiet
+	{
+		"bindIp" : "0.0.0.0",
+		"maxIncomingConnections" : 20000,
+		"port" : 27017,
+		"ssl" : {
+			"mode" : "disabled"
+		}
+	}
+```
+
+As we can see from the configuration of ready RabbitMQ, the value of `maxIncomingConnections` has been changed from `10000` to `20000` in all types of nodes. So the reconfiguration of the database is successful.
+
+### Reconfigure using apply config
+
+Now we will reconfigure this database again to set `maxIncomingConnections` to `30000`. This time we won't use a new secret. We will use the `applyConfig` field of the `RabbitMQOpsRequest`. This will merge the new config in the existing secret.
+
+#### Create RabbitMQOpsRequest
+
+Now, we will use the new configuration in the `applyConfig` field in the `RabbitMQOpsRequest` CR. The `RabbitMQOpsRequest` yaml is given below,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: RabbitMQOpsRequest
+metadata:
+  name: mops-reconfigure-apply-shard
+  namespace: demo
+spec:
+  type: Reconfigure
+  databaseRef:
+    name: mg-sharding
+  configuration:
+    shard:
+      applyConfig:
+        mongod.conf: |-
+          net:
+            maxIncomingConnections: 30000
+    configServer:
+      applyConfig:
+        mongod.conf: |-
+          net:
+            maxIncomingConnections: 30000
+    mongos:
+      applyConfig:
+        mongod.conf: |-
+          net:
+            maxIncomingConnections: 30000
+  readinessCriteria:
+    oplogMaxLagSeconds: 20
+    objectsCountDiffPercentage: 10
+  timeout: 5m
+  apply: IfReady
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are reconfiguring the `mg-sharding` database.
+- `spec.type` specifies that we are performing `Reconfigure` on our database.
+- `spec.configuration.shard.applyConfig` specifies the new configuration that will be merged in the existing secret for shard nodes.
+- `spec.configuration.configServer.applyConfig` specifies the new configuration that will be merged in the existing secret for configServer nodes.
+- `spec.configuration.mongos.applyConfig` specifies the new configuration that will be merged in the existing secret for mongos nodes.
+- `spec.configuration.arbiter.configSecret.name` could also be specified with a config secret.
+- Have a look [here](/docs/guides/RabbitMQ/concepts/opsrequest.md#specreadinesscriteria) on the respective sections to understand the `readinessCriteria`, `timeout` & `apply` fields.
+
+> **Note:** If you don't want to reconfigure all the components together, you can specify only the components (shard, configServer and mongos) that you want to reconfigure.
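+
+For example, a request that merges new configuration into the `mongos` nodes only might carry a `configuration` section like this (an illustrative fragment, not applied in this tutorial):
+
+```yaml
+spec:
+  type: Reconfigure
+  databaseRef:
+    name: mg-sharding
+  configuration:
+    mongos:
+      applyConfig:
+        mongod.conf: |-
+          net:
+            maxIncomingConnections: 30000
+```
+
+This tutorial proceeds with the full three-component request shown above.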
+ +Let's create the `RabbitMQOpsRequest` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/RabbitMQ/reconfigure/mops-reconfigure-apply-shard.yaml +RabbitMQopsrequest.ops.kubedb.com/mops-reconfigure-apply-shard created +``` + +#### Verify the new configuration is working + +If everything goes well, `KubeDB` Ops-manager operator will merge this new config with the existing configuration. + +Let's wait for `RabbitMQOpsRequest` to be `Successful`. Run the following command to watch `RabbitMQOpsRequest` CR, + +```bash +$ watch kubectl get RabbitMQopsrequest -n demo +Every 2.0s: kubectl get RabbitMQopsrequest -n demo +NAME TYPE STATUS AGE +mops-reconfigure-apply-shard Reconfigure Successful 3m24s +``` + +We can see from the above output that the `RabbitMQOpsRequest` has succeeded. If we describe the `RabbitMQOpsRequest` we will get an overview of the steps that were followed to reconfigure the database. + +```bash +$ kubectl describe RabbitMQopsrequest -n demo mops-reconfigure-apply-shard +Name: mops-reconfigure-apply-shard +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: RabbitMQOpsRequest +Metadata: + Creation Timestamp: 2021-03-02T13:08:25Z + Generation: 1 + Managed Fields: + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:annotations: + .: + f:kubectl.kubernetes.io/last-applied-configuration: + f:spec: + .: + f:apply: + f:configuration: + .: + f:configServer: + .: + f:configSecret: + .: + f:name: + f:mongos: + .: + f:configSecret: + .: + f:name: + f:shard: + .: + f:configSecret: + .: + f:name: + f:databaseRef: + .: + f:name: + f:readinessCriteria: + .: + f:objectsCountDiffPercentage: + f:oplogMaxLagSeconds: + f:timeout: + f:type: + Manager: kubectl-client-side-apply + Operation: Update + Time: 2021-03-02T13:08:25Z + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:spec: + f:configuration: + f:configServer: + f:podTemplate: + .: + f:controller: + f:metadata: + f:spec: + .: + f:resources: + f:mongos: + f:podTemplate: + .: + f:controller: + f:metadata: + f:spec: + .: + f:resources: + f:shard: + f:podTemplate: + .: + f:controller: + f:metadata: + f:spec: + .: + f:resources: + f:status: + .: + f:conditions: + f:observedGeneration: + f:phase: + Manager: kubedb-enterprise + Operation: Update + Time: 2021-03-02T13:08:25Z + Resource Version: 103635 + Self Link: /apis/ops.kubedb.com/v1alpha1/namespaces/demo/RabbitMQopsrequests/mops-reconfigure-apply-shard + UID: ab454bcb-164c-4fa2-9eaa-dd47c60fe874 +Spec: + Apply: IfReady + Configuration: + Config Server: + Apply Config: net: + maxIncomingConnections: 30000 + + Mongos: + Apply Config: net: + maxIncomingConnections: 30000 + + Shard: + Apply Config: net: + maxIncomingConnections: 30000 + + Database Ref: + Name: mg-sharding + Readiness Criteria: + Objects Count Diff Percentage: 10 + Oplog Max Lag Seconds: 20 + Timeout: 5m + Type: Reconfigure +Status: + Conditions: + Last Transition Time: 2021-03-02T13:08:25Z + Message: RabbitMQ ops request is reconfiguring database + Observed Generation: 1 + Reason: Reconfigure + Status: True + Type: Reconfigure + Last Transition Time: 2021-03-02T13:10:10Z + Message: Successfully Reconfigured RabbitMQ + Observed Generation: 1 + Reason: ReconfigureConfigServer + Status: True + Type: ReconfigureConfigServer + Last Transition Time: 2021-03-02T13:13:15Z + Message: Successfully Reconfigured RabbitMQ + Observed Generation: 1 + Reason: 
ReconfigureShard
+    Status:                True
+    Type:                  ReconfigureShard
+    Last Transition Time:  2021-03-02T13:14:10Z
+    Message:               Successfully Reconfigured RabbitMQ
+    Observed Generation:   1
+    Reason:                ReconfigureMongos
+    Status:                True
+    Type:                  ReconfigureMongos
+    Last Transition Time:  2021-03-02T13:14:10Z
+    Message:               Successfully completed the modification process.
+    Observed Generation:   1
+    Reason:                Successful
+    Status:                True
+    Type:                  Successful
+  Observed Generation:     1
+  Phase:                   Successful
+Events:
+  Type    Reason                   Age    From                         Message
+  ----    ------                   ----   ----                         -------
+  Normal  PauseDatabase            13m    KubeDB Ops-manager operator  Pausing RabbitMQ demo/mg-sharding
+  Normal  PauseDatabase            13m    KubeDB Ops-manager operator  Successfully paused RabbitMQ demo/mg-sharding
+  Normal  ReconfigureConfigServer  12m    KubeDB Ops-manager operator  Successfully Reconfigured RabbitMQ
+  Normal  ReconfigureShard         9m7s   KubeDB Ops-manager operator  Successfully Reconfigured RabbitMQ
+  Normal  ReconfigureMongos        8m12s  KubeDB Ops-manager operator  Successfully Reconfigured RabbitMQ
+  Normal  ResumeDatabase           8m12s  KubeDB Ops-manager operator  Resuming RabbitMQ demo/mg-sharding
+  Normal  ResumeDatabase           8m12s  KubeDB Ops-manager operator  Successfully resumed RabbitMQ demo/mg-sharding
+  Normal  Successful               8m12s  KubeDB Ops-manager operator  Successfully Reconfigured Database
+```
+
+Now let's connect to a RabbitMQ instance from each type of node and run a RabbitMQ internal command to check the new configuration we have provided.
+
+```bash
+$ kubectl exec -n demo mg-sharding-mongos-0 -- mongo admin -u root -p Dv8F55zVNiEkhHM6 --eval "db._adminCommand( {getCmdLineOpts: 1}).parsed.net" --quiet
+{
+    "bindIp" : "*",
+    "ipv6" : true,
+    "maxIncomingConnections" : 30000,
+    "port" : 27017,
+    "tls" : {
+        "mode" : "disabled"
+    }
+}
+
+$ kubectl exec -n demo mg-sharding-configsvr-0 -- mongo admin -u root -p Dv8F55zVNiEkhHM6 --eval "db._adminCommand( {getCmdLineOpts: 1}).parsed.net" --quiet
+{
+    "bindIp" : "*",
+    "ipv6" : true,
+    "maxIncomingConnections" : 30000,
+    "port" : 27017,
+    "tls" : {
+        "mode" : "disabled"
+    }
+}
+
+$ kubectl exec -n demo mg-sharding-shard0-0 -- mongo admin -u root -p Dv8F55zVNiEkhHM6 --eval "db._adminCommand( {getCmdLineOpts: 1}).parsed.net" --quiet
+{
+    "bindIp" : "*",
+    "ipv6" : true,
+    "maxIncomingConnections" : 30000,
+    "port" : 27017,
+    "tls" : {
+        "mode" : "disabled"
+    }
+}
+```
+
+As we can see from the configuration of the running database, the value of `maxIncomingConnections` has been changed from `20000` to `30000` in all nodes. So the reconfiguration of the database using the `applyConfig` field is successful.
+
+## Cleaning Up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete mg -n demo mg-sharding
+kubectl delete RabbitMQopsrequest -n demo mops-reconfigure-shard mops-reconfigure-apply-shard
+```
\ No newline at end of file
diff --git a/docs/guides/rabbitmq/reconfigure/standalone.md b/docs/guides/rabbitmq/reconfigure/standalone.md
new file mode 100644
index 0000000000..14c9291a9a
--- /dev/null
+++ b/docs/guides/rabbitmq/reconfigure/standalone.md
@@ -0,0 +1,590 @@
+---
+title: Reconfigure Standalone RabbitMQ Database
+menu:
+  docs_{{ .version }}:
+    identifier: mg-reconfigure-standalone
+    name: Standalone
+    parent: mg-reconfigure
+    weight: 20
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+ +# Reconfigure RabbitMQ Standalone Database + +This guide will show you how to use `KubeDB` Ops-manager operator to reconfigure a RabbitMQ standalone database. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. + +- Install `KubeDB` Provisioner and Ops-manager operator in your cluster following the steps [here](/docs/setup/README.md). + +- You should be familiar with the following `KubeDB` concepts: + - [RabbitMQ](/docs/guides/RabbitMQ/concepts/RabbitMQ.md) + - [RabbitMQOpsRequest](/docs/guides/RabbitMQ/concepts/opsrequest.md) + - [Reconfigure Overview](/docs/guides/RabbitMQ/reconfigure/overview.md) + +To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial. + +```bash +$ kubectl create ns demo +namespace/demo created +``` + +> **Note:** YAML files used in this tutorial are stored in [docs/examples/RabbitMQ](/docs/examples/RabbitMQ) directory of [kubedb/docs](https://github.com/kubedb/docs) repository. + +Now, we are going to deploy a `RabbitMQ` standalone using a supported version by `KubeDB` operator. Then we are going to apply `RabbitMQOpsRequest` to reconfigure its configuration. + +### Prepare RabbitMQ Standalone Database + +Now, we are going to deploy a `RabbitMQ` standalone database with version `4.4.26`. + +### Deploy RabbitMQ standalone + +At first, we will create `mongod.conf` file containing required configuration settings. + +```ini +$ cat mongod.conf +net: + maxIncomingConnections: 10000 +``` +Here, `maxIncomingConnections` is set to `10000`, whereas the default value is `65536`. + +Now, we will create a secret with this configuration file. + +```bash +$ kubectl create secret generic -n demo mg-custom-config --from-file=./mongod.conf +secret/mg-custom-config created +``` + +In this section, we are going to create a RabbitMQ object specifying `spec.configSecret` field to apply this custom configuration. Below is the YAML of the `RabbitMQ` CR that we are going to create, + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: RabbitMQ +metadata: + name: mg-standalone + namespace: demo +spec: + version: "4.4.26" + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + configSecret: + name: mg-custom-config +``` + +Let's create the `RabbitMQ` CR we have shown above, + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/RabbitMQ/reconfigure/mg-standalone-config.yaml +RabbitMQ.kubedb.com/mg-standalone created +``` + +Now, wait until `mg-standalone` has status `Ready`. i.e, + +```bash +$ kubectl get mg -n demo +NAME VERSION STATUS AGE +mg-standalone 4.4.26 Ready 23s +``` + +Now, we will check if the database has started with the custom configuration we have provided. + +First we need to get the username and password to connect to a RabbitMQ instance, +```bash +$ kubectl get secrets -n demo mg-standalone-auth -o jsonpath='{.data.\username}' | base64 -d +root + +$ kubectl get secrets -n demo mg-standalone-auth -o jsonpath='{.data.\password}' | base64 -d +m6lXjZugrC4VEpB8 +``` + +Now let's connect to a RabbitMQ instance and run a RabbitMQ internal command to check the configuration we have provided. 
+ +```bash +$ kubectl exec -n demo mg-standalone-0 -- mongo admin -u root -p m6lXjZugrC4VEpB8 --eval "db._adminCommand( {getCmdLineOpts: 1})" --quiet +{ + "argv" : [ + "mongod", + "--dbpath=/data/db", + "--auth", + "--ipv6", + "--bind_ip_all", + "--port=27017", + "--tlsMode=disabled", + "--config=/data/configdb/mongod.conf" + ], + "parsed" : { + "config" : "/data/configdb/mongod.conf", + "net" : { + "bindIp" : "*", + "ipv6" : true, + "maxIncomingConnections" : 10000, + "port" : 27017, + "tls" : { + "mode" : "disabled" + } + }, + "security" : { + "authorization" : "enabled" + }, + "storage" : { + "dbPath" : "/data/db" + } + }, + "ok" : 1 +} +``` + +As we can see from the configuration of running RabbitMQ, the value of `maxIncomingConnections` has been set to `10000`. + +### Reconfigure using new secret + +Now we will reconfigure this database to set `maxIncomingConnections` to `20000`. + +Now, we will edit the `mongod.conf` file containing required configuration settings. + +```ini +$ cat mongod.conf +net: + maxIncomingConnections: 20000 +``` + +Then, we will create a new secret with this configuration file. + +```bash +$ kubectl create secret generic -n demo new-custom-config --from-file=./mongod.conf +secret/new-custom-config created +``` + +#### Create RabbitMQOpsRequest + +Now, we will use this secret to replace the previous secret using a `RabbitMQOpsRequest` CR. The `RabbitMQOpsRequest` yaml is given below, + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: RabbitMQOpsRequest +metadata: + name: mops-reconfigure-standalone + namespace: demo +spec: + type: Reconfigure + databaseRef: + name: mg-standalone + configuration: + standalone: + configSecret: + name: new-custom-config + readinessCriteria: + oplogMaxLagSeconds: 20 + objectsCountDiffPercentage: 10 + timeout: 5m + apply: IfReady +``` + +Here, + +- `spec.databaseRef.name` specifies that we are reconfiguring `mops-reconfigure-standalone` database. +- `spec.type` specifies that we are performing `Reconfigure` on our database. +- `spec.configuration.standalone.configSecret.name` specifies the name of the new secret. +- Have a look [here](/docs/guides/RabbitMQ/concepts/opsrequest.md#specreadinesscriteria) on the respective sections to understand the `readinessCriteria`, `timeout` & `apply` fields. + +Let's create the `RabbitMQOpsRequest` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/RabbitMQ/reconfigure/mops-reconfigure-standalone.yaml +RabbitMQopsrequest.ops.kubedb.com/mops-reconfigure-standalone created +``` + +#### Verify the new configuration is working + +If everything goes well, `KubeDB` Ops-manager operator will update the `configSecret` of `RabbitMQ` object. + +Let's wait for `RabbitMQOpsRequest` to be `Successful`. Run the following command to watch `RabbitMQOpsRequest` CR, + +```bash +$ watch kubectl get RabbitMQopsrequest -n demo +Every 2.0s: kubectl get RabbitMQopsrequest -n demo +NAME TYPE STATUS AGE +mops-reconfigure-standalone Reconfigure Successful 10m +``` + +We can see from the above output that the `RabbitMQOpsRequest` has succeeded. If we describe the `RabbitMQOpsRequest` we will get an overview of the steps that were followed to reconfigure the database. 
+ +```bash +$ kubectl describe RabbitMQopsrequest -n demo mops-reconfigure-standalone +Name: mops-reconfigure-standalone +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: RabbitMQOpsRequest +Metadata: + Creation Timestamp: 2021-03-02T15:04:45Z + Generation: 1 + Managed Fields: + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:annotations: + .: + f:kubectl.kubernetes.io/last-applied-configuration: + f:spec: + .: + f:apply: + f:configuration: + .: + f:standalone: + .: + f:configSecret: + .: + f:name: + f:databaseRef: + .: + f:name: + f:readinessCriteria: + .: + f:objectsCountDiffPercentage: + f:oplogMaxLagSeconds: + f:timeout: + f:type: + Manager: kubectl-client-side-apply + Operation: Update + Time: 2021-03-02T15:04:45Z + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:spec: + f:configuration: + f:standalone: + f:podTemplate: + .: + f:controller: + f:metadata: + f:spec: + .: + f:resources: + f:status: + .: + f:conditions: + f:observedGeneration: + f:phase: + Manager: kubedb-enterprise + Operation: Update + Time: 2021-03-02T15:04:45Z + Resource Version: 125826 + Self Link: /apis/ops.kubedb.com/v1alpha1/namespaces/demo/RabbitMQopsrequests/mops-reconfigure-standalone + UID: f63bb606-9df5-4516-9901-97dfe5b46b15 +Spec: + Apply: IfReady + Configuration: + Standalone: + Config Secret: + Name: new-custom-config + Database Ref: + Name: mg-standalone + Readiness Criteria: + Objects Count Diff Percentage: 10 + Oplog Max Lag Seconds: 20 + Timeout: 5m + Type: Reconfigure +Status: + Conditions: + Last Transition Time: 2021-03-02T15:04:45Z + Message: RabbitMQ ops request is reconfiguring database + Observed Generation: 1 + Reason: Reconfigure + Status: True + Type: Reconfigure + Last Transition Time: 2021-03-02T15:05:10Z + Message: Successfully Reconfigured RabbitMQ + Observed Generation: 1 + Reason: ReconfigureStandalone + Status: True + Type: ReconfigureStandalone + Last Transition Time: 2021-03-02T15:05:10Z + Message: Successfully completed the modification process. + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal PauseDatabase 60s KubeDB Ops-manager operator Pausing RabbitMQ demo/mg-standalone + Normal PauseDatabase 60s KubeDB Ops-manager operator Successfully paused RabbitMQ demo/mg-standalone + Normal ReconfigureStandalone 35s KubeDB Ops-manager operator Successfully Reconfigured RabbitMQ + Normal ResumeDatabase 35s KubeDB Ops-manager operator Resuming RabbitMQ demo/mg-standalone + Normal ResumeDatabase 35s KubeDB Ops-manager operator Successfully resumed RabbitMQ demo/mg-standalone + Normal Successful 35s KubeDB Ops-manager operator Successfully Reconfigured Database +``` + +Now let's connect to a RabbitMQ instance and run a RabbitMQ internal command to check the new configuration we have provided. 
+ +```bash +$ kubectl exec -n demo mg-standalone-0 -- mongo admin -u root -p m6lXjZugrC4VEpB8 --eval "db._adminCommand( {getCmdLineOpts: 1})" --quiet +{ + "argv" : [ + "mongod", + "--dbpath=/data/db", + "--auth", + "--ipv6", + "--bind_ip_all", + "--port=27017", + "--tlsMode=disabled", + "--config=/data/configdb/mongod.conf" + ], + "parsed" : { + "config" : "/data/configdb/mongod.conf", + "net" : { + "bindIp" : "*", + "ipv6" : true, + "maxIncomingConnections" : 20000, + "port" : 27017, + "tls" : { + "mode" : "disabled" + } + }, + "security" : { + "authorization" : "enabled" + }, + "storage" : { + "dbPath" : "/data/db" + } + }, + "ok" : 1 +} +``` + +As we can see from the configuration of running RabbitMQ, the value of `maxIncomingConnections` has been changed from `10000` to `20000`. So the reconfiguration of the database is successful. + + +### Reconfigure using apply config + +Now we will reconfigure this database again to set `maxIncomingConnections` to `30000`. This time we won't use a new secret. We will use the `applyConfig` field of the `RabbitMQOpsRequest`. This will merge the new config in the existing secret. + +#### Create RabbitMQOpsRequest + +Now, we will use the new configuration in the `data` field in the `RabbitMQOpsRequest` CR. The `RabbitMQOpsRequest` yaml is given below, + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: RabbitMQOpsRequest +metadata: + name: mops-reconfigure-apply-standalone + namespace: demo +spec: + type: Reconfigure + databaseRef: + name: mg-standalone + configuration: + standalone: + applyConfig: + mongod.conf: |- + net: + maxIncomingConnections: 30000 + readinessCriteria: + oplogMaxLagSeconds: 20 + objectsCountDiffPercentage: 10 + timeout: 5m + apply: IfReady +``` + +Here, + +- `spec.databaseRef.name` specifies that we are reconfiguring `mops-reconfigure-apply-standalone` database. +- `spec.type` specifies that we are performing `Reconfigure` on our database. +- `spec.configuration.standalone.applyConfig` specifies the new configuration that will be merged in the existing secret. + +Let's create the `RabbitMQOpsRequest` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/RabbitMQ/reconfigure/mops-reconfigure-apply-standalone.yaml +RabbitMQopsrequest.ops.kubedb.com/mops-reconfigure-apply-standalone created +``` + +#### Verify the new configuration is working + +If everything goes well, `KubeDB` Ops-manager operator will merge this new config with the existing configuration. + +Let's wait for `RabbitMQOpsRequest` to be `Successful`. Run the following command to watch `RabbitMQOpsRequest` CR, + +```bash +$ watch kubectl get RabbitMQopsrequest -n demo +Every 2.0s: kubectl get RabbitMQopsrequest -n demo +NAME TYPE STATUS AGE +mops-reconfigure-apply-standalone Reconfigure Successful 38s +``` + +We can see from the above output that the `RabbitMQOpsRequest` has succeeded. If we describe the `RabbitMQOpsRequest` we will get an overview of the steps that were followed to reconfigure the database. 
+ +```bash +$ kubectl describe RabbitMQopsrequest -n demo mops-reconfigure-apply-standalone +Name: mops-reconfigure-apply-standalone +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: RabbitMQOpsRequest +Metadata: + Creation Timestamp: 2021-03-02T15:09:12Z + Generation: 1 + Managed Fields: + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:annotations: + .: + f:kubectl.kubernetes.io/last-applied-configuration: + f:spec: + .: + f:apply: + f:configuration: + .: + f:standalone: + .: + f:applyConfig: + f:databaseRef: + .: + f:name: + f:readinessCriteria: + .: + f:objectsCountDiffPercentage: + f:oplogMaxLagSeconds: + f:timeout: + f:type: + Manager: kubectl-client-side-apply + Operation: Update + Time: 2021-03-02T15:09:12Z + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:spec: + f:configuration: + f:standalone: + f:podTemplate: + .: + f:controller: + f:metadata: + f:spec: + .: + f:resources: + f:status: + .: + f:conditions: + f:observedGeneration: + f:phase: + Manager: kubedb-enterprise + Operation: Update + Time: 2021-03-02T15:09:13Z + Resource Version: 126782 + Self Link: /apis/ops.kubedb.com/v1alpha1/namespaces/demo/RabbitMQopsrequests/mops-reconfigure-apply-standalone + UID: 33eea32f-e2af-4e36-b612-c528549e3d65 +Spec: + Apply: IfReady + Configuration: + Standalone: + Apply Config: net: + maxIncomingConnections: 30000 + + Database Ref: + Name: mg-standalone + Readiness Criteria: + Objects Count Diff Percentage: 10 + Oplog Max Lag Seconds: 20 + Timeout: 5m + Type: Reconfigure +Status: + Conditions: + Last Transition Time: 2021-03-02T15:09:13Z + Message: RabbitMQ ops request is reconfiguring database + Observed Generation: 1 + Reason: Reconfigure + Status: True + Type: Reconfigure + Last Transition Time: 2021-03-02T15:09:38Z + Message: Successfully Reconfigured RabbitMQ + Observed Generation: 1 + Reason: ReconfigureStandalone + Status: True + Type: ReconfigureStandalone + Last Transition Time: 2021-03-02T15:09:38Z + Message: Successfully completed the modification process. + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal PauseDatabase 118s KubeDB Ops-manager operator Pausing RabbitMQ demo/mg-standalone + Normal PauseDatabase 118s KubeDB Ops-manager operator Successfully paused RabbitMQ demo/mg-standalone + Normal ReconfigureStandalone 93s KubeDB Ops-manager operator Successfully Reconfigured RabbitMQ + Normal ResumeDatabase 93s KubeDB Ops-manager operator Resuming RabbitMQ demo/mg-standalone + Normal ResumeDatabase 93s KubeDB Ops-manager operator Successfully resumed RabbitMQ demo/mg-standalone + Normal Successful 93s KubeDB Ops-manager operator Successfully Reconfigured Database +``` + +Now let's connect to a RabbitMQ instance and run a RabbitMQ internal command to check the new configuration we have provided. 
+
+```bash
+$ kubectl exec -n demo mg-standalone-0 -- mongo admin -u root -p m6lXjZugrC4VEpB8 --eval "db._adminCommand( {getCmdLineOpts: 1})" --quiet
+{
+    "argv" : [
+        "mongod",
+        "--dbpath=/data/db",
+        "--auth",
+        "--ipv6",
+        "--bind_ip_all",
+        "--port=27017",
+        "--tlsMode=disabled",
+        "--config=/data/configdb/mongod.conf"
+    ],
+    "parsed" : {
+        "config" : "/data/configdb/mongod.conf",
+        "net" : {
+            "bindIp" : "*",
+            "ipv6" : true,
+            "maxIncomingConnections" : 30000,
+            "port" : 27017,
+            "tls" : {
+                "mode" : "disabled"
+            }
+        },
+        "security" : {
+            "authorization" : "enabled"
+        },
+        "storage" : {
+            "dbPath" : "/data/db"
+        }
+    },
+    "ok" : 1
+}
+```
+
+As we can see from the configuration of the running database, the value of `maxIncomingConnections` has been changed from `20000` to `30000`. So the reconfiguration of the database using the `applyConfig` field is successful.
+
+## Cleaning Up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete mg -n demo mg-standalone
+kubectl delete RabbitMQopsrequest -n demo mops-reconfigure-standalone mops-reconfigure-apply-standalone
+```
\ No newline at end of file
diff --git a/docs/guides/rabbitmq/reprovision/_index.md b/docs/guides/rabbitmq/reprovision/_index.md
new file mode 100644
index 0000000000..a04125c1b4
--- /dev/null
+++ b/docs/guides/rabbitmq/reprovision/_index.md
@@ -0,0 +1,10 @@
+---
+title: Reprovision RabbitMQ
+menu:
+  docs_{{ .version }}:
+    identifier: mg-reprovision
+    name: Reprovision
+    parent: mg-RabbitMQ-guides
+    weight: 46
+menu_name: docs_{{ .version }}
+---
diff --git a/docs/guides/rabbitmq/reprovision/reprovision.md b/docs/guides/rabbitmq/reprovision/reprovision.md
new file mode 100644
index 0000000000..6f8881bff8
--- /dev/null
+++ b/docs/guides/rabbitmq/reprovision/reprovision.md
@@ -0,0 +1,200 @@
+---
+title: Reprovision RabbitMQ
+menu:
+  docs_{{ .version }}:
+    identifier: mg-reprovision-details
+    name: Reprovision RabbitMQ
+    parent: mg-reprovision
+    weight: 10
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Reprovision RabbitMQ
+
+KubeDB supports reprovisioning the RabbitMQ database via a RabbitMQOpsRequest. Reprovisioning is useful if, for some reason, you want to deploy a fresh RabbitMQ database with the same specifications. This tutorial will show you how to do that.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/setup/README.md).
+
+- To keep things isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+> Note: YAML files used in this tutorial are stored in the [docs/examples/RabbitMQ](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/RabbitMQ) folder of the GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Deploy RabbitMQ
+
+In this section, we are going to deploy a RabbitMQ database using KubeDB.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: RabbitMQ
+metadata:
+  name: mongo
+  namespace: demo
+spec:
+  version: "4.4.26"
+  replicaSet:
+    name: "replicaset"
+  podTemplate:
+    spec:
+      resources:
+        requests:
+          cpu: "300m"
+          memory: "300Mi"
+  replicas: 2
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
+  arbiter: {}
+  hidden:
+    replicas: 2
+    storage:
+      storageClassName: "standard"
+      accessModes:
+        - ReadWriteOnce
+      resources:
+        requests:
+          storage: 2Gi
+```
+
+- `spec.replicaSet` represents the configuration for the replicaset.
+  - `name` denotes the name of the RabbitMQ replicaset.
+- `spec.replicas` denotes the number of general members in the `replicaset` RabbitMQ replicaset.
+- `spec.podTemplate` denotes the specifications of the general replicaset members.
+- `spec.storage` holds the persistent volume specifications. This storage spec will be passed to the StatefulSet created by the KubeDB operator to run database pods. So, each member will have a pod with this storage configuration.
+- `spec.arbiter` denotes the arbiter-node spec of the deployed RabbitMQ CRD.
+- `spec.hidden` denotes the hidden-node spec of the deployed RabbitMQ CRD.
+
+Let's create the `RabbitMQ` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/RabbitMQ/reprovision/mongo.yaml
+RabbitMQ.kubedb.com/mongo created
+```
+
+## Apply Reprovision opsRequest
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: RabbitMQOpsRequest
+metadata:
+  name: repro
+  namespace: demo
+spec:
+  type: Reprovision
+  databaseRef:
+    name: mongo
+  apply: Always
+```
+
+- `spec.type` specifies the type of the ops request.
+- `spec.databaseRef` holds the name of the RabbitMQ database. The db should be available in the same namespace as the opsRequest.
+- `spec.apply` is set to `Always` to denote that we want reprovisioning even if the db is not Ready.
+
+> Note: The method of reprovisioning the standalone & sharded db is exactly the same as above. All you need is to specify the corresponding RabbitMQ name in the `spec.databaseRef.name` section.
+
+Let's create the `RabbitMQOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/RabbitMQ/reprovision/ops.yaml
+RabbitMQopsrequest.ops.kubedb.com/repro created
+```
+
+Now the Ops-manager operator will:
+1) Pause the DB
+2) Delete all statefulsets
+3) Remove the `Provisioned` condition from the db
+4) Reconcile the db for start
+5) Wait for the DB to be Ready.
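+Since reprovisioning deletes and recreates the StatefulSets (steps 2 and 4 above), you can observe the process from a separate terminal while the ops request runs. A minimal sketch, assuming the `demo` namespace and the `mongo` database from this tutorial:
+
+```bash
+# Watch the StatefulSets get deleted and then recreated by the operator
+$ kubectl get sts -n demo -w
+
+# Watch the database status leave Ready and return once reprovisioning completes
+$ kubectl get mg -n demo mongo -w
+```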
+ +```shell +$ kubectl get mgops -n demo +NAME TYPE STATUS AGE +repro Reprovision Successful 2m + + +$ kubectl get mgops -n demo -oyaml repro +apiVersion: ops.kubedb.com/v1alpha1 +kind: RabbitMQOpsRequest +metadata: + annotations: + kubectl.kubernetes.io/last-applied-configuration: | + {"apiVersion":"ops.kubedb.com/v1alpha1","kind":"RabbitMQOpsRequest","metadata":{"annotations":{},"name":"repro","namespace":"demo"},"spec":{"databaseRef":{"name":"mongo"},"type":"Reprovision"}} + creationTimestamp: "2022-10-31T09:50:35Z" + generation: 1 + name: repro + namespace: demo + resourceVersion: "743676" + uid: b3444d38-bef3-4043-925f-551fe6c86123 +spec: + apply: Always + databaseRef: + name: mongo + type: Reprovision +status: + conditions: + - lastTransitionTime: "2022-10-31T09:50:35Z" + message: RabbitMQ ops request is reprovisioning the database + observedGeneration: 1 + reason: Reprovision + status: "True" + type: Reprovision + - lastTransitionTime: "2022-10-31T09:50:45Z" + message: Successfully Deleted All the StatefulSets + observedGeneration: 1 + reason: DeleteStatefulSets + status: "True" + type: DeleteStatefulSets + - lastTransitionTime: "2022-10-31T09:52:05Z" + message: Database Phase is Ready + observedGeneration: 1 + reason: DatabaseReady + status: "True" + type: DatabaseReady + - lastTransitionTime: "2022-10-31T09:52:05Z" + message: Successfully Reprovisioned the database + observedGeneration: 1 + reason: Successful + status: "True" + type: Successful + observedGeneration: 1 + phase: Successful +``` + + +## Cleaning up + +To cleanup the Kubernetes resources created by this tutorial, run: + +```bash +kubectl delete RabbitMQopsrequest -n demo repro +kubectl delete RabbitMQ -n demo mongo +kubectl delete ns demo +``` + +## Next Steps + +- Detail concepts of [RabbitMQ object](/docs/guides/RabbitMQ/concepts/RabbitMQ.md). +- Initialize [RabbitMQ with Script](/docs/guides/RabbitMQ/initialization/using-script.md). +- Monitor your RabbitMQ database with KubeDB using [out-of-the-box Prometheus operator](/docs/guides/RabbitMQ/monitoring/using-prometheus-operator.md). +- Monitor your RabbitMQ database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/guides/RabbitMQ/monitoring/using-builtin-prometheus.md). +- Use [private Docker registry](/docs/guides/RabbitMQ/private-registry/using-private-registry.md) to deploy RabbitMQ with KubeDB. +- Use [kubedb cli](/docs/guides/RabbitMQ/cli/cli.md) to manage databases like kubectl for Kubernetes. +- Detail concepts of [RabbitMQ object](/docs/guides/RabbitMQ/concepts/RabbitMQ.md). +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md). diff --git a/docs/guides/rabbitmq/restart/_index.md b/docs/guides/rabbitmq/restart/_index.md new file mode 100644 index 0000000000..3c0b6e841c --- /dev/null +++ b/docs/guides/rabbitmq/restart/_index.md @@ -0,0 +1,10 @@ +--- +title: Restart RabbitMQ +menu: + docs_{{ .version }}: + identifier: mg-restart + name: Restart + parent: mg-RabbitMQ-guides + weight: 46 +menu_name: docs_{{ .version }} +--- diff --git a/docs/guides/rabbitmq/restart/restart.md b/docs/guides/rabbitmq/restart/restart.md new file mode 100644 index 0000000000..f12126bb6a --- /dev/null +++ b/docs/guides/rabbitmq/restart/restart.md @@ -0,0 +1,196 @@ +--- +title: Restart RabbitMQ +menu: + docs_{{ .version }}: + identifier: mg-restart-details + name: Restart RabbitMQ + parent: mg-restart + weight: 10 +menu_name: docs_{{ .version }} +section_menu_id: guides +--- + +> New to KubeDB? Please start [here](/docs/README.md). 
+
+# Restart RabbitMQ
+
+KubeDB supports restarting the RabbitMQ database via a RabbitMQOpsRequest. Restarting is useful if some pods get stuck in a phase, or are not working correctly. This tutorial will show you how to do that.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/setup/README.md).
+
+- To keep things isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+> Note: YAML files used in this tutorial are stored in the [docs/examples/RabbitMQ](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/RabbitMQ) folder of the GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Deploy RabbitMQ
+
+In this section, we are going to deploy a RabbitMQ database using KubeDB.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: RabbitMQ
+metadata:
+  name: mongo
+  namespace: demo
+spec:
+  version: "4.4.26"
+  replicaSet:
+    name: "replicaset"
+  podTemplate:
+    spec:
+      resources:
+        requests:
+          cpu: "300m"
+          memory: "300Mi"
+  replicas: 2
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+  terminationPolicy: WipeOut
+  arbiter: {}
+  hidden:
+    replicas: 2
+    storage:
+      storageClassName: "standard"
+      accessModes:
+        - ReadWriteOnce
+      resources:
+        requests:
+          storage: 2Gi
+```
+
+- `spec.replicaSet` represents the configuration for the replicaset.
+  - `name` denotes the name of the RabbitMQ replicaset.
+- `spec.replicas` denotes the number of general members in the `replicaset` RabbitMQ replicaset.
+- `spec.podTemplate` denotes the specifications of the general replicaset members.
+- `spec.storage` holds the persistent volume specifications. This storage spec will be passed to the StatefulSet created by the KubeDB operator to run database pods. So, each member will have a pod with this storage configuration.
+- `spec.arbiter` denotes the arbiter-node spec of the deployed RabbitMQ CRD.
+- `spec.hidden` denotes the hidden-node spec of the deployed RabbitMQ CRD.
+
+Let's create the `RabbitMQ` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/RabbitMQ/restart/mongo.yaml
+RabbitMQ.kubedb.com/mongo created
+```
+
+## Apply Restart opsRequest
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: RabbitMQOpsRequest
+metadata:
+  name: restart
+  namespace: demo
+spec:
+  type: Restart
+  databaseRef:
+    name: mongo
+  readinessCriteria:
+    oplogMaxLagSeconds: 10
+    objectsCountDiffPercentage: 15
+  timeout: 3m
+  apply: Always
+```
+
+- `spec.type` specifies the type of the ops request.
+- `spec.databaseRef` holds the name of the RabbitMQ database. The db should be available in the same namespace as the opsRequest.
+- The meaning of the `spec.readinessCriteria`, `spec.timeout` & `spec.apply` fields can be found [here](/docs/guides/RabbitMQ/concepts/opsrequest.md#specreadinessCriteria).
+
+> Note: The method of restarting the standalone & sharded db is exactly the same as above. All you need is to specify the corresponding RabbitMQ name in the `spec.databaseRef.name` section.
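+To see the restart in action, you can watch the pods from a separate terminal while the ops request runs. A minimal sketch, assuming the `demo` namespace used in this tutorial:
+
+```bash
+# Watch the pods; each one is terminated and recreated in turn,
+# secondaries first and the primary last
+$ kubectl get pods -n demo -w
+```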
+ +Let's create the `RabbitMQOpsRequest` CR we have shown above, + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/RabbitMQ/restart/ops.yaml +RabbitMQopsrequest.ops.kubedb.com/restart created +``` + +Now the Ops-manager operator will first restart the general secondary pods, then serially the arbiters, the hidden nodes, & lastly will restart the Primary of the database. + +```shell +$ kubectl get mgops -n demo +NAME TYPE STATUS AGE +restart Restart Successful 10m + +$ kubectl get mgops -n demo -oyaml restart +apiVersion: ops.kubedb.com/v1alpha1 +kind: RabbitMQOpsRequest +metadata: + annotations: + kubectl.kubernetes.io/last-applied-configuration: | + {"apiVersion":"ops.kubedb.com/v1alpha1","kind":"RabbitMQOpsRequest","metadata":{"annotations":{},"name":"restart","namespace":"demo"},"spec":{"apply":"Always","databaseRef":{"name":"mongo"},"readinessCriteria":{"objectsCountDiffPercentage":15,"oplogMaxLagSeconds":10},"timeout":"3m","type":"Restart"}} + creationTimestamp: "2022-10-31T08:54:45Z" + generation: 1 + name: restart + namespace: demo + resourceVersion: "738625" + uid: 32f6c52f-6114-4e25-b3a1-877223cf7145 +spec: + apply: Always + databaseRef: + name: mongo + readinessCriteria: + objectsCountDiffPercentage: 15 + oplogMaxLagSeconds: 10 + timeout: 3m + type: Restart +status: + conditions: + - lastTransitionTime: "2022-10-31T08:54:45Z" + message: RabbitMQ ops request is restarting the database nodes + observedGeneration: 1 + reason: Restart + status: "True" + type: Restart + - lastTransitionTime: "2022-10-31T08:57:05Z" + message: Successfully Restarted ReplicaSet nodes + observedGeneration: 1 + reason: RestartReplicaSet + status: "True" + type: RestartReplicaSet + - lastTransitionTime: "2022-10-31T08:57:05Z" + message: Successfully restarted all nodes of RabbitMQ + observedGeneration: 1 + reason: Successful + status: "True" + type: Successful + observedGeneration: 1 + phase: Successful +``` + + +## Cleaning up + +To cleanup the Kubernetes resources created by this tutorial, run: + +```bash +kubectl delete RabbitMQopsrequest -n demo restart +kubectl delete RabbitMQ -n demo mongo +kubectl delete ns demo +``` + +## Next Steps + +- Detail concepts of [RabbitMQ object](/docs/guides/RabbitMQ/concepts/RabbitMQ.md). +- Initialize [RabbitMQ with Script](/docs/guides/RabbitMQ/initialization/using-script.md). +- Monitor your RabbitMQ database with KubeDB using [out-of-the-box Prometheus operator](/docs/guides/RabbitMQ/monitoring/using-prometheus-operator.md). +- Monitor your RabbitMQ database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/guides/RabbitMQ/monitoring/using-builtin-prometheus.md). +- Use [private Docker registry](/docs/guides/RabbitMQ/private-registry/using-private-registry.md) to deploy RabbitMQ with KubeDB. +- Use [kubedb cli](/docs/guides/RabbitMQ/cli/cli.md) to manage databases like kubectl for Kubernetes. +- Detail concepts of [RabbitMQ object](/docs/guides/RabbitMQ/concepts/RabbitMQ.md). +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md). 
diff --git a/docs/guides/rabbitmq/scaling/_index.md b/docs/guides/rabbitmq/scaling/_index.md
new file mode 100644
index 0000000000..e5cd7b6f39
--- /dev/null
+++ b/docs/guides/rabbitmq/scaling/_index.md
@@ -0,0 +1,10 @@
+---
+title: Scaling RabbitMQ
+menu:
+  docs_{{ .version }}:
+    identifier: mg-scaling
+    name: Scaling
+    parent: mg-RabbitMQ-guides
+    weight: 43
+menu_name: docs_{{ .version }}
+---
\ No newline at end of file
diff --git a/docs/guides/rabbitmq/scaling/horizontal-scaling/_index.md b/docs/guides/rabbitmq/scaling/horizontal-scaling/_index.md
new file mode 100644
index 0000000000..ecf4c604a7
--- /dev/null
+++ b/docs/guides/rabbitmq/scaling/horizontal-scaling/_index.md
@@ -0,0 +1,10 @@
+---
+title: Horizontal Scaling
+menu:
+  docs_{{ .version }}:
+    identifier: mg-horizontal-scaling
+    name: Horizontal Scaling
+    parent: mg-scaling
+    weight: 10
+menu_name: docs_{{ .version }}
+---
\ No newline at end of file
diff --git a/docs/guides/rabbitmq/scaling/horizontal-scaling/overview.md b/docs/guides/rabbitmq/scaling/horizontal-scaling/overview.md
new file mode 100644
index 0000000000..80b5acd76e
--- /dev/null
+++ b/docs/guides/rabbitmq/scaling/horizontal-scaling/overview.md
@@ -0,0 +1,54 @@
+---
+title: RabbitMQ Horizontal Scaling Overview
+menu:
+  docs_{{ .version }}:
+    identifier: mg-horizontal-scaling-overview
+    name: Overview
+    parent: mg-horizontal-scaling
+    weight: 10
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# RabbitMQ Horizontal Scaling
+
+This guide will give an overview of how the KubeDB Ops-manager operator scales up or down `RabbitMQ` database replicas of various components, such as ReplicaSet, Shard, ConfigServer, Mongos, etc.
+
+## Before You Begin
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [RabbitMQ](/docs/guides/RabbitMQ/concepts/RabbitMQ.md)
+  - [RabbitMQOpsRequest](/docs/guides/RabbitMQ/concepts/opsrequest.md)
+
+## How Horizontal Scaling Process Works
+
+The following diagram shows how the KubeDB Ops-manager operator scales up or down `RabbitMQ` database components. Open the image in a new tab to see the enlarged version.
+
+Fig: Horizontal scaling process of RabbitMQ
+
+
+The horizontal scaling process consists of the following steps:
+
+1. At first, a user creates a `RabbitMQ` Custom Resource (CR).
+
+2. `KubeDB` Provisioner operator watches the `RabbitMQ` CR.
+
+3. When the operator finds a `RabbitMQ` CR, it creates the required number of `StatefulSets` and the related necessary resources like secrets, services, etc.
+
+4. Then, in order to scale the various components (i.e. ReplicaSet, Shard, ConfigServer, Mongos, etc.) of the `RabbitMQ` database, the user creates a `RabbitMQOpsRequest` CR with the desired information.
+
+5. `KubeDB` Ops-manager operator watches the `RabbitMQOpsRequest` CR.
+
+6. When it finds a `RabbitMQOpsRequest` CR, it halts the `RabbitMQ` object referenced in the `RabbitMQOpsRequest`. So, the `KubeDB` Provisioner operator doesn't perform any operations on the `RabbitMQ` object during the horizontal scaling process.
+
+7. Then the `KubeDB` Ops-manager operator will scale the related StatefulSet Pods to reach the expected number of replicas defined in the `RabbitMQOpsRequest` CR.
+
+8. After successfully scaling the replicas of the related StatefulSet Pods, the `KubeDB` Ops-manager operator updates the number of replicas in the `RabbitMQ` object to reflect the updated state.
+
+9. After the successful scaling of the `RabbitMQ` replicas, the `KubeDB` Ops-manager operator resumes the `RabbitMQ` object so that the `KubeDB` Provisioner operator can resume its usual operations.
+
+In the next docs, we are going to show a step-by-step guide on horizontal scaling of a RabbitMQ database using the `RabbitMQOpsRequest` CRD.
\ No newline at end of file
diff --git a/docs/guides/rabbitmq/scaling/horizontal-scaling/replicaset.md b/docs/guides/rabbitmq/scaling/horizontal-scaling/replicaset.md
new file mode 100644
index 0000000000..def80565f5
--- /dev/null
+++ b/docs/guides/rabbitmq/scaling/horizontal-scaling/replicaset.md
@@ -0,0 +1,692 @@
+---
+title: Horizontal Scaling RabbitMQ Replicaset
+menu:
+  docs_{{ .version }}:
+    identifier: mg-horizontal-scaling-replicaset
+    name: Replicaset
+    parent: mg-horizontal-scaling
+    weight: 20
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Horizontal Scale RabbitMQ Replicaset
+
+This guide will show you how to use the `KubeDB` Ops-manager operator to scale the replicaset of a RabbitMQ database.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Install `KubeDB` Provisioner and Ops-manager operator in your cluster following the steps [here](/docs/setup/README.md).
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [RabbitMQ](/docs/guides/RabbitMQ/concepts/RabbitMQ.md)
+  - [Replicaset](/docs/guides/RabbitMQ/clustering/replicaset.md)
+  - [RabbitMQOpsRequest](/docs/guides/RabbitMQ/concepts/opsrequest.md)
+  - [Horizontal Scaling Overview](/docs/guides/RabbitMQ/scaling/horizontal-scaling/overview.md)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+> **Note:** YAML files used in this tutorial are stored in the [docs/examples/RabbitMQ](/docs/examples/RabbitMQ) directory of the [kubedb/docs](https://github.com/kubedb/docs) repository.
+
+## Apply Horizontal Scaling on Replicaset
+
+Here, we are going to deploy a `RabbitMQ` replicaset using a version supported by the `KubeDB` operator. Then we are going to apply horizontal scaling on it.
+
+### Prepare RabbitMQ Replicaset Database
+
+Now, we are going to deploy a `RabbitMQ` replicaset database with version `4.4.26`.
+
+### Deploy RabbitMQ replicaset
+
+In this section, we are going to deploy a RabbitMQ replicaset database. Then, in the next section we will scale the database using the `RabbitMQOpsRequest` CRD. Below is the YAML of the `RabbitMQ` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: RabbitMQ
+metadata:
+  name: mg-replicaset
+  namespace: demo
+spec:
+  version: "4.4.26"
+  replicaSet:
+    name: "replicaset"
+  replicas: 3
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+```
+
+Let's create the `RabbitMQ` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/RabbitMQ/scaling/mg-replicaset.yaml
+RabbitMQ.kubedb.com/mg-replicaset created
+```
+
+Now, wait until `mg-replicaset` has status `Ready`, i.e.,
+
+```bash
+$ kubectl get mg -n demo
+NAME            VERSION   STATUS   AGE
+mg-replicaset   4.4.26    Ready    2m36s
+```
+
+Let's check the number of replicas this database has from the RabbitMQ object, and the number of pods the statefulset has,
+
+```bash
+$ kubectl get RabbitMQ -n demo mg-replicaset -o json | jq '.spec.replicas'
+3
+
+$ kubectl get sts -n demo mg-replicaset -o json | jq '.spec.replicas'
+3
+```
+
+We can see from both commands that the database has 3 replicas in the replicaset.
+
+Also, we can verify the replicas of the replicaset from an internal RabbitMQ command by exec-ing into a replica.
+ +First we need to get the username and password to connect to a RabbitMQ instance, +```bash +$ kubectl get secrets -n demo mg-replicaset-auth -o jsonpath='{.data.\username}' | base64 -d +root + +$ kubectl get secrets -n demo mg-replicaset-auth -o jsonpath='{.data.\password}' | base64 -d +nrKuxni0wDSMrgwy +``` + +Now let's connect to a RabbitMQ instance and run a RabbitMQ internal command to check the number of replicas, + +```bash +$ kubectl exec -n demo mg-replicaset-0 -- mongo admin -u root -p nrKuxni0wDSMrgwy --eval "db.adminCommand( { replSetGetStatus : 1 } ).members" --quiet +[ + { + "_id" : 0, + "name" : "mg-replicaset-0.mg-replicaset-pods.demo.svc.cluster.local:27017", + "health" : 1, + "state" : 1, + "stateStr" : "PRIMARY", + "uptime" : 171, + "optime" : { + "ts" : Timestamp(1614698544, 1), + "t" : NumberLong(1) + }, + "optimeDate" : ISODate("2021-03-02T15:22:24Z"), + "syncingTo" : "", + "syncSourceHost" : "", + "syncSourceId" : -1, + "infoMessage" : "", + "electionTime" : Timestamp(1614698393, 2), + "electionDate" : ISODate("2021-03-02T15:19:53Z"), + "configVersion" : 3, + "self" : true, + "lastHeartbeatMessage" : "" + }, + { + "_id" : 1, + "name" : "mg-replicaset-1.mg-replicaset-pods.demo.svc.cluster.local:27017", + "health" : 1, + "state" : 2, + "stateStr" : "SECONDARY", + "uptime" : 128, + "optime" : { + "ts" : Timestamp(1614698544, 1), + "t" : NumberLong(1) + }, + "optimeDurable" : { + "ts" : Timestamp(1614698544, 1), + "t" : NumberLong(1) + }, + "optimeDate" : ISODate("2021-03-02T15:22:24Z"), + "optimeDurableDate" : ISODate("2021-03-02T15:22:24Z"), + "lastHeartbeat" : ISODate("2021-03-02T15:22:32.411Z"), + "lastHeartbeatRecv" : ISODate("2021-03-02T15:22:31.543Z"), + "pingMs" : NumberLong(0), + "lastHeartbeatMessage" : "", + "syncingTo" : "mg-replicaset-0.mg-replicaset-pods.demo.svc.cluster.local:27017", + "syncSourceHost" : "mg-replicaset-0.mg-replicaset-pods.demo.svc.cluster.local:27017", + "syncSourceId" : 0, + "infoMessage" : "", + "configVersion" : 3 + }, + { + "_id" : 2, + "name" : "mg-replicaset-2.mg-replicaset-pods.demo.svc.cluster.local:27017", + "health" : 1, + "state" : 2, + "stateStr" : "SECONDARY", + "uptime" : 83, + "optime" : { + "ts" : Timestamp(1614698544, 1), + "t" : NumberLong(1) + }, + "optimeDurable" : { + "ts" : Timestamp(1614698544, 1), + "t" : NumberLong(1) + }, + "optimeDate" : ISODate("2021-03-02T15:22:24Z"), + "optimeDurableDate" : ISODate("2021-03-02T15:22:24Z"), + "lastHeartbeat" : ISODate("2021-03-02T15:22:30.615Z"), + "lastHeartbeatRecv" : ISODate("2021-03-02T15:22:31.543Z"), + "pingMs" : NumberLong(0), + "lastHeartbeatMessage" : "", + "syncingTo" : "mg-replicaset-0.mg-replicaset-pods.demo.svc.cluster.local:27017", + "syncSourceHost" : "mg-replicaset-0.mg-replicaset-pods.demo.svc.cluster.local:27017", + "syncSourceId" : 0, + "infoMessage" : "", + "configVersion" : 3 + } +] +``` + +We can see from the above output that the replicaset has 3 nodes. + +We are now ready to apply the `RabbitMQOpsRequest` CR to scale this database. + +## Scale Up Replicas + +Here, we are going to scale up the replicas of the replicaset to meet the desired number of replicas after scaling. + +#### Create RabbitMQOpsRequest + +In order to scale up the replicas of the replicaset of the database, we have to create a `RabbitMQOpsRequest` CR with our desired replicas. 
Below is the YAML of the `RabbitMQOpsRequest` CR that we are going to create, + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: RabbitMQOpsRequest +metadata: + name: mops-hscale-up-replicaset + namespace: demo +spec: + type: HorizontalScaling + databaseRef: + name: mg-replicaset + horizontalScaling: + replicas: 4 +``` + +Here, + +- `spec.databaseRef.name` specifies that we are performing horizontal scaling operation on `mops-hscale-up-replicaset` database. +- `spec.type` specifies that we are performing `HorizontalScaling` on our database. +- `spec.horizontalScaling.replicas` specifies the desired replicas after scaling. + +Let's create the `RabbitMQOpsRequest` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/RabbitMQ/scaling/horizontal-scaling/mops-hscale-up-replicaset.yaml +RabbitMQopsrequest.ops.kubedb.com/mops-hscale-up-replicaset created +``` + +#### Verify Replicaset replicas scaled up successfully + +If everything goes well, `KubeDB` Ops-manager operator will update the replicas of `RabbitMQ` object and related `StatefulSets` and `Pods`. + +Let's wait for `RabbitMQOpsRequest` to be `Successful`. Run the following command to watch `RabbitMQOpsRequest` CR, + +```bash +$ watch kubectl get RabbitMQopsrequest -n demo +Every 2.0s: kubectl get RabbitMQopsrequest -n demo +NAME TYPE STATUS AGE +mops-hscale-up-replicaset HorizontalScaling Successful 106s +``` + +We can see from the above output that the `RabbitMQOpsRequest` has succeeded. If we describe the `RabbitMQOpsRequest` we will get an overview of the steps that were followed to scale the database. + +```bash +$ kubectl describe RabbitMQopsrequest -n demo mops-hscale-up-replicaset +Name: mops-hscale-up-replicaset +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: RabbitMQOpsRequest +Metadata: + Creation Timestamp: 2021-03-02T15:23:14Z + Generation: 1 + Managed Fields: + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:annotations: + .: + f:kubectl.kubernetes.io/last-applied-configuration: + f:spec: + .: + f:databaseRef: + .: + f:name: + f:horizontalScaling: + .: + f:replicas: + f:type: + Manager: kubectl-client-side-apply + Operation: Update + Time: 2021-03-02T15:23:14Z + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:status: + .: + f:conditions: + f:observedGeneration: + f:phase: + Manager: kubedb-enterprise + Operation: Update + Time: 2021-03-02T15:23:14Z + Resource Version: 129882 + Self Link: /apis/ops.kubedb.com/v1alpha1/namespaces/demo/RabbitMQopsrequests/mops-hscale-up-replicaset + UID: e97dac5c-5e3a-4153-9b31-8ba02af54bcb +Spec: + Database Ref: + Name: mg-replicaset + Horizontal Scaling: + Replicas: 4 + Type: HorizontalScaling +Status: + Conditions: + Last Transition Time: 2021-03-02T15:23:14Z + Message: RabbitMQ ops request is horizontally scaling database + Observed Generation: 1 + Reason: HorizontalScaling + Status: True + Type: HorizontalScaling + Last Transition Time: 2021-03-02T15:24:00Z + Message: Successfully Horizontally Scaled Up ReplicaSet + Observed Generation: 1 + Reason: ScaleUpReplicaSet + Status: True + Type: ScaleUpReplicaSet + Last Transition Time: 2021-03-02T15:24:00Z + Message: Successfully Horizontally Scaled RabbitMQ + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + 
Normal PauseDatabase 91s KubeDB Ops-manager operator Pausing RabbitMQ demo/mg-replicaset + Normal PauseDatabase 91s KubeDB Ops-manager operator Successfully paused RabbitMQ demo/mg-replicaset + Normal ScaleUpReplicaSet 45s KubeDB Ops-manager operator Successfully Horizontally Scaled Up ReplicaSet + Normal ResumeDatabase 45s KubeDB Ops-manager operator Resuming RabbitMQ demo/mg-replicaset + Normal ResumeDatabase 45s KubeDB Ops-manager operator Successfully resumed RabbitMQ demo/mg-replicaset + Normal Successful 45s KubeDB Ops-manager operator Successfully Horizontally Scaled Database +``` + +Now, we are going to verify the number of replicas this database has from the RabbitMQ object, number of pods the statefulset have, + +```bash +$ kubectl get RabbitMQ -n demo mg-replicaset -o json | jq '.spec.replicas' +4 + +$ kubectl get sts -n demo mg-replicaset -o json | jq '.spec.replicas' +4 +``` + +Now let's connect to a RabbitMQ instance and run a RabbitMQ internal command to check the number of replicas, +```bash +$ kubectl exec -n demo mg-replicaset-0 -- mongo admin -u root -p nrKuxni0wDSMrgwy --eval "db.adminCommand( { replSetGetStatus : 1 } ).members" --quiet +[ + { + "_id" : 0, + "name" : "mg-replicaset-0.mg-replicaset-pods.demo.svc.cluster.local:27017", + "health" : 1, + "state" : 1, + "stateStr" : "PRIMARY", + "uptime" : 344, + "optime" : { + "ts" : Timestamp(1614698724, 1), + "t" : NumberLong(1) + }, + "optimeDate" : ISODate("2021-03-02T15:25:24Z"), + "syncingTo" : "", + "syncSourceHost" : "", + "syncSourceId" : -1, + "infoMessage" : "", + "electionTime" : Timestamp(1614698393, 2), + "electionDate" : ISODate("2021-03-02T15:19:53Z"), + "configVersion" : 4, + "self" : true, + "lastHeartbeatMessage" : "" + }, + { + "_id" : 1, + "name" : "mg-replicaset-1.mg-replicaset-pods.demo.svc.cluster.local:27017", + "health" : 1, + "state" : 2, + "stateStr" : "SECONDARY", + "uptime" : 301, + "optime" : { + "ts" : Timestamp(1614698712, 2), + "t" : NumberLong(1) + }, + "optimeDurable" : { + "ts" : Timestamp(1614698712, 2), + "t" : NumberLong(1) + }, + "optimeDate" : ISODate("2021-03-02T15:25:12Z"), + "optimeDurableDate" : ISODate("2021-03-02T15:25:12Z"), + "lastHeartbeat" : ISODate("2021-03-02T15:25:23.889Z"), + "lastHeartbeatRecv" : ISODate("2021-03-02T15:25:25.179Z"), + "pingMs" : NumberLong(0), + "lastHeartbeatMessage" : "", + "syncingTo" : "mg-replicaset-0.mg-replicaset-pods.demo.svc.cluster.local:27017", + "syncSourceHost" : "mg-replicaset-0.mg-replicaset-pods.demo.svc.cluster.local:27017", + "syncSourceId" : 0, + "infoMessage" : "", + "configVersion" : 4 + }, + { + "_id" : 2, + "name" : "mg-replicaset-2.mg-replicaset-pods.demo.svc.cluster.local:27017", + "health" : 1, + "state" : 2, + "stateStr" : "SECONDARY", + "uptime" : 256, + "optime" : { + "ts" : Timestamp(1614698712, 2), + "t" : NumberLong(1) + }, + "optimeDurable" : { + "ts" : Timestamp(1614698712, 2), + "t" : NumberLong(1) + }, + "optimeDate" : ISODate("2021-03-02T15:25:12Z"), + "optimeDurableDate" : ISODate("2021-03-02T15:25:12Z"), + "lastHeartbeat" : ISODate("2021-03-02T15:25:23.888Z"), + "lastHeartbeatRecv" : ISODate("2021-03-02T15:25:25.136Z"), + "pingMs" : NumberLong(0), + "lastHeartbeatMessage" : "", + "syncingTo" : "mg-replicaset-0.mg-replicaset-pods.demo.svc.cluster.local:27017", + "syncSourceHost" : "mg-replicaset-0.mg-replicaset-pods.demo.svc.cluster.local:27017", + "syncSourceId" : 0, + "infoMessage" : "", + "configVersion" : 4 + }, + { + "_id" : 3, + "name" : "mg-replicaset-3.mg-replicaset-pods.demo.svc.cluster.local:27017", + 
"health" : 1, + "state" : 2, + "stateStr" : "SECONDARY", + "uptime" : 93, + "optime" : { + "ts" : Timestamp(1614698712, 2), + "t" : NumberLong(1) + }, + "optimeDurable" : { + "ts" : Timestamp(1614698712, 2), + "t" : NumberLong(1) + }, + "optimeDate" : ISODate("2021-03-02T15:25:12Z"), + "optimeDurableDate" : ISODate("2021-03-02T15:25:12Z"), + "lastHeartbeat" : ISODate("2021-03-02T15:25:23.926Z"), + "lastHeartbeatRecv" : ISODate("2021-03-02T15:25:24.089Z"), + "pingMs" : NumberLong(0), + "lastHeartbeatMessage" : "", + "syncingTo" : "mg-replicaset-0.mg-replicaset-pods.demo.svc.cluster.local:27017", + "syncSourceHost" : "mg-replicaset-0.mg-replicaset-pods.demo.svc.cluster.local:27017", + "syncSourceId" : 0, + "infoMessage" : "", + "configVersion" : 4 + } +] +``` + +From all the above outputs we can see that the replicas of the replicaset is `4`. That means we have successfully scaled up the replicas of the RabbitMQ replicaset. + + +### Scale Down Replicas + +Here, we are going to scale down the replicas of the replicaset to meet the desired number of replicas after scaling. + +#### Create RabbitMQOpsRequest + +In order to scale down the replicas of the replicaset of the database, we have to create a `RabbitMQOpsRequest` CR with our desired replicas. Below is the YAML of the `RabbitMQOpsRequest` CR that we are going to create, + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: RabbitMQOpsRequest +metadata: + name: mops-hscale-down-replicaset + namespace: demo +spec: + type: HorizontalScaling + databaseRef: + name: mg-replicaset + horizontalScaling: + replicas: 3 +``` + +Here, + +- `spec.databaseRef.name` specifies that we are performing horizontal scaling down operation on `mops-hscale-down-replicaset` database. +- `spec.type` specifies that we are performing `HorizontalScaling` on our database. +- `spec.horizontalScaling.replicas` specifies the desired replicas after scaling. + +Let's create the `RabbitMQOpsRequest` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/RabbitMQ/scaling/horizontal-scaling/mops-hscale-down-replicaset.yaml +RabbitMQopsrequest.ops.kubedb.com/mops-hscale-down-replicaset created +``` + +#### Verify Replicaset replicas scaled down successfully + +If everything goes well, `KubeDB` Ops-manager operator will update the replicas of `RabbitMQ` object and related `StatefulSets` and `Pods`. + +Let's wait for `RabbitMQOpsRequest` to be `Successful`. Run the following command to watch `RabbitMQOpsRequest` CR, + +```bash +$ watch kubectl get RabbitMQopsrequest -n demo +Every 2.0s: kubectl get RabbitMQopsrequest -n demo +NAME TYPE STATUS AGE +mops-hscale-down-replicaset HorizontalScaling Successful 2m32s +``` + +We can see from the above output that the `RabbitMQOpsRequest` has succeeded. If we describe the `RabbitMQOpsRequest` we will get an overview of the steps that were followed to scale the database. 
+ +```bash +$ kubectl describe RabbitMQopsrequest -n demo mops-hscale-down-replicaset +Name: mops-hscale-down-replicaset +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: RabbitMQOpsRequest +Metadata: + Creation Timestamp: 2021-03-02T15:25:57Z + Generation: 1 + Managed Fields: + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:annotations: + .: + f:kubectl.kubernetes.io/last-applied-configuration: + f:spec: + .: + f:databaseRef: + .: + f:name: + f:horizontalScaling: + .: + f:replicas: + f:type: + Manager: kubectl-client-side-apply + Operation: Update + Time: 2021-03-02T15:25:57Z + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:status: + .: + f:conditions: + f:observedGeneration: + f:phase: + Manager: kubedb-enterprise + Operation: Update + Time: 2021-03-02T15:25:57Z + Resource Version: 130393 + Self Link: /apis/ops.kubedb.com/v1alpha1/namespaces/demo/RabbitMQopsrequests/mops-hscale-down-replicaset + UID: fbfee7f8-1dd5-4f58-aad7-ad2e2d66b295 +Spec: + Database Ref: + Name: mg-replicaset + Horizontal Scaling: + Replicas: 3 + Type: HorizontalScaling +Status: + Conditions: + Last Transition Time: 2021-03-02T15:25:57Z + Message: RabbitMQ ops request is horizontally scaling database + Observed Generation: 1 + Reason: HorizontalScaling + Status: True + Type: HorizontalScaling + Last Transition Time: 2021-03-02T15:26:17Z + Message: Successfully Horizontally Scaled Down ReplicaSet + Observed Generation: 1 + Reason: ScaleDownReplicaSet + Status: True + Type: ScaleDownReplicaSet + Last Transition Time: 2021-03-02T15:26:17Z + Message: Successfully Horizontally Scaled RabbitMQ + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal PauseDatabase 50s KubeDB Ops-manager operator Pausing RabbitMQ demo/mg-replicaset + Normal PauseDatabase 50s KubeDB Ops-manager operator Successfully paused RabbitMQ demo/mg-replicaset + Normal ScaleDownReplicaSet 30s KubeDB Ops-manager operator Successfully Horizontally Scaled Down ReplicaSet + Normal ResumeDatabase 30s KubeDB Ops-manager operator Resuming RabbitMQ demo/mg-replicaset + Normal ResumeDatabase 30s KubeDB Ops-manager operator Successfully resumed RabbitMQ demo/mg-replicaset + Normal Successful 30s KubeDB Ops-manager operator Successfully Horizontally Scaled Database +``` + +Now, we are going to verify the number of replicas this database has from the RabbitMQ object, number of pods the statefulset have, + +```bash +$ kubectl get RabbitMQ -n demo mg-replicaset -o json | jq '.spec.replicas' +3 + +$ kubectl get sts -n demo mg-replicaset -o json | jq '.spec.replicas' +3 +``` + +Now let's connect to a RabbitMQ instance and run a RabbitMQ internal command to check the number of replicas, +```bash +$ kubectl exec -n demo mg-replicaset-0 -- mongo admin -u root -p nrKuxni0wDSMrgwy --eval "db.adminCommand( { replSetGetStatus : 1 } ).members" --quiet +[ + { + "_id" : 0, + "name" : "mg-replicaset-0.mg-replicaset-pods.demo.svc.cluster.local:27017", + "health" : 1, + "state" : 1, + "stateStr" : "PRIMARY", + "uptime" : 410, + "optime" : { + "ts" : Timestamp(1614698784, 1), + "t" : NumberLong(1) + }, + "optimeDate" : ISODate("2021-03-02T15:26:24Z"), + "syncingTo" : "", + "syncSourceHost" : "", + "syncSourceId" : -1, + "infoMessage" : "", + "electionTime" : Timestamp(1614698393, 2), + "electionDate" : 
ISODate("2021-03-02T15:19:53Z"), + "configVersion" : 5, + "self" : true, + "lastHeartbeatMessage" : "" + }, + { + "_id" : 1, + "name" : "mg-replicaset-1.mg-replicaset-pods.demo.svc.cluster.local:27017", + "health" : 1, + "state" : 2, + "stateStr" : "SECONDARY", + "uptime" : 367, + "optime" : { + "ts" : Timestamp(1614698784, 1), + "t" : NumberLong(1) + }, + "optimeDurable" : { + "ts" : Timestamp(1614698784, 1), + "t" : NumberLong(1) + }, + "optimeDate" : ISODate("2021-03-02T15:26:24Z"), + "optimeDurableDate" : ISODate("2021-03-02T15:26:24Z"), + "lastHeartbeat" : ISODate("2021-03-02T15:26:29.423Z"), + "lastHeartbeatRecv" : ISODate("2021-03-02T15:26:29.330Z"), + "pingMs" : NumberLong(0), + "lastHeartbeatMessage" : "", + "syncingTo" : "mg-replicaset-0.mg-replicaset-pods.demo.svc.cluster.local:27017", + "syncSourceHost" : "mg-replicaset-0.mg-replicaset-pods.demo.svc.cluster.local:27017", + "syncSourceId" : 0, + "infoMessage" : "", + "configVersion" : 5 + }, + { + "_id" : 2, + "name" : "mg-replicaset-2.mg-replicaset-pods.demo.svc.cluster.local:27017", + "health" : 1, + "state" : 2, + "stateStr" : "SECONDARY", + "uptime" : 322, + "optime" : { + "ts" : Timestamp(1614698784, 1), + "t" : NumberLong(1) + }, + "optimeDurable" : { + "ts" : Timestamp(1614698784, 1), + "t" : NumberLong(1) + }, + "optimeDate" : ISODate("2021-03-02T15:26:24Z"), + "optimeDurableDate" : ISODate("2021-03-02T15:26:24Z"), + "lastHeartbeat" : ISODate("2021-03-02T15:26:31.022Z"), + "lastHeartbeatRecv" : ISODate("2021-03-02T15:26:31.224Z"), + "pingMs" : NumberLong(0), + "lastHeartbeatMessage" : "", + "syncingTo" : "mg-replicaset-0.mg-replicaset-pods.demo.svc.cluster.local:27017", + "syncSourceHost" : "mg-replicaset-0.mg-replicaset-pods.demo.svc.cluster.local:27017", + "syncSourceId" : 0, + "infoMessage" : "", + "configVersion" : 5 + } +] +``` + +From all the above outputs we can see that the replicas of the replicaset is `3`. That means we have successfully scaled down the replicas of the RabbitMQ replicaset. + +## Cleaning Up + +To clean up the Kubernetes resources created by this tutorial, run: + +```bash +kubectl delete mg -n demo mg-replicaset +kubectl delete RabbitMQopsrequest -n demo mops-vscale-replicaset +``` \ No newline at end of file diff --git a/docs/guides/rabbitmq/scaling/horizontal-scaling/sharding.md b/docs/guides/rabbitmq/scaling/horizontal-scaling/sharding.md new file mode 100644 index 0000000000..85daf6bc49 --- /dev/null +++ b/docs/guides/rabbitmq/scaling/horizontal-scaling/sharding.md @@ -0,0 +1,1436 @@ +--- +title: Horizontal Scaling RabbitMQ Shard +menu: + docs_{{ .version }}: + identifier: mg-horizontal-scaling-shard + name: Sharding + parent: mg-horizontal-scaling + weight: 30 +menu_name: docs_{{ .version }} +section_menu_id: guides +--- + +> New to KubeDB? Please start [here](/docs/README.md). + +# Horizontal Scale RabbitMQ Shard + +This guide will show you how to use `KubeDB` Ops-manager operator to scale the shard of a RabbitMQ database. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +- Install `KubeDB` Provisioner and Ops-manager operator in your cluster following the steps [here](/docs/setup/README.md). 
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [RabbitMQ](/docs/guides/RabbitMQ/concepts/RabbitMQ.md)
+  - [Sharding](/docs/guides/RabbitMQ/clustering/sharding.md)
+  - [RabbitMQOpsRequest](/docs/guides/RabbitMQ/concepts/opsrequest.md)
+  - [Horizontal Scaling Overview](/docs/guides/RabbitMQ/scaling/horizontal-scaling/overview.md)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+> **Note:** YAML files used in this tutorial are stored in the [docs/examples/RabbitMQ](/docs/examples/RabbitMQ) directory of the [kubedb/docs](https://github.com/kubedb/docs) repository.
+
+## Apply Horizontal Scaling on Sharded Database
+
+Here, we are going to deploy a `RabbitMQ` sharded database using a version supported by the `KubeDB` operator. Then we are going to apply horizontal scaling on it.
+
+### Prepare RabbitMQ Sharded Database
+
+Now, we are going to deploy a `RabbitMQ` sharded database with version `4.4.26`.
+
+### Deploy RabbitMQ Sharded Database
+
+In this section, we are going to deploy a RabbitMQ sharded database. Then, in the next sections, we will scale the shards of the database using the `RabbitMQOpsRequest` CRD. Below is the YAML of the `RabbitMQ` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: RabbitMQ
+metadata:
+  name: mg-sharding
+  namespace: demo
+spec:
+  version: 4.4.26
+  shardTopology:
+    configServer:
+      replicas: 3
+      storage:
+        resources:
+          requests:
+            storage: 1Gi
+        storageClassName: standard
+    mongos:
+      replicas: 2
+    shard:
+      replicas: 3
+      shards: 2
+      storage:
+        resources:
+          requests:
+            storage: 1Gi
+        storageClassName: standard
+```
+
+Let's create the `RabbitMQ` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/RabbitMQ/scaling/mg-shard.yaml
+RabbitMQ.kubedb.com/mg-sharding created
+```
+
+Now, wait until `mg-sharding` has status `Ready`. i.e.,
+
+```bash
+$ kubectl get mg -n demo
+NAME          VERSION   STATUS   AGE
+mg-sharding   4.4.26    Ready    10m
+```
+
+##### Verify Number of Shard and Shard Replicas
+
+Let's check the number of shards this database has from the RabbitMQ object and the number of StatefulSets it has,
+
+```bash
+$ kubectl get RabbitMQ -n demo mg-sharding -o json | jq '.spec.shardTopology.shard.shards'
+2
+
+$ kubectl get sts -n demo
+NAME                    READY   AGE
+mg-sharding-configsvr   3/3     23m
+mg-sharding-mongos      2/2     22m
+mg-sharding-shard0      3/3     23m
+mg-sharding-shard1      3/3     23m
+```
+
+So, we can see from both outputs that the database has 2 shards.
+
+Now, let's check the number of replicas each shard has from the RabbitMQ object and the number of pods each StatefulSet has,
+
+```bash
+$ kubectl get RabbitMQ -n demo mg-sharding -o json | jq '.spec.shardTopology.shard.replicas'
+3
+
+$ kubectl get sts -n demo mg-sharding-shard0 -o json | jq '.spec.replicas'
+3
+```
+
+We can see from both outputs that the database has 3 replicas in each shard.
+
+Also, we can verify the number of shards with an internal RabbitMQ command by exec-ing into a mongos node.
+
+First, we need to get the username and password to connect to a mongos instance,
+
+```bash
+$ kubectl get secrets -n demo mg-sharding-auth -o jsonpath='{.data.\username}' | base64 -d
+root
+
+$ kubectl get secrets -n demo mg-sharding-auth -o jsonpath='{.data.\password}' | base64 -d
+xBC-EwMFivFCgUlK
+```
+
+Now let's connect to a mongos instance and run a RabbitMQ internal command to check the number of shards,
+
+```bash
+$ kubectl exec -n demo mg-sharding-mongos-0 -- mongo admin -u root -p xBC-EwMFivFCgUlK --eval "sh.status()" --quiet
+--- Sharding Status ---
+  sharding version: {
+    "_id" : 1,
+    "minCompatibleVersion" : 5,
+    "currentVersion" : 6,
+    "clusterId" : ObjectId("603e5a4bec470e6b4197e10b")
+  }
+  shards:
+    {  "_id" : "shard0",  "host" : "shard0/mg-sharding-shard0-0.mg-sharding-shard0-pods.demo.svc.cluster.local:27017,mg-sharding-shard0-1.mg-sharding-shard0-pods.demo.svc.cluster.local:27017,mg-sharding-shard0-2.mg-sharding-shard0-pods.demo.svc.cluster.local:27017",  "state" : 1 }
+    {  "_id" : "shard1",  "host" : "shard1/mg-sharding-shard1-0.mg-sharding-shard1-pods.demo.svc.cluster.local:27017,mg-sharding-shard1-1.mg-sharding-shard1-pods.demo.svc.cluster.local:27017,mg-sharding-shard1-2.mg-sharding-shard1-pods.demo.svc.cluster.local:27017",  "state" : 1 }
+  active mongoses:
+    "4.4.26" : 2
+  autosplit:
+    Currently enabled: yes
+  balancer:
+    Currently enabled:  yes
+    Currently running:  no
+    Failed balancer rounds in last 5 attempts:  0
+    Migration Results for the last 24 hours:
+      No recent migrations
+  databases:
+    {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
+```
+
+We can see from the above output that the number of shards is 2.
+
+Also, we can verify the number of replicas each shard has with an internal RabbitMQ command by exec-ing into a shard node.
+ +Now let's connect to a shard instance and run a RabbitMQ internal command to check the number of replicas, + +```bash +$ kubectl exec -n demo mg-sharding-shard0-0 -- mongo admin -u root -p xBC-EwMFivFCgUlK --eval "db.adminCommand( { replSetGetStatus : 1 } ).members" --quiet +[ + { + "_id" : 0, + "name" : "mg-sharding-shard0-0.mg-sharding-shard0-pods.demo.svc.cluster.local:27017", + "health" : 1, + "state" : 1, + "stateStr" : "PRIMARY", + "uptime" : 338, + "optime" : { + "ts" : Timestamp(1614699416, 1), + "t" : NumberLong(1) + }, + "optimeDate" : ISODate("2021-03-02T15:36:56Z"), + "syncingTo" : "", + "syncSourceHost" : "", + "syncSourceId" : -1, + "infoMessage" : "", + "electionTime" : Timestamp(1614699092, 1), + "electionDate" : ISODate("2021-03-02T15:31:32Z"), + "configVersion" : 3, + "self" : true, + "lastHeartbeatMessage" : "" + }, + { + "_id" : 1, + "name" : "mg-sharding-shard0-1.mg-sharding-shard0-pods.demo.svc.cluster.local:27017", + "health" : 1, + "state" : 2, + "stateStr" : "SECONDARY", + "uptime" : 291, + "optime" : { + "ts" : Timestamp(1614699413, 1), + "t" : NumberLong(1) + }, + "optimeDurable" : { + "ts" : Timestamp(1614699413, 1), + "t" : NumberLong(1) + }, + "optimeDate" : ISODate("2021-03-02T15:36:53Z"), + "optimeDurableDate" : ISODate("2021-03-02T15:36:53Z"), + "lastHeartbeat" : ISODate("2021-03-02T15:36:56.692Z"), + "lastHeartbeatRecv" : ISODate("2021-03-02T15:36:56.015Z"), + "pingMs" : NumberLong(0), + "lastHeartbeatMessage" : "", + "syncingTo" : "mg-sharding-shard0-0.mg-sharding-shard0-pods.demo.svc.cluster.local:27017", + "syncSourceHost" : "mg-sharding-shard0-0.mg-sharding-shard0-pods.demo.svc.cluster.local:27017", + "syncSourceId" : 0, + "infoMessage" : "", + "configVersion" : 3 + }, + { + "_id" : 2, + "name" : "mg-sharding-shard0-2.mg-sharding-shard0-pods.demo.svc.cluster.local:27017", + "health" : 1, + "state" : 2, + "stateStr" : "SECONDARY", + "uptime" : 259, + "optime" : { + "ts" : Timestamp(1614699413, 1), + "t" : NumberLong(1) + }, + "optimeDurable" : { + "ts" : Timestamp(1614699413, 1), + "t" : NumberLong(1) + }, + "optimeDate" : ISODate("2021-03-02T15:36:53Z"), + "optimeDurableDate" : ISODate("2021-03-02T15:36:53Z"), + "lastHeartbeat" : ISODate("2021-03-02T15:36:56.732Z"), + "lastHeartbeatRecv" : ISODate("2021-03-02T15:36:57.773Z"), + "pingMs" : NumberLong(0), + "lastHeartbeatMessage" : "", + "syncingTo" : "mg-sharding-shard0-0.mg-sharding-shard0-pods.demo.svc.cluster.local:27017", + "syncSourceHost" : "mg-sharding-shard0-0.mg-sharding-shard0-pods.demo.svc.cluster.local:27017", + "syncSourceId" : 0, + "infoMessage" : "", + "configVersion" : 3 + } +] +``` + +We can see from the above output that the number of replica is 3. + +##### Verify Number of ConfigServer + +Let's check the number of replicas this database has from the RabbitMQ object, number of pods the statefulset have, + +```bash +$ kubectl get RabbitMQ -n demo mg-sharding -o json | jq '.spec.shardTopology.configServer.replicas' +3 + +$ kubectl get sts -n demo mg-sharding-configsvr -o json | jq '.spec.replicas' +3 +``` + +We can see from both command that the database has `3` replicas in the configServer. 
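+
+If you just want the member count rather than the full status document, you can let the shell do the counting. A minimal sketch, assuming the same root credentials fetched earlier:
+
+```bash
+# Print only the number of replica set members of the config server;
+# for the current topology this should print 3.
+$ kubectl exec -n demo mg-sharding-configsvr-0 -- mongo admin -u root -p xBC-EwMFivFCgUlK \
+    --eval "db.adminCommand( { replSetGetStatus : 1 } ).members.length" --quiet
+3
+```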
+ +Now let's connect to a RabbitMQ instance and run a RabbitMQ internal command to check the number of replicas, + +```bash +$ kubectl exec -n demo mg-sharding-configsvr-0 -- mongo admin -u root -p xBC-EwMFivFCgUlK --eval "db.adminCommand( { replSetGetStatus : 1 } ).members" --quiet +[ + { + "_id" : 0, + "name" : "mg-sharding-configsvr-0.mg-sharding-configsvr-pods.demo.svc.cluster.local:27017", + "health" : 1, + "state" : 1, + "stateStr" : "PRIMARY", + "uptime" : 423, + "optime" : { + "ts" : Timestamp(1614699492, 1), + "t" : NumberLong(1) + }, + "optimeDate" : ISODate("2021-03-02T15:38:12Z"), + "syncingTo" : "", + "syncSourceHost" : "", + "syncSourceId" : -1, + "infoMessage" : "", + "electionTime" : Timestamp(1614699081, 2), + "electionDate" : ISODate("2021-03-02T15:31:21Z"), + "configVersion" : 3, + "self" : true, + "lastHeartbeatMessage" : "" + }, + { + "_id" : 1, + "name" : "mg-sharding-configsvr-1.mg-sharding-configsvr-pods.demo.svc.cluster.local:27017", + "health" : 1, + "state" : 2, + "stateStr" : "SECONDARY", + "uptime" : 385, + "optime" : { + "ts" : Timestamp(1614699492, 1), + "t" : NumberLong(1) + }, + "optimeDurable" : { + "ts" : Timestamp(1614699492, 1), + "t" : NumberLong(1) + }, + "optimeDate" : ISODate("2021-03-02T15:38:12Z"), + "optimeDurableDate" : ISODate("2021-03-02T15:38:12Z"), + "lastHeartbeat" : ISODate("2021-03-02T15:38:13.573Z"), + "lastHeartbeatRecv" : ISODate("2021-03-02T15:38:12.725Z"), + "pingMs" : NumberLong(0), + "lastHeartbeatMessage" : "", + "syncingTo" : "mg-sharding-configsvr-0.mg-sharding-configsvr-pods.demo.svc.cluster.local:27017", + "syncSourceHost" : "mg-sharding-configsvr-0.mg-sharding-configsvr-pods.demo.svc.cluster.local:27017", + "syncSourceId" : 0, + "infoMessage" : "", + "configVersion" : 3 + }, + { + "_id" : 2, + "name" : "mg-sharding-configsvr-2.mg-sharding-configsvr-pods.demo.svc.cluster.local:27017", + "health" : 1, + "state" : 2, + "stateStr" : "SECONDARY", + "uptime" : 340, + "optime" : { + "ts" : Timestamp(1614699490, 8), + "t" : NumberLong(1) + }, + "optimeDurable" : { + "ts" : Timestamp(1614699490, 8), + "t" : NumberLong(1) + }, + "optimeDate" : ISODate("2021-03-02T15:38:10Z"), + "optimeDurableDate" : ISODate("2021-03-02T15:38:10Z"), + "lastHeartbeat" : ISODate("2021-03-02T15:38:11.665Z"), + "lastHeartbeatRecv" : ISODate("2021-03-02T15:38:11.827Z"), + "pingMs" : NumberLong(0), + "lastHeartbeatMessage" : "", + "syncingTo" : "mg-sharding-configsvr-0.mg-sharding-configsvr-pods.demo.svc.cluster.local:27017", + "syncSourceHost" : "mg-sharding-configsvr-0.mg-sharding-configsvr-pods.demo.svc.cluster.local:27017", + "syncSourceId" : 0, + "infoMessage" : "", + "configVersion" : 3 + } +] +``` + +We can see from the above output that the configServer has 3 nodes. + +##### Verify Number of Mongos +Let's check the number of replicas this database has from the RabbitMQ object, number of pods the statefulset have, + +```bash +$ kubectl get RabbitMQ -n demo mg-sharding -o json | jq '.spec.shardTopology.mongos.replicas' +2 + +$ kubectl get sts -n demo mg-sharding-mongos -o json | jq '.spec.replicas' +2 +``` + +We can see from both command that the database has `2` replicas in the mongos. 
+
+Now let's connect to a RabbitMQ instance and run a RabbitMQ internal command to check the number of replicas,
+
+```bash
+$ kubectl exec -n demo mg-sharding-mongos-0 -- mongo admin -u root -p xBC-EwMFivFCgUlK --eval "sh.status()" --quiet
+--- Sharding Status ---
+  sharding version: {
+    "_id" : 1,
+    "minCompatibleVersion" : 5,
+    "currentVersion" : 6,
+    "clusterId" : ObjectId("603e5a4bec470e6b4197e10b")
+  }
+  shards:
+    {  "_id" : "shard0",  "host" : "shard0/mg-sharding-shard0-0.mg-sharding-shard0-pods.demo.svc.cluster.local:27017,mg-sharding-shard0-1.mg-sharding-shard0-pods.demo.svc.cluster.local:27017,mg-sharding-shard0-2.mg-sharding-shard0-pods.demo.svc.cluster.local:27017",  "state" : 1 }
+    {  "_id" : "shard1",  "host" : "shard1/mg-sharding-shard1-0.mg-sharding-shard1-pods.demo.svc.cluster.local:27017,mg-sharding-shard1-1.mg-sharding-shard1-pods.demo.svc.cluster.local:27017,mg-sharding-shard1-2.mg-sharding-shard1-pods.demo.svc.cluster.local:27017",  "state" : 1 }
+  active mongoses:
+    "4.4.26" : 2
+  autosplit:
+    Currently enabled: yes
+  balancer:
+    Currently enabled:  yes
+    Currently running:  no
+    Failed balancer rounds in last 5 attempts:  0
+    Migration Results for the last 24 hours:
+      No recent migrations
+  databases:
+    {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
+```
+
+We can see from the above output that the mongos has 2 active nodes.
+
+We are now ready to apply the `RabbitMQOpsRequest` CR to scale up and down all the components of the database.
+
+### Scale Up
+
+Here, we are going to scale up all the components of the database to meet the desired number of replicas after scaling.
+
+#### Create RabbitMQOpsRequest
+
+In order to scale up, we have to create a `RabbitMQOpsRequest` CR with our configuration. Below is the YAML of the `RabbitMQOpsRequest` CR that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: RabbitMQOpsRequest
+metadata:
+  name: mops-hscale-up-shard
+  namespace: demo
+spec:
+  type: HorizontalScaling
+  databaseRef:
+    name: mg-sharding
+  horizontalScaling:
+    shard:
+      shards: 3
+      replicas: 4
+    mongos:
+      replicas: 3
+    configServer:
+      replicas: 4
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing the horizontal scaling operation on the `mg-sharding` database.
+- `spec.type` specifies that we are performing `HorizontalScaling` on our database.
+- `spec.horizontalScaling.shard.shards` specifies the desired number of shards after scaling.
+- `spec.horizontalScaling.shard.replicas` specifies the desired number of replicas of each shard after scaling.
+- `spec.horizontalScaling.mongos.replicas` specifies the desired number of mongos replicas after scaling.
+- `spec.horizontalScaling.configServer.replicas` specifies the desired number of configServer replicas after scaling.
+
+> **Note:** If you don't want to scale all the components together, you can specify only the components (shard, configServer and mongos) that you want to scale.
+
+Let's create the `RabbitMQOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/RabbitMQ/scaling/horizontal-scaling/mops-hscale-up-shard.yaml
+RabbitMQopsrequest.ops.kubedb.com/mops-hscale-up-shard created
+```
+
+#### Verify scaling up is successful
+
+If everything goes well, `KubeDB` Ops-manager operator will update the shards and replicas of the `RabbitMQ` object and the related `StatefulSets` and `Pods`.
+
+Let's wait for `RabbitMQOpsRequest` to be `Successful`.
Run the following command to watch `RabbitMQOpsRequest` CR, + +```bash +$ watch kubectl get RabbitMQopsrequest -n demo +Every 2.0s: kubectl get RabbitMQopsrequest -n demo +NAME TYPE STATUS AGE +mops-hscale-up-shard HorizontalScaling Successful 9m57s +``` + +We can see from the above output that the `RabbitMQOpsRequest` has succeeded. If we describe the `RabbitMQOpsRequest` we will get an overview of the steps that were followed to scale the database. + +```bash +$ kubectl describe RabbitMQopsrequest -n demo mops-hscale-up-shard +Name: mops-hscale-up-shard +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: RabbitMQOpsRequest +Metadata: + Creation Timestamp: 2021-03-02T16:23:16Z + Generation: 1 + Managed Fields: + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:annotations: + .: + f:kubectl.kubernetes.io/last-applied-configuration: + f:spec: + .: + f:databaseRef: + .: + f:name: + f:horizontalScaling: + .: + f:configServer: + .: + f:replicas: + f:mongos: + .: + f:replicas: + f:shard: + .: + f:replicas: + f:shards: + f:type: + Manager: kubectl-client-side-apply + Operation: Update + Time: 2021-03-02T16:23:16Z + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:status: + .: + f:conditions: + f:observedGeneration: + f:phase: + Manager: kubedb-enterprise + Operation: Update + Time: 2021-03-02T16:23:16Z + Resource Version: 147313 + Self Link: /apis/ops.kubedb.com/v1alpha1/namespaces/demo/RabbitMQopsrequests/mops-hscale-up-shard + UID: 982014fc-1655-44e7-946c-859626ae0247 +Spec: + Database Ref: + Name: mg-sharding + Horizontal Scaling: + Config Server: + Replicas: 4 + Mongos: + Replicas: 3 + Shard: + Replicas: 4 + Shards: 3 + Type: HorizontalScaling +Status: + Conditions: + Last Transition Time: 2021-03-02T16:23:16Z + Message: RabbitMQ ops request is horizontally scaling database + Observed Generation: 1 + Reason: HorizontalScaling + Status: True + Type: HorizontalScaling + Last Transition Time: 2021-03-02T16:25:31Z + Message: Successfully Horizontally Scaled Up Shard Replicas + Observed Generation: 1 + Reason: ScaleUpShardReplicas + Status: True + Type: ScaleUpShardReplicas + Last Transition Time: 2021-03-02T16:33:07Z + Message: Successfully Horizontally Scaled Up Shard + Observed Generation: 1 + Reason: ScaleUpShard + Status: True + Type: ScaleUpShard + Last Transition Time: 2021-03-02T16:34:35Z + Message: Successfully Horizontally Scaled Up ConfigServer + Observed Generation: 1 + Reason: ScaleUpConfigServer + Status: True + Type: ScaleUpConfigServer + Last Transition Time: 2021-03-02T16:36:30Z + Message: Successfully Horizontally Scaled Mongos + Observed Generation: 1 + Reason: ScaleMongos + Status: True + Type: ScaleMongos + Last Transition Time: 2021-03-02T16:36:30Z + Message: Successfully Horizontally Scaled RabbitMQ + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal PauseDatabase 13m KubeDB Ops-manager operator Pausing RabbitMQ demo/mg-sharding + Normal PauseDatabase 13m KubeDB Ops-manager operator Successfully paused RabbitMQ demo/mg-sharding + Normal ScaleUpShardReplicas 11m KubeDB Ops-manager operator Successfully Horizontally Scaled Up Shard Replicas + Normal ResumeDatabase 11m KubeDB Ops-manager operator Resuming RabbitMQ demo/mg-sharding + Normal ResumeDatabase 11m KubeDB Ops-manager operator Successfully resumed RabbitMQ 
demo/mg-sharding + Normal ScaleUpShardReplicas 11m KubeDB Ops-manager operator Successfully Horizontally Scaled Up Shard Replicas + Normal ScaleUpShardReplicas 11m KubeDB Ops-manager operator Successfully Horizontally Scaled Up Shard Replicas + Normal Progressing 8m20s KubeDB Ops-manager operator Successfully updated StatefulSets Resources + Normal Progressing 4m5s KubeDB Ops-manager operator Successfully updated StatefulSets Resources + Normal ScaleUpShard 3m59s KubeDB Ops-manager operator Successfully Horizontally Scaled Up Shard + Normal PauseDatabase 3m59s KubeDB Ops-manager operator Pausing RabbitMQ demo/mg-sharding + Normal PauseDatabase 3m59s KubeDB Ops-manager operator Successfully paused RabbitMQ demo/mg-sharding + Normal ScaleUpConfigServer 2m31s KubeDB Ops-manager operator Successfully Horizontally Scaled Up ConfigServer + Normal ScaleMongos 36s KubeDB Ops-manager operator Successfully Horizontally Scaled Mongos + Normal ResumeDatabase 36s KubeDB Ops-manager operator Resuming RabbitMQ demo/mg-sharding + Normal ResumeDatabase 36s KubeDB Ops-manager operator Successfully resumed RabbitMQ demo/mg-sharding + Normal Successful 36s KubeDB Ops-manager operator Successfully Horizontally Scaled Database +``` + +#### Verify Number of Shard and Shard Replicas + +Now, we are going to verify the number of shards this database has from the RabbitMQ object, number of statefulsets it has, + +```bash +$ kubectl get RabbitMQ -n demo mg-sharding -o json | jq '.spec.shardTopology.shard.shards' +3 + +$ kubectl get sts -n demo +NAME READY AGE +mg-sharding-configsvr 4/4 66m +mg-sharding-mongos 3/3 64m +mg-sharding-shard0 4/4 66m +mg-sharding-shard1 4/4 66m +mg-sharding-shard2 4/4 12m +``` + +Now let's connect to a mongos instance and run a RabbitMQ internal command to check the number of shards, +```bash +$ kubectl exec -n demo mg-sharding-mongos-0 -- mongo admin -u root -p xBC-EwMFivFCgUlK --eval "sh.status()" --quiet +--- Sharding Status --- + sharding version: { + "_id" : 1, + "minCompatibleVersion" : 5, + "currentVersion" : 6, + "clusterId" : ObjectId("603e5a4bec470e6b4197e10b") + } + shards: + { "_id" : "shard0", "host" : "shard0/mg-sharding-shard0-0.mg-sharding-shard0-pods.demo.svc.cluster.local:27017,mg-sharding-shard0-1.mg-sharding-shard0-pods.demo.svc.cluster.local:27017,mg-sharding-shard0-2.mg-sharding-shard0-pods.demo.svc.cluster.local:27017,mg-sharding-shard0-3.mg-sharding-shard0-pods.demo.svc.cluster.local:27017", "state" : 1 } + { "_id" : "shard1", "host" : "shard1/mg-sharding-shard1-0.mg-sharding-shard1-pods.demo.svc.cluster.local:27017,mg-sharding-shard1-1.mg-sharding-shard1-pods.demo.svc.cluster.local:27017,mg-sharding-shard1-2.mg-sharding-shard1-pods.demo.svc.cluster.local:27017,mg-sharding-shard1-3.mg-sharding-shard1-pods.demo.svc.cluster.local:27017", "state" : 1 } + { "_id" : "shard2", "host" : "shard2/mg-sharding-shard2-0.mg-sharding-shard2-pods.demo.svc.cluster.local:27017,mg-sharding-shard2-1.mg-sharding-shard2-pods.demo.svc.cluster.local:27017,mg-sharding-shard2-2.mg-sharding-shard2-pods.demo.svc.cluster.local:27017,mg-sharding-shard2-3.mg-sharding-shard2-pods.demo.svc.cluster.local:27017", "state" : 1 } + active mongoses: + "4.4.26" : 3 + autosplit: + Currently enabled: yes + balancer: + Currently enabled: yes + Currently running: no + Failed balancer rounds in last 5 attempts: 2 + Last reported error: Couldn't get a connection within the time limit + Time of Reported error: Tue Mar 02 2021 16:17:53 GMT+0000 (UTC) + Migration Results for the last 24 hours: + No recent 
migrations + databases: + { "_id" : "config", "primary" : "config", "partitioned" : true } + config.system.sessions + shard key: { "_id" : 1 } + unique: false + balancing: true + chunks: + shard0 1 + { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard0 Timestamp(1, 0) +``` + +From all the above outputs we can see that the number of shards are `3`. + +Now, we are going to verify the number of replicas each shard has from the RabbitMQ object, number of pods the statefulset have, + +```bash +$ kubectl get RabbitMQ -n demo mg-sharding -o json | jq '.spec.shardTopology.shard.replicas' +4 + +$ kubectl get sts -n demo mg-sharding-shard0 -o json | jq '.spec.replicas' +4 +``` + +Now let's connect to a shard instance and run a RabbitMQ internal command to check the number of replicas, +```bash +$ kubectl exec -n demo mg-sharding-shard0-0 -- mongo admin -u root -p xBC-EwMFivFCgUlK --eval "db.adminCommand( { replSetGetStatus : 1 } ).members" --quiet +[ + { + "_id" : 0, + "name" : "mg-sharding-shard0-0.mg-sharding-shard0-pods.demo.svc.cluster.local:27017", + "health" : 1, + "state" : 2, + "stateStr" : "SECONDARY", + "uptime" : 1464, + "optime" : { + "ts" : Timestamp(1614703143, 10), + "t" : NumberLong(2) + }, + "optimeDate" : ISODate("2021-03-02T16:39:03Z"), + "syncingTo" : "mg-sharding-shard0-1.mg-sharding-shard0-pods.demo.svc.cluster.local:27017", + "syncSourceHost" : "mg-sharding-shard0-1.mg-sharding-shard0-pods.demo.svc.cluster.local:27017", + "syncSourceId" : 1, + "infoMessage" : "", + "configVersion" : 4, + "self" : true, + "lastHeartbeatMessage" : "" + }, + { + "_id" : 1, + "name" : "mg-sharding-shard0-1.mg-sharding-shard0-pods.demo.svc.cluster.local:27017", + "health" : 1, + "state" : 1, + "stateStr" : "PRIMARY", + "uptime" : 1433, + "optime" : { + "ts" : Timestamp(1614703143, 10), + "t" : NumberLong(2) + }, + "optimeDurable" : { + "ts" : Timestamp(1614703143, 10), + "t" : NumberLong(2) + }, + "optimeDate" : ISODate("2021-03-02T16:39:03Z"), + "optimeDurableDate" : ISODate("2021-03-02T16:39:03Z"), + "lastHeartbeat" : ISODate("2021-03-02T16:39:07.800Z"), + "lastHeartbeatRecv" : ISODate("2021-03-02T16:39:08.087Z"), + "pingMs" : NumberLong(6), + "lastHeartbeatMessage" : "", + "syncingTo" : "", + "syncSourceHost" : "", + "syncSourceId" : -1, + "infoMessage" : "", + "electionTime" : Timestamp(1614701678, 2), + "electionDate" : ISODate("2021-03-02T16:14:38Z"), + "configVersion" : 4 + }, + { + "_id" : 2, + "name" : "mg-sharding-shard0-2.mg-sharding-shard0-pods.demo.svc.cluster.local:27017", + "health" : 1, + "state" : 2, + "stateStr" : "SECONDARY", + "uptime" : 1433, + "optime" : { + "ts" : Timestamp(1614703143, 10), + "t" : NumberLong(2) + }, + "optimeDurable" : { + "ts" : Timestamp(1614703143, 10), + "t" : NumberLong(2) + }, + "optimeDate" : ISODate("2021-03-02T16:39:03Z"), + "optimeDurableDate" : ISODate("2021-03-02T16:39:03Z"), + "lastHeartbeat" : ISODate("2021-03-02T16:39:08.575Z"), + "lastHeartbeatRecv" : ISODate("2021-03-02T16:39:08.580Z"), + "pingMs" : NumberLong(0), + "lastHeartbeatMessage" : "", + "syncingTo" : "mg-sharding-shard0-1.mg-sharding-shard0-pods.demo.svc.cluster.local:27017", + "syncSourceHost" : "mg-sharding-shard0-1.mg-sharding-shard0-pods.demo.svc.cluster.local:27017", + "syncSourceId" : 1, + "infoMessage" : "", + "configVersion" : 4 + }, + { + "_id" : 3, + "name" : "mg-sharding-shard0-3.mg-sharding-shard0-pods.demo.svc.cluster.local:27017", + "health" : 1, + "state" : 2, + "stateStr" : "SECONDARY", + "uptime" : 905, + "optime" : { + "ts" : 
Timestamp(1614703143, 10), + "t" : NumberLong(2) + }, + "optimeDurable" : { + "ts" : Timestamp(1614703143, 10), + "t" : NumberLong(2) + }, + "optimeDate" : ISODate("2021-03-02T16:39:03Z"), + "optimeDurableDate" : ISODate("2021-03-02T16:39:03Z"), + "lastHeartbeat" : ISODate("2021-03-02T16:39:06.683Z"), + "lastHeartbeatRecv" : ISODate("2021-03-02T16:39:07.980Z"), + "pingMs" : NumberLong(10), + "lastHeartbeatMessage" : "", + "syncingTo" : "mg-sharding-shard0-1.mg-sharding-shard0-pods.demo.svc.cluster.local:27017", + "syncSourceHost" : "mg-sharding-shard0-1.mg-sharding-shard0-pods.demo.svc.cluster.local:27017", + "syncSourceId" : 1, + "infoMessage" : "", + "configVersion" : 4 + } +] +``` + +From all the above outputs we can see that the replicas of each shard has is `4`. + +#### Verify Number of ConfigServer Replicas +Now, we are going to verify the number of replicas this database has from the RabbitMQ object, number of pods the statefulset have, + +```bash +$ kubectl get RabbitMQ -n demo mg-sharding -o json | jq '.spec.shardTopology.configServer.replicas' +4 + +$ kubectl get sts -n demo mg-sharding-configsvr -o json | jq '.spec.replicas' +4 +``` + +Now let's connect to a RabbitMQ instance and run a RabbitMQ internal command to check the number of replicas, +```bash +$ kubectl exec -n demo mg-sharding-configsvr-0 -- mongo admin -u root -p xBC-EwMFivFCgUlK --eval "db.adminCommand( { replSetGetStatus : 1 } ).members" --quiet +[ + { + "_id" : 0, + "name" : "mg-sharding-configsvr-0.mg-sharding-configsvr-pods.demo.svc.cluster.local:27017", + "health" : 1, + "state" : 2, + "stateStr" : "SECONDARY", + "uptime" : 1639, + "optime" : { + "ts" : Timestamp(1614703138, 2), + "t" : NumberLong(2) + }, + "optimeDate" : ISODate("2021-03-02T16:38:58Z"), + "syncingTo" : "mg-sharding-configsvr-2.mg-sharding-configsvr-pods.demo.svc.cluster.local:27017", + "syncSourceHost" : "mg-sharding-configsvr-2.mg-sharding-configsvr-pods.demo.svc.cluster.local:27017", + "syncSourceId" : 2, + "infoMessage" : "", + "configVersion" : 4, + "self" : true, + "lastHeartbeatMessage" : "" + }, + { + "_id" : 1, + "name" : "mg-sharding-configsvr-1.mg-sharding-configsvr-pods.demo.svc.cluster.local:27017", + "health" : 1, + "state" : 1, + "stateStr" : "PRIMARY", + "uptime" : 1623, + "optime" : { + "ts" : Timestamp(1614703138, 2), + "t" : NumberLong(2) + }, + "optimeDurable" : { + "ts" : Timestamp(1614703138, 2), + "t" : NumberLong(2) + }, + "optimeDate" : ISODate("2021-03-02T16:38:58Z"), + "optimeDurableDate" : ISODate("2021-03-02T16:38:58Z"), + "lastHeartbeat" : ISODate("2021-03-02T16:38:58.979Z"), + "lastHeartbeatRecv" : ISODate("2021-03-02T16:38:59.291Z"), + "pingMs" : NumberLong(3), + "lastHeartbeatMessage" : "", + "syncingTo" : "", + "syncSourceHost" : "", + "syncSourceId" : -1, + "infoMessage" : "", + "electionTime" : Timestamp(1614701497, 2), + "electionDate" : ISODate("2021-03-02T16:11:37Z"), + "configVersion" : 4 + }, + { + "_id" : 2, + "name" : "mg-sharding-configsvr-2.mg-sharding-configsvr-pods.demo.svc.cluster.local:27017", + "health" : 1, + "state" : 2, + "stateStr" : "SECONDARY", + "uptime" : 1623, + "optime" : { + "ts" : Timestamp(1614703138, 2), + "t" : NumberLong(2) + }, + "optimeDurable" : { + "ts" : Timestamp(1614703138, 2), + "t" : NumberLong(2) + }, + "optimeDate" : ISODate("2021-03-02T16:38:58Z"), + "optimeDurableDate" : ISODate("2021-03-02T16:38:58Z"), + "lastHeartbeat" : ISODate("2021-03-02T16:38:58.885Z"), + "lastHeartbeatRecv" : ISODate("2021-03-02T16:39:00.188Z"), + "pingMs" : NumberLong(3), + 
"lastHeartbeatMessage" : "", + "syncingTo" : "mg-sharding-configsvr-1.mg-sharding-configsvr-pods.demo.svc.cluster.local:27017", + "syncSourceHost" : "mg-sharding-configsvr-1.mg-sharding-configsvr-pods.demo.svc.cluster.local:27017", + "syncSourceId" : 1, + "infoMessage" : "", + "configVersion" : 4 + }, + { + "_id" : 3, + "name" : "mg-sharding-configsvr-3.mg-sharding-configsvr-pods.demo.svc.cluster.local:27017", + "health" : 1, + "state" : 2, + "stateStr" : "SECONDARY", + "uptime" : 296, + "optime" : { + "ts" : Timestamp(1614703138, 2), + "t" : NumberLong(2) + }, + "optimeDurable" : { + "ts" : Timestamp(1614703138, 2), + "t" : NumberLong(2) + }, + "optimeDate" : ISODate("2021-03-02T16:38:58Z"), + "optimeDurableDate" : ISODate("2021-03-02T16:38:58Z"), + "lastHeartbeat" : ISODate("2021-03-02T16:38:58.977Z"), + "lastHeartbeatRecv" : ISODate("2021-03-02T16:39:00.276Z"), + "pingMs" : NumberLong(1), + "lastHeartbeatMessage" : "", + "syncingTo" : "mg-sharding-configsvr-1.mg-sharding-configsvr-pods.demo.svc.cluster.local:27017", + "syncSourceHost" : "mg-sharding-configsvr-1.mg-sharding-configsvr-pods.demo.svc.cluster.local:27017", + "syncSourceId" : 1, + "infoMessage" : "", + "configVersion" : 4 + } +] +``` + +From all the above outputs we can see that the replicas of the configServer is `3`. That means we have successfully scaled up the replicas of the RabbitMQ configServer replicas. + +#### Verify Number of Mongos Replicas +Now, we are going to verify the number of replicas this database has from the RabbitMQ object, number of pods the statefulset have, + +```bash +$ kubectl get RabbitMQ -n demo mg-sharding -o json | jq '.spec.shardTopology.mongos.replicas' +3 + +$ kubectl get sts -n demo mg-sharding-mongos -o json | jq '.spec.replicas' +3 +``` + +Now let's connect to a RabbitMQ instance and run a RabbitMQ internal command to check the number of replicas, +```bash +$ kubectl exec -n demo mg-sharding-mongos-0 -- mongo admin -u root -p xBC-EwMFivFCgUlK --eval "sh.status()" --quiet +--- Sharding Status --- + sharding version: { + "_id" : 1, + "minCompatibleVersion" : 5, + "currentVersion" : 6, + "clusterId" : ObjectId("603e5a4bec470e6b4197e10b") + } + shards: + { "_id" : "shard0", "host" : "shard0/mg-sharding-shard0-0.mg-sharding-shard0-pods.demo.svc.cluster.local:27017,mg-sharding-shard0-1.mg-sharding-shard0-pods.demo.svc.cluster.local:27017,mg-sharding-shard0-2.mg-sharding-shard0-pods.demo.svc.cluster.local:27017,mg-sharding-shard0-3.mg-sharding-shard0-pods.demo.svc.cluster.local:27017", "state" : 1 } + { "_id" : "shard1", "host" : "shard1/mg-sharding-shard1-0.mg-sharding-shard1-pods.demo.svc.cluster.local:27017,mg-sharding-shard1-1.mg-sharding-shard1-pods.demo.svc.cluster.local:27017,mg-sharding-shard1-2.mg-sharding-shard1-pods.demo.svc.cluster.local:27017,mg-sharding-shard1-3.mg-sharding-shard1-pods.demo.svc.cluster.local:27017", "state" : 1 } + { "_id" : "shard2", "host" : "shard2/mg-sharding-shard2-0.mg-sharding-shard2-pods.demo.svc.cluster.local:27017,mg-sharding-shard2-1.mg-sharding-shard2-pods.demo.svc.cluster.local:27017,mg-sharding-shard2-2.mg-sharding-shard2-pods.demo.svc.cluster.local:27017,mg-sharding-shard2-3.mg-sharding-shard2-pods.demo.svc.cluster.local:27017", "state" : 1 } + active mongoses: + "4.4.26" : 3 + autosplit: + Currently enabled: yes + balancer: + Currently enabled: yes + Currently running: no + Failed balancer rounds in last 5 attempts: 2 + Last reported error: Couldn't get a connection within the time limit + Time of Reported error: Tue Mar 02 2021 16:17:53 GMT+0000 
+    Migration Results for the last 24 hours:
+      No recent migrations
+  databases:
+    {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
+      config.system.sessions
+        shard key: { "_id" : 1 }
+        unique: false
+        balancing: true
+        chunks:
+          shard0  1
+        { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard0 Timestamp(1, 0)
+```
+
+From all the above outputs, we can see that the mongos now has `3` replicas. That means we have successfully scaled up the replicas of the RabbitMQ mongos.
+
+So, we have successfully scaled up all the components of the RabbitMQ database.
+
+### Scale Down
+
+Here, we are going to scale down all the components of the database to meet the desired number of replicas after scaling.
+
+#### Create RabbitMQOpsRequest
+
+In order to scale down, we have to create a `RabbitMQOpsRequest` CR with our configuration. Below is the YAML of the `RabbitMQOpsRequest` CR that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: RabbitMQOpsRequest
+metadata:
+  name: mops-hscale-down-shard
+  namespace: demo
+spec:
+  type: HorizontalScaling
+  databaseRef:
+    name: mg-sharding
+  horizontalScaling:
+    shard:
+      shards: 2
+      replicas: 3
+    mongos:
+      replicas: 2
+    configServer:
+      replicas: 3
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing the horizontal scaling operation on the `mg-sharding` database.
+- `spec.type` specifies that we are performing `HorizontalScaling` on our database.
+- `spec.horizontalScaling.shard.shards` specifies the desired number of shards after scaling.
+- `spec.horizontalScaling.shard.replicas` specifies the desired number of replicas of each shard after scaling.
+- `spec.horizontalScaling.configServer.replicas` specifies the desired number of configServer replicas after scaling.
+- `spec.horizontalScaling.mongos.replicas` specifies the desired number of mongos replicas after scaling.
+
+> **Note:** If you don't want to scale all the components together, you can specify only the components (shard, configServer and mongos) that you want to scale.
+
+Let's create the `RabbitMQOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/RabbitMQ/scaling/horizontal-scaling/mops-hscale-down-shard.yaml
+RabbitMQopsrequest.ops.kubedb.com/mops-hscale-down-shard created
+```
+
+#### Verify scaling down is successful
+
+If everything goes well, `KubeDB` Ops-manager operator will update the shards and replicas of the `RabbitMQ` object and the related `StatefulSets` and `Pods`.
+
+Let's wait for `RabbitMQOpsRequest` to be `Successful`. Run the following command to watch `RabbitMQOpsRequest` CR,
+
+```bash
+$ watch kubectl get RabbitMQopsrequest -n demo
+Every 2.0s: kubectl get RabbitMQopsrequest -n demo
+NAME                     TYPE                STATUS       AGE
+mops-hscale-down-shard   HorizontalScaling   Successful   81s
+```
+
+We can see from the above output that the `RabbitMQOpsRequest` has succeeded. If we describe the `RabbitMQOpsRequest` we will get an overview of the steps that were followed to scale down the database.
+ +```bash +$ kubectl describe RabbitMQopsrequest -n demo mops-hscale-down-shard +Name: mops-hscale-down-shard +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: RabbitMQOpsRequest +Metadata: + Creation Timestamp: 2021-03-02T16:41:11Z + Generation: 1 + Managed Fields: + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:annotations: + .: + f:kubectl.kubernetes.io/last-applied-configuration: + f:spec: + .: + f:databaseRef: + .: + f:name: + f:horizontalScaling: + .: + f:configServer: + .: + f:replicas: + f:mongos: + .: + f:replicas: + f:shard: + .: + f:replicas: + f:shards: + f:type: + Manager: kubectl-client-side-apply + Operation: Update + Time: 2021-03-02T16:41:11Z + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:status: + .: + f:conditions: + f:observedGeneration: + f:phase: + Manager: kubedb-enterprise + Operation: Update + Time: 2021-03-02T16:41:11Z + Resource Version: 149077 + Self Link: /apis/ops.kubedb.com/v1alpha1/namespaces/demo/RabbitMQopsrequests/mops-hscale-down-shard + UID: 0f83c457-9498-4144-a397-226141851751 +Spec: + Database Ref: + Name: mg-sharding + Horizontal Scaling: + Config Server: + Replicas: 3 + Mongos: + Replicas: 2 + Shard: + Replicas: 3 + Shards: 2 + Type: HorizontalScaling +Status: + Conditions: + Last Transition Time: 2021-03-02T16:41:11Z + Message: RabbitMQ ops request is horizontally scaling database + Observed Generation: 1 + Reason: HorizontalScaling + Status: True + Type: HorizontalScaling + Last Transition Time: 2021-03-02T16:42:11Z + Message: Successfully Horizontally Scaled Down Shard Replicas + Observed Generation: 1 + Reason: ScaleDownShardReplicas + Status: True + Type: ScaleDownShardReplicas + Last Transition Time: 2021-03-02T16:42:12Z + Message: Successfully started RabbitMQ load balancer + Observed Generation: 1 + Reason: StartingBalancer + Status: True + Type: StartingBalancer + Last Transition Time: 2021-03-02T16:43:03Z + Message: Successfully Horizontally Scaled Down Shard + Observed Generation: 1 + Reason: ScaleDownShard + Status: True + Type: ScaleDownShard + Last Transition Time: 2021-03-02T16:43:24Z + Message: Successfully Horizontally Scaled Down ConfigServer + Observed Generation: 1 + Reason: ScaleDownConfigServer + Status: True + Type: ScaleDownConfigServer + Last Transition Time: 2021-03-02T16:43:34Z + Message: Successfully Horizontally Scaled Mongos + Observed Generation: 1 + Reason: ScaleMongos + Status: True + Type: ScaleMongos + Last Transition Time: 2021-03-02T16:43:34Z + Message: Successfully Horizontally Scaled RabbitMQ + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal PauseDatabase 6m29s KubeDB Ops-manager operator Pausing RabbitMQ demo/mg-sharding + Normal PauseDatabase 6m29s KubeDB Ops-manager operator Successfully paused RabbitMQ demo/mg-sharding + Normal ScaleDownShardReplicas 5m29s KubeDB Ops-manager operator Successfully Horizontally Scaled Down Shard Replicas + Normal StartingBalancer 5m29s KubeDB Ops-manager operator Starting Balancer + Normal StartingBalancer 5m28s KubeDB Ops-manager operator Successfully Started Balancer + Normal ScaleDownShard 4m37s KubeDB Ops-manager operator Successfully Horizontally Scaled Down Shard + Normal ScaleDownConfigServer 4m16s KubeDB Ops-manager operator Successfully Horizontally Scaled Down ConfigServer + Normal ScaleMongos 4m6s 
KubeDB Ops-manager operator Successfully Horizontally Scaled Mongos + Normal ResumeDatabase 4m6s KubeDB Ops-manager operator Resuming RabbitMQ demo/mg-sharding + Normal ResumeDatabase 4m6s KubeDB Ops-manager operator Successfully resumed RabbitMQ demo/mg-sharding + Normal Successful 4m6s KubeDB Ops-manager operator Successfully Horizontally Scaled Database +``` + +##### Verify Number of Shard and Shard Replicas + +Now, we are going to verify the number of shards this database has from the RabbitMQ object, number of statefulsets it has, + +```bash +$ kubectl get RabbitMQ -n demo mg-sharding -o json | jq '.spec.shardTopology.shard.shards' +2 + +$ kubectl get sts -n demo +NAME READY AGE +mg-sharding-configsvr 3/3 77m +mg-sharding-mongos 2/2 75m +mg-sharding-shard0 3/3 77m +mg-sharding-shard1 3/3 77m +``` + +Now let's connect to a mongos instance and run a RabbitMQ internal command to check the number of shards, +```bash +$ kubectl exec -n demo mg-sharding-mongos-0 -- mongo admin -u root -p xBC-EwMFivFCgUlK --eval "sh.status()" --quiet +--- Sharding Status --- + sharding version: { + "_id" : 1, + "minCompatibleVersion" : 5, + "currentVersion" : 6, + "clusterId" : ObjectId("603e5a4bec470e6b4197e10b") + } + shards: + { "_id" : "shard0", "host" : "shard0/mg-sharding-shard0-0.mg-sharding-shard0-pods.demo.svc.cluster.local:27017,mg-sharding-shard0-1.mg-sharding-shard0-pods.demo.svc.cluster.local:27017,mg-sharding-shard0-2.mg-sharding-shard0-pods.demo.svc.cluster.local:27017", "state" : 1 } + { "_id" : "shard1", "host" : "shard1/mg-sharding-shard1-0.mg-sharding-shard1-pods.demo.svc.cluster.local:27017,mg-sharding-shard1-1.mg-sharding-shard1-pods.demo.svc.cluster.local:27017,mg-sharding-shard1-2.mg-sharding-shard1-pods.demo.svc.cluster.local:27017", "state" : 1 } + active mongoses: + "4.4.26" : 2 + autosplit: + Currently enabled: yes + balancer: + Currently enabled: yes + Currently running: no + Failed balancer rounds in last 5 attempts: 2 + Last reported error: Couldn't get a connection within the time limit + Time of Reported error: Tue Mar 02 2021 16:17:53 GMT+0000 (UTC) + Migration Results for the last 24 hours: + No recent migrations + databases: + { "_id" : "config", "primary" : "config", "partitioned" : true } + config.system.sessions + shard key: { "_id" : 1 } + unique: false + balancing: true + chunks: + shard0 1 + { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard0 Timestamp(1, 0) +``` + +From all the above outputs we can see that the number of shards are `2`. 
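+
+Since the scale-down removed the `mg-sharding-shard2` StatefulSet entirely, you may also want to confirm that no shard2 resources are left behind. A quick sketch; whether PVCs are retained can depend on your storage class and termination policy, so treat this as a hint rather than a guarantee:
+
+```bash
+# Neither list should contain shard2 entries anymore.
+$ kubectl get sts,pvc -n demo | grep shard2 || echo "no shard2 resources left"
+```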
+ +Now, we are going to verify the number of replicas each shard has from the RabbitMQ object, number of pods the statefulset have, + +```bash +$ kubectl get RabbitMQ -n demo mg-sharding -o json | jq '.spec.shardTopology.shard.replicas' +3 + +$ kubectl get sts -n demo mg-sharding-shard0 -o json | jq '.spec.replicas' +3 +``` + +Now let's connect to a shard instance and run a RabbitMQ internal command to check the number of replicas, +```bash +$ kubectl exec -n demo mg-sharding-shard0-0 -- mongo admin -u root -p xBC-EwMFivFCgUlK --eval "db.adminCommand( { replSetGetStatus : 1 } ).members" --quiet +[ + { + "_id" : 0, + "name" : "mg-sharding-shard0-0.mg-sharding-shard0-pods.demo.svc.cluster.local:27017", + "health" : 1, + "state" : 2, + "stateStr" : "SECONDARY", + "uptime" : 2096, + "optime" : { + "ts" : Timestamp(1614703771, 1), + "t" : NumberLong(2) + }, + "optimeDate" : ISODate("2021-03-02T16:49:31Z"), + "syncingTo" : "mg-sharding-shard0-2.mg-sharding-shard0-pods.demo.svc.cluster.local:27017", + "syncSourceHost" : "mg-sharding-shard0-2.mg-sharding-shard0-pods.demo.svc.cluster.local:27017", + "syncSourceId" : 2, + "infoMessage" : "", + "configVersion" : 5, + "self" : true, + "lastHeartbeatMessage" : "" + }, + { + "_id" : 1, + "name" : "mg-sharding-shard0-1.mg-sharding-shard0-pods.demo.svc.cluster.local:27017", + "health" : 1, + "state" : 1, + "stateStr" : "PRIMARY", + "uptime" : 2065, + "optime" : { + "ts" : Timestamp(1614703771, 1), + "t" : NumberLong(2) + }, + "optimeDurable" : { + "ts" : Timestamp(1614703771, 1), + "t" : NumberLong(2) + }, + "optimeDate" : ISODate("2021-03-02T16:49:31Z"), + "optimeDurableDate" : ISODate("2021-03-02T16:49:31Z"), + "lastHeartbeat" : ISODate("2021-03-02T16:49:39.092Z"), + "lastHeartbeatRecv" : ISODate("2021-03-02T16:49:40.074Z"), + "pingMs" : NumberLong(18), + "lastHeartbeatMessage" : "", + "syncingTo" : "", + "syncSourceHost" : "", + "syncSourceId" : -1, + "infoMessage" : "", + "electionTime" : Timestamp(1614701678, 2), + "electionDate" : ISODate("2021-03-02T16:14:38Z"), + "configVersion" : 5 + }, + { + "_id" : 2, + "name" : "mg-sharding-shard0-2.mg-sharding-shard0-pods.demo.svc.cluster.local:27017", + "health" : 1, + "state" : 2, + "stateStr" : "SECONDARY", + "uptime" : 2065, + "optime" : { + "ts" : Timestamp(1614703771, 1), + "t" : NumberLong(2) + }, + "optimeDurable" : { + "ts" : Timestamp(1614703771, 1), + "t" : NumberLong(2) + }, + "optimeDate" : ISODate("2021-03-02T16:49:31Z"), + "optimeDurableDate" : ISODate("2021-03-02T16:49:31Z"), + "lastHeartbeat" : ISODate("2021-03-02T16:49:38.712Z"), + "lastHeartbeatRecv" : ISODate("2021-03-02T16:49:39.885Z"), + "pingMs" : NumberLong(4), + "lastHeartbeatMessage" : "", + "syncingTo" : "mg-sharding-shard0-1.mg-sharding-shard0-pods.demo.svc.cluster.local:27017", + "syncSourceHost" : "mg-sharding-shard0-1.mg-sharding-shard0-pods.demo.svc.cluster.local:27017", + "syncSourceId" : 1, + "infoMessage" : "", + "configVersion" : 5 + } +] +``` + +From all the above outputs we can see that the replicas of each shard has is `3`. 
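+
+Instead of querying each StatefulSet one by one, you can also list the desired and ready replica counts of every StatefulSet of this database in one shot. A minimal sketch, assuming the standard `app.kubernetes.io/instance` label that KubeDB sets on the objects it manages; verify the labels in your cluster first:
+
+```bash
+# One-line overview of all StatefulSets belonging to mg-sharding.
+$ kubectl get sts -n demo -l app.kubernetes.io/instance=mg-sharding \
+    -o custom-columns=NAME:.metadata.name,DESIRED:.spec.replicas,READY:.status.readyReplicas
+```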
+ +##### Verify Number of ConfigServer Replicas + +Now, we are going to verify the number of replicas this database has from the RabbitMQ object, number of pods the statefulset have, + +```bash +$ kubectl get RabbitMQ -n demo mg-sharding -o json | jq '.spec.shardTopology.configServer.replicas' +3 + +$ kubectl get sts -n demo mg-sharding-configsvr -o json | jq '.spec.replicas' +3 +``` + +Now let's connect to a RabbitMQ instance and run a RabbitMQ internal command to check the number of replicas, +```bash +$ kubectl exec -n demo mg-sharding-configsvr-0 -- mongo admin -u root -p xBC-EwMFivFCgUlK --eval "db.adminCommand( { replSetGetStatus : 1 } ).members" --quiet +[ + { + "_id" : 0, + "name" : "mg-sharding-configsvr-0.mg-sharding-configsvr-pods.demo.svc.cluster.local:27017", + "health" : 1, + "state" : 2, + "stateStr" : "SECONDARY", + "uptime" : 2345, + "optime" : { + "ts" : Timestamp(1614703841, 1), + "t" : NumberLong(2) + }, + "optimeDate" : ISODate("2021-03-02T16:50:41Z"), + "syncingTo" : "mg-sharding-configsvr-1.mg-sharding-configsvr-pods.demo.svc.cluster.local:27017", + "syncSourceHost" : "mg-sharding-configsvr-1.mg-sharding-configsvr-pods.demo.svc.cluster.local:27017", + "syncSourceId" : 1, + "infoMessage" : "", + "configVersion" : 5, + "self" : true, + "lastHeartbeatMessage" : "" + }, + { + "_id" : 1, + "name" : "mg-sharding-configsvr-1.mg-sharding-configsvr-pods.demo.svc.cluster.local:27017", + "health" : 1, + "state" : 1, + "stateStr" : "PRIMARY", + "uptime" : 2329, + "optime" : { + "ts" : Timestamp(1614703841, 1), + "t" : NumberLong(2) + }, + "optimeDurable" : { + "ts" : Timestamp(1614703841, 1), + "t" : NumberLong(2) + }, + "optimeDate" : ISODate("2021-03-02T16:50:41Z"), + "optimeDurableDate" : ISODate("2021-03-02T16:50:41Z"), + "lastHeartbeat" : ISODate("2021-03-02T16:50:45.874Z"), + "lastHeartbeatRecv" : ISODate("2021-03-02T16:50:44.194Z"), + "pingMs" : NumberLong(0), + "lastHeartbeatMessage" : "", + "syncingTo" : "", + "syncSourceHost" : "", + "syncSourceId" : -1, + "infoMessage" : "", + "electionTime" : Timestamp(1614701497, 2), + "electionDate" : ISODate("2021-03-02T16:11:37Z"), + "configVersion" : 5 + }, + { + "_id" : 2, + "name" : "mg-sharding-configsvr-2.mg-sharding-configsvr-pods.demo.svc.cluster.local:27017", + "health" : 1, + "state" : 2, + "stateStr" : "SECONDARY", + "uptime" : 2329, + "optime" : { + "ts" : Timestamp(1614703841, 1), + "t" : NumberLong(2) + }, + "optimeDurable" : { + "ts" : Timestamp(1614703841, 1), + "t" : NumberLong(2) + }, + "optimeDate" : ISODate("2021-03-02T16:50:41Z"), + "optimeDurableDate" : ISODate("2021-03-02T16:50:41Z"), + "lastHeartbeat" : ISODate("2021-03-02T16:50:45.778Z"), + "lastHeartbeatRecv" : ISODate("2021-03-02T16:50:46.091Z"), + "pingMs" : NumberLong(1), + "lastHeartbeatMessage" : "", + "syncingTo" : "mg-sharding-configsvr-1.mg-sharding-configsvr-pods.demo.svc.cluster.local:27017", + "syncSourceHost" : "mg-sharding-configsvr-1.mg-sharding-configsvr-pods.demo.svc.cluster.local:27017", + "syncSourceId" : 1, + "infoMessage" : "", + "configVersion" : 5 + } +] +``` + +From all the above outputs we can see that the replicas of the configServer is `3`. That means we have successfully scaled down the replicas of the RabbitMQ configServer replicas. 
+
+##### Verify Number of Mongos Replicas
+
+Now, we are going to verify the number of replicas this database has from the RabbitMQ object and the number of pods the StatefulSet has,
+
+```bash
+$ kubectl get RabbitMQ -n demo mg-sharding -o json | jq '.spec.shardTopology.mongos.replicas'
+2
+
+$ kubectl get sts -n demo mg-sharding-mongos -o json | jq '.spec.replicas'
+2
+```
+
+Now let's connect to a RabbitMQ instance and run a RabbitMQ internal command to check the number of replicas,
+
+```bash
+$ kubectl exec -n demo mg-sharding-mongos-0 -- mongo admin -u root -p xBC-EwMFivFCgUlK --eval "sh.status()" --quiet
+--- Sharding Status ---
+  sharding version: {
+    "_id" : 1,
+    "minCompatibleVersion" : 5,
+    "currentVersion" : 6,
+    "clusterId" : ObjectId("603e5a4bec470e6b4197e10b")
+  }
+  shards:
+    {  "_id" : "shard0",  "host" : "shard0/mg-sharding-shard0-0.mg-sharding-shard0-pods.demo.svc.cluster.local:27017,mg-sharding-shard0-1.mg-sharding-shard0-pods.demo.svc.cluster.local:27017,mg-sharding-shard0-2.mg-sharding-shard0-pods.demo.svc.cluster.local:27017",  "state" : 1 }
+    {  "_id" : "shard1",  "host" : "shard1/mg-sharding-shard1-0.mg-sharding-shard1-pods.demo.svc.cluster.local:27017,mg-sharding-shard1-1.mg-sharding-shard1-pods.demo.svc.cluster.local:27017,mg-sharding-shard1-2.mg-sharding-shard1-pods.demo.svc.cluster.local:27017",  "state" : 1 }
+  active mongoses:
+    "4.4.26" : 2
+  autosplit:
+    Currently enabled: yes
+  balancer:
+    Currently enabled:  yes
+    Currently running:  no
+    Failed balancer rounds in last 5 attempts:  2
+    Last reported error:  Couldn't get a connection within the time limit
+    Time of Reported error:  Tue Mar 02 2021 16:17:53 GMT+0000 (UTC)
+    Migration Results for the last 24 hours:
+      No recent migrations
+  databases:
+    {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
+      config.system.sessions
+        shard key: { "_id" : 1 }
+        unique: false
+        balancing: true
+        chunks:
+          shard0  1
+        { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard0 Timestamp(1, 0)
+```
+
+From all the above outputs, we can see that the mongos now has `2` replicas. That means we have successfully scaled down the replicas of the RabbitMQ mongos.
+
+So, we have successfully scaled down all the components of the RabbitMQ database.
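+
+If you script these verifications, you can also assert the terminal phase of an ops request directly instead of describing it; a small sketch:
+
+```bash
+# Print just the phase of the scale-down request; expect "Successful".
+$ kubectl get RabbitMQopsrequest -n demo mops-hscale-down-shard -o jsonpath='{.status.phase}{"\n"}'
+Successful
+```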
+
+## Cleaning Up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete mg -n demo mg-sharding
+kubectl delete RabbitMQopsrequest -n demo mops-hscale-up-shard mops-hscale-down-shard
+```
\ No newline at end of file
diff --git a/docs/guides/rabbitmq/scaling/vertical-scaling/_index.md b/docs/guides/rabbitmq/scaling/vertical-scaling/_index.md
new file mode 100644
index 0000000000..b14609e8a3
--- /dev/null
+++ b/docs/guides/rabbitmq/scaling/vertical-scaling/_index.md
@@ -0,0 +1,10 @@
+---
+title: Vertical Scaling
+menu:
+  docs_{{ .version }}:
+    identifier: mg-vertical-scaling
+    name: Vertical Scaling
+    parent: mg-scaling
+    weight: 20
+menu_name: docs_{{ .version }}
+---
\ No newline at end of file
diff --git a/docs/guides/rabbitmq/scaling/vertical-scaling/overview.md b/docs/guides/rabbitmq/scaling/vertical-scaling/overview.md
new file mode 100644
index 0000000000..4f137b7fe3
--- /dev/null
+++ b/docs/guides/rabbitmq/scaling/vertical-scaling/overview.md
@@ -0,0 +1,54 @@
+---
+title: RabbitMQ Vertical Scaling Overview
+menu:
+  docs_{{ .version }}:
+    identifier: mg-vertical-scaling-overview
+    name: Overview
+    parent: mg-vertical-scaling
+    weight: 10
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# RabbitMQ Vertical Scaling
+
+This guide will give an overview of how the KubeDB Ops-manager operator updates the resources (for example, CPU and memory) of the `RabbitMQ` database.
+
+## Before You Begin
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [RabbitMQ](/docs/guides/RabbitMQ/concepts/RabbitMQ.md)
+  - [RabbitMQOpsRequest](/docs/guides/RabbitMQ/concepts/opsrequest.md)
+
+## How Vertical Scaling Process Works
+
+The following diagram shows how the KubeDB Ops-manager operator updates the resources of the `RabbitMQ` database. Open the image in a new tab to see the enlarged version.
+
+<figure align="center">
+  Vertical scaling process of RabbitMQ +
Fig: Vertical scaling process of RabbitMQ
+
+ +The vertical scaling process consists of the following steps: + +1. At first, a user creates a `RabbitMQ` Custom Resource (CR). + +2. `KubeDB` Provisioner operator watches the `RabbitMQ` CR. + +3. When the operator finds a `RabbitMQ` CR, it creates required number of `StatefulSets` and related necessary stuff like secrets, services, etc. + +4. Then, in order to update the resources(for example `CPU`, `Memory` etc.) of the `RabbitMQ` database the user creates a `RabbitMQOpsRequest` CR with desired information. + +5. `KubeDB` Ops-manager operator watches the `RabbitMQOpsRequest` CR. + +6. When it finds a `RabbitMQOpsRequest` CR, it halts the `RabbitMQ` object which is referred from the `RabbitMQOpsRequest`. So, the `KubeDB` Provisioner operator doesn't perform any operations on the `RabbitMQ` object during the vertical scaling process. + +7. Then the `KubeDB` Ops-manager operator will update resources of the StatefulSet Pods to reach desired state. + +8. After the successful update of the resources of the StatefulSet's replica, the `KubeDB` Ops-manager operator updates the `RabbitMQ` object to reflect the updated state. + +9. After the successful update of the `RabbitMQ` resources, the `KubeDB` Ops-manager operator resumes the `RabbitMQ` object so that the `KubeDB` Provisioner operator resumes its usual operations. + +In the next docs, we are going to show a step by step guide on updating resources of RabbitMQ database using `RabbitMQOpsRequest` CRD. \ No newline at end of file diff --git a/docs/guides/rabbitmq/scaling/vertical-scaling/replicaset.md b/docs/guides/rabbitmq/scaling/vertical-scaling/replicaset.md new file mode 100644 index 0000000000..ee49e0e59a --- /dev/null +++ b/docs/guides/rabbitmq/scaling/vertical-scaling/replicaset.md @@ -0,0 +1,310 @@ +--- +title: Vertical Scaling RabbitMQ Replicaset +menu: + docs_{{ .version }}: + identifier: mg-vertical-scaling-replicaset + name: Replicaset + parent: mg-vertical-scaling + weight: 30 +menu_name: docs_{{ .version }} +section_menu_id: guides +--- + +> New to KubeDB? Please start [here](/docs/README.md). + +# Vertical Scale RabbitMQ Replicaset + +This guide will show you how to use `KubeDB` Ops-manager operator to update the resources of a RabbitMQ replicaset database. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +- Install `KubeDB` Provisioner and Ops-manager operator in your cluster following the steps [here](/docs/setup/README.md). + +- You should be familiar with the following `KubeDB` concepts: + - [RabbitMQ](/docs/guides/RabbitMQ/concepts/RabbitMQ.md) + - [Replicaset](/docs/guides/RabbitMQ/clustering/replicaset.md) + - [RabbitMQOpsRequest](/docs/guides/RabbitMQ/concepts/opsrequest.md) + - [Vertical Scaling Overview](/docs/guides/RabbitMQ/scaling/vertical-scaling/overview.md) + +To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial. + +```bash +$ kubectl create ns demo +namespace/demo created +``` + +> **Note:** YAML files used in this tutorial are stored in [docs/examples/RabbitMQ](/docs/examples/RabbitMQ) directory of [kubedb/docs](https://github.com/kubedb/docs) repository. + +## Apply Vertical Scaling on Replicaset + +Here, we are going to deploy a `RabbitMQ` replicaset using a supported version by `KubeDB` operator. 
+
+### Prepare RabbitMQ Replicaset Database
+
+Now, we are going to deploy a `RabbitMQ` replicaset database with version `4.4.26`.
+
+### Deploy RabbitMQ replicaset
+
+In this section, we are going to deploy a RabbitMQ replicaset database. Then, in the next section we will update the resources of the database using the `RabbitMQOpsRequest` CRD. Below is the YAML of the `RabbitMQ` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: RabbitMQ
+metadata:
+  name: mg-replicaset
+  namespace: demo
+spec:
+  version: "4.4.26"
+  replicaSet:
+    name: "replicaset"
+  replicas: 3
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+```
+
+Let's create the `RabbitMQ` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/RabbitMQ/scaling/mg-replicaset.yaml
+RabbitMQ.kubedb.com/mg-replicaset created
+```
+
+Now, wait until `mg-replicaset` has status `Ready`, i.e.,
+
+```bash
+$ kubectl get mg -n demo
+NAME            VERSION   STATUS   AGE
+mg-replicaset   4.4.26    Ready    3m46s
+```
+
+Let's check the Pod containers resources,
+
+```bash
+$ kubectl get pod -n demo mg-replicaset-0 -o json | jq '.spec.containers[].resources'
+{
+  "limits": {
+    "cpu": "500m",
+    "memory": "1Gi"
+  },
+  "requests": {
+    "cpu": "500m",
+    "memory": "1Gi"
+  }
+}
+```
+
+You can see the Pod has the default resources which are assigned by the KubeDB operator.
+
+We are now ready to apply the `RabbitMQOpsRequest` CR to update the resources of this database.
+
+### Vertical Scaling
+
+Here, we are going to update the resources of the replicaset database to meet the desired resources after scaling.
+
+#### Create RabbitMQOpsRequest
+
+In order to update the resources of the database, we have to create a `RabbitMQOpsRequest` CR with our desired resources. Below is the YAML of the `RabbitMQOpsRequest` CR that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: RabbitMQOpsRequest
+metadata:
+  name: mops-vscale-replicaset
+  namespace: demo
+spec:
+  type: VerticalScaling
+  databaseRef:
+    name: mg-replicaset
+  verticalScaling:
+    replicaSet:
+      resources:
+        requests:
+          memory: "1.2Gi"
+          cpu: "0.6"
+        limits:
+          memory: "1.2Gi"
+          cpu: "0.6"
+  readinessCriteria:
+    oplogMaxLagSeconds: 20
+    objectsCountDiffPercentage: 10
+  timeout: 5m
+  apply: IfReady
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing the vertical scaling operation on the `mg-replicaset` database.
+- `spec.type` specifies that we are performing `VerticalScaling` on our database.
+- `spec.verticalScaling.replicaSet` specifies the desired resources after scaling.
+- `spec.verticalScaling.arbiter` could also be specified in a similar fashion to get the desired resources for the arbiter pod.
+- Have a look [here](/docs/guides/RabbitMQ/concepts/opsrequest.md#specreadinesscriteria) on the respective sections to understand the `readinessCriteria`, `timeout` & `apply` fields.
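+
+Before creating it, you can optionally validate the manifest with a server-side dry run. This is a small sketch, assuming you have saved the YAML above locally as `mops-vscale-replicaset.yaml` (a hypothetical local file name):
+
+```bash
+# Server-side dry run: the object is validated against the CRD schema and
+# admission webhooks, but nothing is persisted in the cluster.
+$ kubectl apply --dry-run=server -f mops-vscale-replicaset.yaml
+```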
+ +Let's create the `RabbitMQOpsRequest` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/RabbitMQ/scaling/vertical-scaling/mops-vscale-replicaset.yaml +RabbitMQopsrequest.ops.kubedb.com/mops-vscale-replicaset created +``` + +#### Verify RabbitMQ Replicaset resources updated successfully + +If everything goes well, `KubeDB` Ops-manager operator will update the resources of `RabbitMQ` object and related `StatefulSets` and `Pods`. + +Let's wait for `RabbitMQOpsRequest` to be `Successful`. Run the following command to watch `RabbitMQOpsRequest` CR, + +```bash +$ kubectl get RabbitMQopsrequest -n demo +Every 2.0s: kubectl get RabbitMQopsrequest -n demo +NAME TYPE STATUS AGE +mops-vscale-replicaset VerticalScaling Successful 3m56s +``` + +We can see from the above output that the `RabbitMQOpsRequest` has succeeded. If we describe the `RabbitMQOpsRequest` we will get an overview of the steps that were followed to scale the database. + +```bash +$ kubectl describe RabbitMQopsrequest -n demo mops-vscale-replicaset +Name: mops-vscale-replicaset +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: RabbitMQOpsRequest +Metadata: + Creation Timestamp: 2022-10-26T10:41:56Z + Generation: 1 + Managed Fields: + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:annotations: + .: + f:kubectl.kubernetes.io/last-applied-configuration: + f:spec: + .: + f:apply: + f:databaseRef: + f:readinessCriteria: + .: + f:objectsCountDiffPercentage: + f:oplogMaxLagSeconds: + f:timeout: + f:type: + f:verticalScaling: + .: + f:replicaSet: + .: + f:limits: + .: + f:cpu: + f:memory: + f:requests: + .: + f:cpu: + f:memory: + Manager: kubectl-client-side-apply + Operation: Update + Time: 2022-10-26T10:41:56Z + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:status: + .: + f:conditions: + f:observedGeneration: + f:phase: + Manager: kubedb-ops-manager + Operation: Update + Subresource: status + Time: 2022-10-26T10:44:33Z + Resource Version: 611468 + UID: 474053a7-90a8-49fd-9b27-c9bf7b4660e7 +Spec: + Apply: IfReady + Database Ref: + Name: mg-replicaset + Readiness Criteria: + Objects Count Diff Percentage: 10 + Oplog Max Lag Seconds: 20 + Timeout: 5m + Type: VerticalScaling + Vertical Scaling: + Replica Set: + Limits: + Cpu: 0.6 + Memory: 1.2Gi + Requests: + Cpu: 0.6 + Memory: 1.2Gi +Status: + Conditions: + Last Transition Time: 2022-10-26T10:43:21Z + Message: RabbitMQ ops request is vertically scaling database + Observed Generation: 1 + Reason: VerticalScaling + Status: True + Type: VerticalScaling + Last Transition Time: 2022-10-26T10:44:33Z + Message: Successfully Vertically Scaled Replicaset Resources + Observed Generation: 1 + Reason: UpdateReplicaSetResources + Status: True + Type: UpdateReplicaSetResources + Last Transition Time: 2022-10-26T10:44:33Z + Message: Successfully Vertically Scaled Database + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal PauseDatabase 82s KubeDB Ops-manager Operator Pausing RabbitMQ demo/mg-replicaset + Normal PauseDatabase 82s KubeDB Ops-manager Operator Successfully paused RabbitMQ demo/mg-replicaset + Normal Starting 82s KubeDB Ops-manager Operator Updating Resources of StatefulSet: mg-replicaset + Normal UpdateReplicaSetResources 82s KubeDB Ops-manager 
Operator  Successfully updated replicaset Resources
+  Normal  Starting                   82s   KubeDB Ops-manager Operator  Updating Resources of StatefulSet: mg-replicaset
+  Normal  UpdateReplicaSetResources  82s   KubeDB Ops-manager Operator  Successfully updated replicaset Resources
+  Normal  UpdateReplicaSetResources  10s   KubeDB Ops-manager Operator  Successfully Vertically Scaled Replicaset Resources
+  Normal  ResumeDatabase             10s   KubeDB Ops-manager Operator  Resuming RabbitMQ demo/mg-replicaset
+  Normal  ResumeDatabase             10s   KubeDB Ops-manager Operator  Successfully resumed RabbitMQ demo/mg-replicaset
+  Normal  Successful                 10s   KubeDB Ops-manager Operator  Successfully Vertically Scaled Database
+
+```
+
+Now, we are going to verify from one of the Pod YAMLs whether the resources of the replicaset database have been updated to meet the desired state. Let's check,
+
+```bash
+$ kubectl get pod -n demo mg-replicaset-0 -o json | jq '.spec.containers[].resources'
+{
+  "limits": {
+    "cpu": "600m",
+    "memory": "1288490188800m"
+  },
+  "requests": {
+    "cpu": "600m",
+    "memory": "1288490188800m"
+  }
+}
+```
+
+The above output verifies that we have successfully scaled up the resources of the RabbitMQ replicaset database.
+
+## Cleaning Up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete mg -n demo mg-replicaset
+kubectl delete RabbitMQopsrequest -n demo mops-vscale-replicaset
+```
\ No newline at end of file
diff --git a/docs/guides/rabbitmq/scaling/vertical-scaling/sharding.md b/docs/guides/rabbitmq/scaling/vertical-scaling/sharding.md
new file mode 100644
index 0000000000..c5c421d786
--- /dev/null
+++ b/docs/guides/rabbitmq/scaling/vertical-scaling/sharding.md
@@ -0,0 +1,438 @@
+---
+title: Vertical Scaling Sharded RabbitMQ Cluster
+menu:
+  docs_{{ .version }}:
+    identifier: mg-vertical-scaling-shard
+    name: Sharding
+    parent: mg-vertical-scaling
+    weight: 40
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Vertical Scale RabbitMQ Sharded Cluster
+
+This guide will show you how to use `KubeDB` Ops-manager operator to update the resources of a RabbitMQ sharded database.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Install `KubeDB` Provisioner and Ops-manager operator in your cluster following the steps [here](/docs/setup/README.md).
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [RabbitMQ](/docs/guides/RabbitMQ/concepts/RabbitMQ.md)
+  - [Replicaset](/docs/guides/RabbitMQ/clustering/replicaset.md)
+  - [RabbitMQOpsRequest](/docs/guides/RabbitMQ/concepts/opsrequest.md)
+  - [Vertical Scaling Overview](/docs/guides/RabbitMQ/scaling/vertical-scaling/overview.md)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+> **Note:** YAML files used in this tutorial are stored in [docs/examples/RabbitMQ](/docs/examples/RabbitMQ) directory of [kubedb/docs](https://github.com/kubedb/docs) repository.
+
+## Apply Vertical Scaling on Sharded Database
+
+Here, we are going to deploy a `RabbitMQ` sharded database using a supported version by `KubeDB` operator. Then we are going to apply vertical scaling on it.
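+
+> The manifests in this guide reference a storage class named `standard`. Before deploying, you may want to confirm that such a class exists in your cluster; this is an optional sanity check, not a required step:
+
+```bash
+# List the storage classes available in the cluster; the YAMLs below
+# assume one named "standard" is present.
+$ kubectl get storageclass
+```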
+
+### Prepare RabbitMQ Sharded Database
+
+Now, we are going to deploy a `RabbitMQ` sharded database with version `4.4.26`.
+
+### Deploy RabbitMQ Sharded Database
+
+In this section, we are going to deploy a RabbitMQ sharded database. Then, in the next sections we will update the resources of various components (mongos, shard, configserver etc.) of the database using the `RabbitMQOpsRequest` CRD. Below is the YAML of the `RabbitMQ` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: RabbitMQ
+metadata:
+  name: mg-sharding
+  namespace: demo
+spec:
+  version: 4.4.26
+  shardTopology:
+    configServer:
+      replicas: 3
+      storage:
+        resources:
+          requests:
+            storage: 1Gi
+        storageClassName: standard
+    mongos:
+      replicas: 2
+    shard:
+      replicas: 3
+      shards: 2
+      storage:
+        resources:
+          requests:
+            storage: 1Gi
+        storageClassName: standard
+```
+
+Let's create the `RabbitMQ` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/RabbitMQ/scaling/mg-shard.yaml
+RabbitMQ.kubedb.com/mg-sharding created
+```
+
+Now, wait until `mg-sharding` has status `Ready`, i.e.,
+
+```bash
+$ kubectl get mg -n demo
+NAME          VERSION   STATUS   AGE
+mg-sharding   4.4.26    Ready    8m51s
+```
+
+Let's check the Pod containers resources of the various components (mongos, shard, configserver etc.) of the database,
+
+```bash
+$ kubectl get pod -n demo mg-sharding-mongos-0 -o json | jq '.spec.containers[].resources'
+{
+  "limits": {
+    "cpu": "500m",
+    "memory": "1Gi"
+  },
+  "requests": {
+    "cpu": "500m",
+    "memory": "1Gi"
+  }
+}
+
+$ kubectl get pod -n demo mg-sharding-configsvr-0 -o json | jq '.spec.containers[].resources'
+{
+  "limits": {
+    "cpu": "500m",
+    "memory": "1Gi"
+  },
+  "requests": {
+    "cpu": "500m",
+    "memory": "1Gi"
+  }
+}
+
+$ kubectl get pod -n demo mg-sharding-shard0-0 -o json | jq '.spec.containers[].resources'
+{
+  "limits": {
+    "cpu": "500m",
+    "memory": "1Gi"
+  },
+  "requests": {
+    "cpu": "500m",
+    "memory": "1Gi"
+  }
+}
+```
+
+You can see all the Pods of mongos, configserver and shard have the default resources which are assigned by the KubeDB operator.
+
+We are now ready to apply the `RabbitMQOpsRequest` CR to update the resources of the mongos, configserver and shard nodes of this database.
+
+## Vertical Scaling of Shard
+
+Here, we are going to update the resources of the shard of the database to meet the desired resources after scaling.
+
+#### Create RabbitMQOpsRequest for shard
+
+In order to update the resources of the shard nodes, we have to create a `RabbitMQOpsRequest` CR with our desired resources. Below is the YAML of the `RabbitMQOpsRequest` CR that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: RabbitMQOpsRequest
+metadata:
+  name: mops-vscale-shard
+  namespace: demo
+spec:
+  type: VerticalScaling
+  databaseRef:
+    name: mg-sharding
+  verticalScaling:
+    shard:
+      resources:
+        requests:
+          memory: "1100Mi"
+          cpu: "0.55"
+        limits:
+          memory: "1100Mi"
+          cpu: "0.55"
+    configServer:
+      resources:
+        requests:
+          memory: "1100Mi"
+          cpu: "0.55"
+        limits:
+          memory: "1100Mi"
+          cpu: "0.55"
+    mongos:
+      resources:
+        requests:
+          memory: "1100Mi"
+          cpu: "0.55"
+        limits:
+          memory: "1100Mi"
+          cpu: "0.55"
+  readinessCriteria:
+    oplogMaxLagSeconds: 20
+    objectsCountDiffPercentage: 10
+  timeout: 5m
+  apply: IfReady
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing the vertical scaling operation on the `mg-sharding` database.
+- `spec.type` specifies that we are performing `VerticalScaling` on our database.
+- `spec.verticalScaling.shard` specifies the desired resources after scaling for the shard nodes.
+- `spec.verticalScaling.configServer` specifies the desired resources after scaling for the configServer nodes.
+- `spec.verticalScaling.mongos` specifies the desired resources after scaling for the mongos nodes.
+- `spec.verticalScaling.arbiter` could also be specified in a similar fashion to get the desired resources for the arbiter pod.
+- Have a look [here](/docs/guides/RabbitMQ/concepts/opsrequest.md#specreadinesscriteria) on the respective sections to understand the `readinessCriteria`, `timeout` & `apply` fields.
+
+> **Note:** If you don't want to scale all the components together, you can specify only the components (shard, configServer and mongos) that you want to scale.
+
+Let's create the `RabbitMQOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/RabbitMQ/scaling/vertical-scaling/mops-vscale-shard.yaml
+RabbitMQopsrequest.ops.kubedb.com/mops-vscale-shard created
+```
+
+#### Verify RabbitMQ Shard resources updated successfully
+
+If everything goes well, `KubeDB` Ops-manager operator will update the resources of the `RabbitMQ` object and the related `StatefulSets` and `Pods` of the shard nodes.
+
+Let's wait for `RabbitMQOpsRequest` to be `Successful`. Run the following command to watch the `RabbitMQOpsRequest` CR,
+
+```bash
+$ kubectl get RabbitMQopsrequest -n demo
+Every 2.0s: kubectl get RabbitMQopsrequest -n demo
+NAME                TYPE              STATUS       AGE
+mops-vscale-shard   VerticalScaling   Successful   8m21s
+```
+
+We can see from the above output that the `RabbitMQOpsRequest` has succeeded. If we describe the `RabbitMQOpsRequest` we will get an overview of the steps that were followed to scale the database.
+ +```bash +$ kubectl describe RabbitMQopsrequest -n demo mops-vscale-shard +Name: mops-vscale-shard +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: RabbitMQOpsRequest +Metadata: + Creation Timestamp: 2022-10-26T10:45:56Z + Generation: 1 + Managed Fields: + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:annotations: + .: + f:kubectl.kubernetes.io/last-applied-configuration: + f:spec: + .: + f:apply: + f:databaseRef: + f:readinessCriteria: + .: + f:objectsCountDiffPercentage: + f:oplogMaxLagSeconds: + f:timeout: + f:type: + f:verticalScaling: + .: + f:configServer: + .: + f:limits: + .: + f:cpu: + f:memory: + f:requests: + .: + f:cpu: + f:memory: + f:mongos: + .: + f:limits: + .: + f:cpu: + f:memory: + f:requests: + .: + f:cpu: + f:memory: + f:shard: + .: + f:limits: + .: + f:cpu: + f:memory: + f:requests: + .: + f:cpu: + f:memory: + Manager: kubectl-client-side-apply + Operation: Update + Time: 2022-10-26T10:45:56Z + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:status: + .: + f:conditions: + f:observedGeneration: + f:phase: + Manager: kubedb-ops-manager + Operation: Update + Subresource: status + Time: 2022-10-26T10:52:28Z + Resource Version: 613274 + UID: a186cc72-3629-4034-bbf8-988839f6ec23 +Spec: + Apply: IfReady + Database Ref: + Name: mg-sharding + Readiness Criteria: + Objects Count Diff Percentage: 10 + Oplog Max Lag Seconds: 20 + Timeout: 5m + Type: VerticalScaling + Vertical Scaling: + Config Server: + Limits: + Cpu: 0.55 + Memory: 1100Mi + Requests: + Cpu: 0.55 + Memory: 1100Mi + Mongos: + Limits: + Cpu: 0.55 + Memory: 1100Mi + Requests: + Cpu: 0.55 + Memory: 1100Mi + Shard: + Limits: + Cpu: 0.55 + Memory: 1100Mi + Requests: + Cpu: 0.55 + Memory: 1100Mi +Status: + Conditions: + Last Transition Time: 2022-10-26T10:48:06Z + Message: RabbitMQ ops request is vertically scaling database + Observed Generation: 1 + Reason: VerticalScaling + Status: True + Type: VerticalScaling + Last Transition Time: 2022-10-26T10:49:37Z + Message: Successfully Vertically Scaled ConfigServer Resources + Observed Generation: 1 + Reason: UpdateConfigServerResources + Status: True + Type: UpdateConfigServerResources + Last Transition Time: 2022-10-26T10:50:07Z + Message: Successfully Vertically Scaled Mongos Resources + Observed Generation: 1 + Reason: UpdateMongosResources + Status: True + Type: UpdateMongosResources + Last Transition Time: 2022-10-26T10:52:28Z + Message: Successfully Vertically Scaled Shard Resources + Observed Generation: 1 + Reason: UpdateShardResources + Status: True + Type: UpdateShardResources + Last Transition Time: 2022-10-26T10:52:28Z + Message: Successfully Vertically Scaled Database + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal PauseDatabase 4m51s KubeDB Ops-manager Operator Successfully paused RabbitMQ demo/mg-sharding + Normal Starting 4m51s KubeDB Ops-manager Operator Updating Resources of StatefulSet: mg-sharding-configsvr + Normal UpdateConfigServerResources 4m51s KubeDB Ops-manager Operator Successfully updated configServer Resources + Normal Starting 4m51s KubeDB Ops-manager Operator Updating Resources of StatefulSet: mg-sharding-configsvr + Normal UpdateConfigServerResources 4m51s KubeDB Ops-manager Operator Successfully updated configServer Resources + Normal PauseDatabase 4m51s KubeDB Ops-manager 
Operator  Pausing RabbitMQ demo/mg-sharding
+  Normal  UpdateConfigServerResources  3m20s  KubeDB Ops-manager Operator  Successfully Vertically Scaled ConfigServer Resources
+  Normal  Starting                     3m20s  KubeDB Ops-manager Operator  Updating Resources of StatefulSet: mg-sharding-mongos
+  Normal  UpdateMongosResources        3m20s  KubeDB Ops-manager Operator  Successfully updated Mongos Resources
+  Normal  UpdateShardResources         2m50s  KubeDB Ops-manager Operator  Successfully updated Shard Resources
+  Normal  Starting                     2m50s  KubeDB Ops-manager Operator  Updating Resources of StatefulSet: mg-sharding-shard0
+  Normal  Starting                     2m50s  KubeDB Ops-manager Operator  Updating Resources of StatefulSet: mg-sharding-shard1
+  Normal  UpdateMongosResources        2m50s  KubeDB Ops-manager Operator  Successfully Vertically Scaled Mongos Resources
+  Normal  UpdateShardResources         29s    KubeDB Ops-manager Operator  Successfully Vertically Scaled Shard Resources
+  Normal  ResumeDatabase               29s    KubeDB Ops-manager Operator  Resuming RabbitMQ demo/mg-sharding
+  Normal  ResumeDatabase               29s    KubeDB Ops-manager Operator  Successfully resumed RabbitMQ demo/mg-sharding
+  Normal  Successful                   29s    KubeDB Ops-manager Operator  Successfully Vertically Scaled Database
+  Normal  UpdateShardResources         28s    KubeDB Ops-manager Operator  Successfully Vertically Scaled Shard Resources
+```
+
+Now, we are going to verify from one of the Pod YAMLs whether the resources of the shard nodes have been updated to meet the desired state. Let's check,
+
+```bash
+$ kubectl get pod -n demo mg-sharding-shard0-0 -o json | jq '.spec.containers[].resources'
+{
+  "limits": {
+    "cpu": "550m",
+    "memory": "1100Mi"
+  },
+  "requests": {
+    "cpu": "550m",
+    "memory": "1100Mi"
+  }
+}
+
+$ kubectl get pod -n demo mg-sharding-configsvr-0 -o json | jq '.spec.containers[].resources'
+{
+  "limits": {
+    "cpu": "550m",
+    "memory": "1100Mi"
+  },
+  "requests": {
+    "cpu": "550m",
+    "memory": "1100Mi"
+  }
+}
+
+$ kubectl get pod -n demo mg-sharding-mongos-0 -o json | jq '.spec.containers[].resources'
+{
+  "limits": {
+    "cpu": "550m",
+    "memory": "1100Mi"
+  },
+  "requests": {
+    "cpu": "550m",
+    "memory": "1100Mi"
+  }
+}
+```
+
+The above output verifies that we have successfully scaled the resources of all components of the RabbitMQ sharded database.
+
+## Cleaning Up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete mg -n demo mg-sharding
+kubectl delete RabbitMQopsrequest -n demo mops-vscale-shard
+```
\ No newline at end of file
diff --git a/docs/guides/rabbitmq/scaling/vertical-scaling/standalone.md b/docs/guides/rabbitmq/scaling/vertical-scaling/standalone.md
new file mode 100644
index 0000000000..c0c0e063c6
--- /dev/null
+++ b/docs/guides/rabbitmq/scaling/vertical-scaling/standalone.md
@@ -0,0 +1,306 @@
+---
+title: Vertical Scaling Standalone RabbitMQ
+menu:
+  docs_{{ .version }}:
+    identifier: mg-vertical-scaling-standalone
+    name: Standalone
+    parent: mg-vertical-scaling
+    weight: 20
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Vertical Scale RabbitMQ Standalone
+
+This guide will show you how to use `KubeDB` Ops-manager operator to update the resources of a RabbitMQ standalone database.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Install `KubeDB` Provisioner and Ops-manager operator in your cluster following the steps [here](/docs/setup/README.md).
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [RabbitMQ](/docs/guides/RabbitMQ/concepts/RabbitMQ.md)
+  - [RabbitMQOpsRequest](/docs/guides/RabbitMQ/concepts/opsrequest.md)
+  - [Vertical Scaling Overview](/docs/guides/RabbitMQ/scaling/vertical-scaling/overview.md)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+> **Note:** YAML files used in this tutorial are stored in [docs/examples/RabbitMQ](/docs/examples/RabbitMQ) directory of [kubedb/docs](https://github.com/kubedb/docs) repository.
+
+## Apply Vertical Scaling on Standalone
+
+Here, we are going to deploy a `RabbitMQ` standalone using a supported version by `KubeDB` operator. Then we are going to apply vertical scaling on it.
+
+### Prepare RabbitMQ Standalone Database
+
+Now, we are going to deploy a `RabbitMQ` standalone database with version `4.4.26`.
+
+### Deploy RabbitMQ standalone
+
+In this section, we are going to deploy a RabbitMQ standalone database. Then, in the next section we will update the resources of the database using the `RabbitMQOpsRequest` CRD. Below is the YAML of the `RabbitMQ` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: RabbitMQ
+metadata:
+  name: mg-standalone
+  namespace: demo
+spec:
+  version: "4.4.26"
+  storageType: Durable
+  storage:
+    storageClassName: "standard"
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+```
+
+Let's create the `RabbitMQ` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/RabbitMQ/scaling/mg-standalone.yaml
+RabbitMQ.kubedb.com/mg-standalone created
+```
+
+Now, wait until `mg-standalone` has status `Ready`, i.e.,
+
+```bash
+$ kubectl get mg -n demo
+NAME            VERSION   STATUS   AGE
+mg-standalone   4.4.26    Ready    5m56s
+```
+
+Let's check the Pod containers resources,
+
+```bash
+$ kubectl get pod -n demo mg-standalone-0 -o json | jq '.spec.containers[].resources'
+{
+  "limits": {
+    "cpu": "500m",
+    "memory": "1Gi"
+  },
+  "requests": {
+    "cpu": "500m",
+    "memory": "1Gi"
+  }
+}
+```
+
+You can see the Pod has the default resources which are assigned by the KubeDB operator.
+
+We are now ready to apply the `RabbitMQOpsRequest` CR to update the resources of this database.
+
+### Vertical Scaling
+
+Here, we are going to update the resources of the standalone database to meet the desired resources after scaling.
+
+#### Create RabbitMQOpsRequest
+
+In order to update the resources of the database, we have to create a `RabbitMQOpsRequest` CR with our desired resources. Below is the YAML of the `RabbitMQOpsRequest` CR that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: RabbitMQOpsRequest
+metadata:
+  name: mops-vscale-standalone
+  namespace: demo
+spec:
+  type: VerticalScaling
+  databaseRef:
+    name: mg-standalone
+  verticalScaling:
+    standalone:
+      resources:
+        requests:
+          memory: "2Gi"
+          cpu: "1"
+        limits:
+          memory: "2Gi"
+          cpu: "1"
+  readinessCriteria:
+    oplogMaxLagSeconds: 20
+    objectsCountDiffPercentage: 10
+  timeout: 5m
+  apply: IfReady
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing the vertical scaling operation on the `mg-standalone` database.
+- `spec.type` specifies that we are performing `VerticalScaling` on our database. +- `spec.VerticalScaling.standalone` specifies the desired resources after scaling. +- Have a look [here](/docs/guides/RabbitMQ/concepts/opsrequest.md#specreadinesscriteria) on the respective sections to understand the `readinessCriteria`, `timeout` & `apply` fields. + +Let's create the `RabbitMQOpsRequest` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/RabbitMQ/scaling/vertical-scaling/mops-vscale-standalone.yaml +RabbitMQopsrequest.ops.kubedb.com/mops-vscale-standalone created +``` + +#### Verify RabbitMQ Standalone resources updated successfully + +If everything goes well, `KubeDB` Ops-manager operator will update the resources of `RabbitMQ` object and related `StatefulSets` and `Pods`. + +Let's wait for `RabbitMQOpsRequest` to be `Successful`. Run the following command to watch `RabbitMQOpsRequest` CR, + +```bash +$ kubectl get RabbitMQopsrequest -n demo +Every 2.0s: kubectl get RabbitMQopsrequest -n demo +NAME TYPE STATUS AGE +mops-vscale-standalone VerticalScaling Successful 108s +``` + +We can see from the above output that the `RabbitMQOpsRequest` has succeeded. If we describe the `RabbitMQOpsRequest` we will get an overview of the steps that were followed to scale the database. + +```bash +$ kubectl describe RabbitMQopsrequest -n demo mops-vscale-standalone +Name: mops-vscale-standalone +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: RabbitMQOpsRequest +Metadata: + Creation Timestamp: 2022-10-26T10:54:01Z + Generation: 1 + Managed Fields: + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:annotations: + .: + f:kubectl.kubernetes.io/last-applied-configuration: + f:spec: + .: + f:apply: + f:databaseRef: + f:readinessCriteria: + .: + f:objectsCountDiffPercentage: + f:oplogMaxLagSeconds: + f:timeout: + f:type: + f:verticalScaling: + .: + f:standalone: + .: + f:limits: + .: + f:cpu: + f:memory: + f:requests: + .: + f:cpu: + f:memory: + Manager: kubectl-client-side-apply + Operation: Update + Time: 2022-10-26T10:54:01Z + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:status: + .: + f:conditions: + f:observedGeneration: + f:phase: + Manager: kubedb-ops-manager + Operation: Update + Subresource: status + Time: 2022-10-26T10:54:52Z + Resource Version: 613933 + UID: c3bf9c3d-cf96-49ae-877f-a895e0b1d280 +Spec: + Apply: IfReady + Database Ref: + Name: mg-standalone + Readiness Criteria: + Objects Count Diff Percentage: 10 + Oplog Max Lag Seconds: 20 + Timeout: 5m + Type: VerticalScaling + Vertical Scaling: + Standalone: + Limits: + Cpu: 1 + Memory: 2Gi + Requests: + Cpu: 1 + Memory: 2Gi +Status: + Conditions: + Last Transition Time: 2022-10-26T10:54:21Z + Message: RabbitMQ ops request is vertically scaling database + Observed Generation: 1 + Reason: VerticalScaling + Status: True + Type: VerticalScaling + Last Transition Time: 2022-10-26T10:54:51Z + Message: Successfully Vertically Scaled Standalone Resources + Observed Generation: 1 + Reason: UpdateStandaloneResources + Status: True + Type: UpdateStandaloneResources + Last Transition Time: 2022-10-26T10:54:52Z + Message: Successfully Vertically Scaled Database + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal 
PauseDatabase 34s KubeDB Ops-manager Operator Pausing RabbitMQ demo/mg-standalone + Normal PauseDatabase 34s KubeDB Ops-manager Operator Successfully paused RabbitMQ demo/mg-standalone + Normal Starting 34s KubeDB Ops-manager Operator Updating Resources of StatefulSet: mg-standalone + Normal UpdateStandaloneResources 34s KubeDB Ops-manager Operator Successfully updated standalone Resources + Normal Starting 34s KubeDB Ops-manager Operator Updating Resources of StatefulSet: mg-standalone + Normal UpdateStandaloneResources 34s KubeDB Ops-manager Operator Successfully updated standalone Resources + Normal UpdateStandaloneResources 4s KubeDB Ops-manager Operator Successfully Vertically Scaled Standalone Resources + Normal UpdateStandaloneResources 4s KubeDB Ops-manager Operator Successfully Vertically Scaled Standalone Resources + Normal ResumeDatabase 4s KubeDB Ops-manager Operator Resuming RabbitMQ demo/mg-standalone + Normal ResumeDatabase 3s KubeDB Ops-manager Operator Successfully resumed RabbitMQ demo/mg-standalone + Normal Successful 3s KubeDB Ops-manager Operator Successfully Vertically Scaled Database + +``` + +Now, we are going to verify from the Pod yaml whether the resources of the standalone database has updated to meet up the desired state, Let's check, + +```bash +$ kubectl get pod -n demo mg-standalone-0 -o json | jq '.spec.containers[].resources' +{ + "limits": { + "cpu": "1", + "memory": "2Gi" + }, + "requests": { + "cpu": "1", + "memory": "2Gi" + } +} +``` + +The above output verifies that we have successfully scaled up the resources of the RabbitMQ standalone database. + +## Cleaning Up + +To clean up the Kubernetes resources created by this tutorial, run: + +```bash +kubectl delete mg -n demo mg-standalone +kubectl delete RabbitMQopsrequest -n demo mops-vscale-standalone +``` \ No newline at end of file diff --git a/docs/guides/rabbitmq/tls/_index.md b/docs/guides/rabbitmq/tls/_index.md new file mode 100755 index 0000000000..c4cd263b5b --- /dev/null +++ b/docs/guides/rabbitmq/tls/_index.md @@ -0,0 +1,10 @@ +--- +title: Run RabbitMQ with TLS +menu: + docs_{{ .version }}: + identifier: mg-tls + name: TLS/SSL Encryption + parent: mg-RabbitMQ-guides + weight: 45 +menu_name: docs_{{ .version }} +--- diff --git a/docs/guides/rabbitmq/tls/overview.md b/docs/guides/rabbitmq/tls/overview.md new file mode 100644 index 0000000000..d14677c2d7 --- /dev/null +++ b/docs/guides/rabbitmq/tls/overview.md @@ -0,0 +1,70 @@ +--- +title: RabbitMQ TLS/SSL Encryption Overview +menu: + docs_{{ .version }}: + identifier: mg-tls-overview + name: Overview + parent: mg-tls + weight: 10 +menu_name: docs_{{ .version }} +section_menu_id: guides +--- + +> New to KubeDB? Please start [here](/docs/README.md). + +# RabbitMQ TLS/SSL Encryption + +**Prerequisite :** To configure TLS/SSL in `RabbitMQ`, `KubeDB` uses `cert-manager` to issue certificates. So first you have to make sure that the cluster has `cert-manager` installed. To install `cert-manager` in your cluster following steps [here](https://cert-manager.io/docs/installation/kubernetes/). + +To issue a certificate, the following crd of `cert-manager` is used: + +- `Issuer/ClusterIssuer`: Issuers, and ClusterIssuers represent certificate authorities (CAs) that are able to generate signed certificates by honoring certificate signing requests. All cert-manager certificates require a referenced issuer that is in a ready condition to attempt to honor the request. You can learn more details [here](https://cert-manager.io/docs/concepts/issuer/). 
+ +- `Certificate`: `cert-manager` has the concept of Certificates that define a desired x509 certificate which will be renewed and kept up to date. You can learn more details [here](https://cert-manager.io/docs/concepts/certificate/). + +**RabbitMQ CRD Specification :** + +KubeDB uses following crd fields to enable SSL/TLS encryption in `RabbitMQ`. + +- `spec:` + - `sslMode` + - `tls:` + - `issuerRef` + - `certificates` + - `clusterAuthMode` +Read about the fields in details from [RabbitMQ concept](/docs/guides/RabbitMQ/concepts/RabbitMQ.md), + +When, `sslMode` is set to `requireSSL`, the users must specify the `tls.issuerRef` field. `KubeDB` uses the `issuer` or `clusterIssuer` referenced in the `tls.issuerRef` field, and the certificate specs provided in `tls.certificate` to generate certificate secrets using `Issuer/ClusterIssuers` specification. These certificates secrets including `ca.crt`, `tls.crt` and `tls.key` etc. are used to configure `RabbitMQ` server, exporter etc. respectively. + +## How TLS/SSL configures in RabbitMQ + +The following figure shows how `KubeDB` enterprise used to configure TLS/SSL in RabbitMQ. Open the image in a new tab to see the enlarged version. + +
+Deploy RabbitMQ with TLS/SSL +
Fig: Deploy RabbitMQ with TLS/SSL
+
+ +Deploying RabbitMQ with TLS/SSL configuration process consists of the following steps: + +1. At first, a user creates a `Issuer/ClusterIssuer` cr. + +2. Then the user creates a `RabbitMQ` cr which refers to the `Issuer/ClusterIssuer` cr that the user created in the previous step. + +3. `KubeDB` Provisioner operator watches for the `RabbitMQ` cr. + +4. When it finds one, it creates `Secret`, `Service`, etc. for the `RabbitMQ` database. + +5. `KubeDB` Ops-manager operator watches for `RabbitMQ`(5c), `Issuer/ClusterIssuer`(5b), `Secret` and `Service`(5a). + +6. When it finds all the resources(`RabbitMQ`, `Issuer/ClusterIssuer`, `Secret`, `Service`), it creates `Certificates` by using `tls.issuerRef` and `tls.certificates` field specification from `RabbitMQ` cr. + +7. `cert-manager` watches for certificates. + +8. When it finds one, it creates certificate secrets `tls-secrets`(server, client, exporter secrets etc.) that holds the actual certificate signed by the CA. + +9. `KubeDB` Provisioner operator watches for the Certificate secrets `tls-secrets`. + +10. When it finds all the tls-secret, it creates the related `StatefulSets` so that RabbitMQ database can be configured with TLS/SSL. + +In the next doc, we are going to show a step by step guide on how to configure a `RabbitMQ` database with TLS/SSL. \ No newline at end of file diff --git a/docs/guides/rabbitmq/tls/replicaset.md b/docs/guides/rabbitmq/tls/replicaset.md new file mode 100644 index 0000000000..9dcfd1a483 --- /dev/null +++ b/docs/guides/rabbitmq/tls/replicaset.md @@ -0,0 +1,266 @@ +--- +title: RabbitMQ ReplicaSet TLS/SSL Encryption +menu: + docs_{{ .version }}: + identifier: mg-tls-replicaset + name: Replicaset + parent: mg-tls + weight: 30 +menu_name: docs_{{ .version }} +section_menu_id: guides +--- + +> New to KubeDB? Please start [here](/docs/README.md). + +# Run RabbitMQ with TLS/SSL (Transport Encryption) + +KubeDB supports providing TLS/SSL encryption (via, `sslMode` and `clusterAuthMode`) for RabbitMQ. This tutorial will show you how to use KubeDB to run a RabbitMQ database with TLS/SSL encryption. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +- Install [`cert-manger`](https://cert-manager.io/docs/installation/) v1.0.0 or later to your cluster to manage your SSL/TLS certificates. + +- Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/setup/README.md). + +- To keep things isolated, this tutorial uses a separate namespace called `demo` throughout this tutorial. + + ```bash + $ kubectl create ns demo + namespace/demo created + ``` + +> Note: YAML files used in this tutorial are stored in [docs/examples/RabbitMQ](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/RabbitMQ) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs). + +## Overview + +KubeDB uses following crd fields to enable SSL/TLS encryption in RabbitMQ. 
+ +- `spec:` + - `sslMode` + - `tls:` + - `issuerRef` + - `certificate` + - `clusterAuthMode` + +Read about the fields in details in [RabbitMQ concept](/docs/guides/RabbitMQ/concepts/RabbitMQ.md), + +`sslMode`, and `tls` is applicable for all types of RabbitMQ (i.e., `standalone`, `replicaset` and `sharding`), while `clusterAuthMode` provides [ClusterAuthMode](https://docs.RabbitMQ.com/manual/reference/program/mongod/#cmdoption-mongod-clusterauthmode) for RabbitMQ clusters (i.e., `replicaset` and `sharding`). + +When, SSLMode is anything other than `disabled`, users must specify the `tls.issuerRef` field. KubeDB uses the `issuer` or `clusterIssuer` referenced in the `tls.issuerRef` field, and the certificate specs provided in `tls.certificate` to generate certificate secrets. These certificate secrets are then used to generate required certificates including `ca.crt`, `mongo.pem` and `client.pem`. + +The subject of `client.pem` certificate is added as `root` user in `$external` RabbitMQ database. So, user can use this client certificate for `RabbitMQ-X509` `authenticationMechanism`. + +## Create Issuer/ ClusterIssuer + +We are going to create an example `Issuer` that will be used throughout the duration of this tutorial to enable SSL/TLS in RabbitMQ. Alternatively, you can follow this [cert-manager tutorial](https://cert-manager.io/docs/configuration/ca/) to create your own `Issuer`. + +- Start off by generating you ca certificates using openssl. + +```bash +openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ./ca.key -out ./ca.crt -subj "/CN=mongo/O=kubedb" +``` + +- Now create a ca-secret using the certificate files you have just generated. + +```bash +kubectl create secret tls mongo-ca \ + --cert=ca.crt \ + --key=ca.key \ + --namespace=demo +``` + +Now, create an `Issuer` using the `ca-secret` you have just created. The `YAML` file looks like this: + +```yaml +apiVersion: cert-manager.io/v1 +kind: Issuer +metadata: + name: mongo-ca-issuer + namespace: demo +spec: + ca: + secretName: mongo-ca +``` + +Apply the `YAML` file: + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/RabbitMQ/tls/issuer.yaml +issuer.cert-manager.io/mongo-ca-issuer created +``` + +## TLS/SSL encryption in RabbitMQ Replicaset + +Below is the YAML for RabbitMQ Replicaset. Here, [`spec.sslMode`](/docs/guides/RabbitMQ/concepts/RabbitMQ.md#specsslMode) specifies `sslMode` for `replicaset` (which is `requireSSL`) and [`spec.clusterAuthMode`](/docs/guides/RabbitMQ/concepts/RabbitMQ.md#specclusterAuthMode) provides `clusterAuthMode` for RabbitMQ replicaset nodes (which is `x509`). + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: RabbitMQ +metadata: + name: mgo-rs-tls + namespace: demo +spec: + version: "4.4.26" + sslMode: requireSSL + tls: + issuerRef: + apiGroup: "cert-manager.io" + kind: Issuer + name: mongo-ca-issuer + clusterAuthMode: x509 + replicas: 4 + replicaSet: + name: rs0 + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi +``` + +### Deploy RabbitMQ Replicaset + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/RabbitMQ/tls/mg-replicaset-ssl.yaml +RabbitMQ.kubedb.com/mgo-rs-tls created +``` + +Now, wait until `mgo-rs-tls created` has status `Ready`. 
i.e, + +```bash +$ watch kubectl get mg -n demo +Every 2.0s: kubectl get RabbitMQ -n demo +NAME VERSION STATUS AGE +mgo-rs-tls 4.4.26 Ready 4m10s +``` + +### Verify TLS/SSL in RabbitMQ Replicaset + +Now, connect to this database through [mongo-shell](https://docs.RabbitMQ.com/v4.0/mongo/) and verify if `SSLMode` and `ClusterAuthMode` has been set up as intended. + +```bash +$ kubectl describe secret -n demo mgo-rs-tls-client-cert +Name: mgo-rs-tls-client-cert +Namespace: demo +Labels: +Annotations: cert-manager.io/alt-names: + cert-manager.io/certificate-name: mgo-rs-tls-client-cert + cert-manager.io/common-name: root + cert-manager.io/ip-sans: + cert-manager.io/issuer-group: cert-manager.io + cert-manager.io/issuer-kind: Issuer + cert-manager.io/issuer-name: mongo-ca-issuer + cert-manager.io/uri-sans: + +Type: kubernetes.io/tls + +Data +==== +ca.crt: 1147 bytes +tls.crt: 1172 bytes +tls.key: 1679 bytes +``` + +Now, Let's exec into a RabbitMQ container and find out the username to connect in a mongo shell, + +```bash +$ kubectl exec -it mgo-rs-tls-0 -n demo bash +root@mgo-rs-tls-0:/$ ls /var/run/RabbitMQ/tls +ca.crt client.pem mongo.pem +root@mgo-rs-tls-0:/$ openssl x509 -in /var/run/RabbitMQ/tls/client.pem -inform PEM -subject -nameopt RFC2253 -noout +subject=CN=root,O=kubedb +``` + +Now, we can connect using `CN=root,O=kubedb` as root to connect to the mongo shell, + +```bash +root@mgo-rs-tls-0:/$ mongo --tls --tlsCAFile /var/run/RabbitMQ/tls/ca.crt --tlsCertificateKeyFile /var/run/RabbitMQ/tls/client.pem admin --host localhost --authenticationMechanism RabbitMQ-X509 --authenticationDatabase='$external' -u "CN=root,O=kubedb" --quiet +Welcome to the RabbitMQ shell. +For interactive help, type "help". +For more comprehensive documentation, see + http://docs.RabbitMQ.org/ +Questions? Try the support group + http://groups.google.com/group/RabbitMQ-user +rs0:PRIMARY> +``` + +We are connected to the mongo shell. Let's run some command to verify the sslMode and the user, + +```bash +rs0:PRIMARY> db.adminCommand({ getParameter:1, sslMode:1 }) +{ + "sslMode" : "requireSSL", + "ok" : 1, + "$clusterTime" : { + "clusterTime" : Timestamp(1599490676, 1), + "signature" : { + "hash" : BinData(0,"/wQ4pf4HVi1T7SOyaB3pXO56j64="), + "keyId" : NumberLong("6869759546676477954") + } + }, + "operationTime" : Timestamp(1599490676, 1) +} + +rs0:PRIMARY> use $external +switched to db $external + +rs0:PRIMARY> show users +{ + "_id" : "$external.CN=root,O=kubedb", + "userId" : UUID("9cebbcf4-74bf-47dd-a485-1604125058da"), + "user" : "CN=root,O=kubedb", + "db" : "$external", + "roles" : [ + { + "role" : "root", + "db" : "admin" + } + ], + "mechanisms" : [ + "external" + ] +} +> exit +bye +``` + +You can see here that, `sslMode` is set to `requireSSL` and a user is created in `$external` with name `"CN=root,O=kubedb"`. + +## Changing the SSLMode & ClusterAuthMode + +User can update `sslMode` & `ClusterAuthMode` if needed. Some changes may be invalid from RabbitMQ end, like using `sslMode: disabled` with `clusterAuthMode: x509`. 
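
A valid change can be applied with a simple merge patch. The following is a sketch only; it assumes `preferSSL` is an accepted `sslMode` value for your version (see the concept doc linked above for the allowed values):

```bash
# A sketch of a valid sslMode transition; "preferSSL" is assumed to be
# an allowed mode for this version.
$ kubectl patch -n demo mg/mgo-rs-tls -p '{"spec":{"sslMode": "preferSSL"}}' --type="merge"
```
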
+ +The good thing is, **KubeDB operator will throw error for invalid SSL specs while creating/updating the RabbitMQ object.** i.e., + +```bash +$ kubectl patch -n demo mg/mgo-rs-tls -p '{"spec":{"sslMode": "disabled","clusterAuthMode": "x509"}}' --type="merge" +Error from server (Forbidden): admission webhook "RabbitMQ.validators.kubedb.com" denied the request: can't have disabled set to RabbitMQ.spec.sslMode when RabbitMQ.spec.clusterAuthMode is set to x509 +``` + +To **update from Keyfile Authentication to x.509 Authentication**, change the `sslMode` and `clusterAuthMode` in recommended sequence as suggested in [official documentation](https://docs.RabbitMQ.com/manual/tutorial/update-keyfile-to-x509/). Each time after changing the specs, follow the procedure that is described above to verify the changes of `sslMode` and `clusterAuthMode` inside the database. + +## Cleaning up + +To cleanup the Kubernetes resources created by this tutorial, run: + +```bash +kubectl delete RabbitMQ -n demo mgo-rs-tls +kubectl delete issuer -n demo mongo-ca-issuer +kubectl delete ns demo +``` + +## Next Steps + +- Detail concepts of [RabbitMQ object](/docs/guides/RabbitMQ/concepts/RabbitMQ.md). +- [Backup and Restore](/docs/guides/RabbitMQ/backup/overview/index.md) RabbitMQ databases using Stash. +- Initialize [RabbitMQ with Script](/docs/guides/RabbitMQ/initialization/using-script.md). +- Monitor your RabbitMQ database with KubeDB using [out-of-the-box Prometheus operator](/docs/guides/RabbitMQ/monitoring/using-prometheus-operator.md). +- Monitor your RabbitMQ database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/guides/RabbitMQ/monitoring/using-builtin-prometheus.md). +- Use [private Docker registry](/docs/guides/RabbitMQ/private-registry/using-private-registry.md) to deploy RabbitMQ with KubeDB. +- Use [kubedb cli](/docs/guides/RabbitMQ/cli/cli.md) to manage databases like kubectl for Kubernetes. +- Detail concepts of [RabbitMQ object](/docs/guides/RabbitMQ/concepts/RabbitMQ.md). +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md). diff --git a/docs/guides/rabbitmq/tls/sharding.md b/docs/guides/rabbitmq/tls/sharding.md new file mode 100644 index 0000000000..90cdcd082e --- /dev/null +++ b/docs/guides/rabbitmq/tls/sharding.md @@ -0,0 +1,274 @@ +--- +title: RabbitMQ Shard TLS/SSL Encryption +menu: + docs_{{ .version }}: + identifier: mg-tls-shard + name: Sharding + parent: mg-tls + weight: 40 +menu_name: docs_{{ .version }} +section_menu_id: guides +--- + +> New to KubeDB? Please start [here](/docs/README.md). + +# Run RabbitMQ with TLS/SSL (Transport Encryption) + +KubeDB supports providing TLS/SSL encryption (via, `sslMode` and `clusterAuthMode`) for RabbitMQ. This tutorial will show you how to use KubeDB to run a RabbitMQ database with TLS/SSL encryption. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +- Install [`cert-manger`](https://cert-manager.io/docs/installation/) v1.0.0 or later to your cluster to manage your SSL/TLS certificates. + +- Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/setup/README.md). + +- To keep things isolated, this tutorial uses a separate namespace called `demo` throughout this tutorial. 
+ + ```bash + $ kubectl create ns demo + namespace/demo created + ``` + +> Note: YAML files used in this tutorial are stored in [docs/examples/RabbitMQ](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/RabbitMQ) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs). + +## Overview + +KubeDB uses following crd fields to enable SSL/TLS encryption in RabbitMQ. + +- `spec:` + - `sslMode` + - `tls:` + - `issuerRef` + - `certificate` + - `clusterAuthMode` + +Read about the fields in details in [RabbitMQ concept](/docs/guides/RabbitMQ/concepts/RabbitMQ.md), + +`sslMode`, and `tls` is applicable for all types of RabbitMQ (i.e., `standalone`, `replicaset` and `sharding`), while `clusterAuthMode` provides [ClusterAuthMode](https://docs.RabbitMQ.com/manual/reference/program/mongod/#cmdoption-mongod-clusterauthmode) for RabbitMQ clusters (i.e., `replicaset` and `sharding`). + +When, SSLMode is anything other than `disabled`, users must specify the `tls.issuerRef` field. KubeDB uses the `issuer` or `clusterIssuer` referenced in the `tls.issuerRef` field, and the certificate specs provided in `tls.certificate` to generate certificate secrets. These certificate secrets are then used to generate required certificates including `ca.crt`, `mongo.pem` and `client.pem`. + +The subject of `client.pem` certificate is added as `root` user in `$external` RabbitMQ database. So, user can use this client certificate for `RabbitMQ-X509` `authenticationMechanism`. + +## Create Issuer/ ClusterIssuer + +We are going to create an example `Issuer` that will be used throughout the duration of this tutorial to enable SSL/TLS in RabbitMQ. Alternatively, you can follow this [cert-manager tutorial](https://cert-manager.io/docs/configuration/ca/) to create your own `Issuer`. + +- Start off by generating you ca certificates using openssl. + +```bash +openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ./ca.key -out ./ca.crt -subj "/CN=mongo/O=kubedb" +``` + +- Now create a ca-secret using the certificate files you have just generated. + +```bash +kubectl create secret tls mongo-ca \ + --cert=ca.crt \ + --key=ca.key \ + --namespace=demo +``` + +Now, create an `Issuer` using the `ca-secret` you have just created. The `YAML` file looks like this: + +```yaml +apiVersion: cert-manager.io/v1 +kind: Issuer +metadata: + name: mongo-ca-issuer + namespace: demo +spec: + ca: + secretName: mongo-ca +``` + +Apply the `YAML` file: + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/RabbitMQ/tls/issuer.yaml +issuer.cert-manager.io/mongo-ca-issuer created +``` + +## TLS/SSL encryption in RabbitMQ Sharding + +Below is the YAML for RabbitMQ Sharding. Here, [`spec.sslMode`](/docs/guides/RabbitMQ/concepts/RabbitMQ.md#specsslMode) specifies `sslMode` for `sharding` and [`spec.clusterAuthMode`](/docs/guides/RabbitMQ/concepts/RabbitMQ.md#specclusterAuthMode) provides `clusterAuthMode` for sharding servers. 
+ +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: RabbitMQ +metadata: + name: mongo-sh-tls + namespace: demo +spec: + version: "4.4.26" + sslMode: requireSSL + tls: + issuerRef: + apiGroup: "cert-manager.io" + kind: Issuer + name: mongo-ca-issuer + clusterAuthMode: x509 + shardTopology: + configServer: + replicas: 2 + storage: + resources: + requests: + storage: 1Gi + storageClassName: standard + mongos: + replicas: 2 + shard: + replicas: 2 + shards: 2 + storage: + resources: + requests: + storage: 1Gi + storageClassName: standard + storageType: Durable + terminationPolicy: WipeOut +``` + +### Deploy RabbitMQ Sharding + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/RabbitMQ/tls/mg-shard-ssl.yaml +RabbitMQ.kubedb.com/mongo-sh-tls created +``` + +Now, wait until `mongo-sh-tls created` has status `Ready`. ie, + +```bash +$ watch kubectl get mg -n demo +Every 2.0s: kubectl get RabbitMQ -n demo +NAME VERSION STATUS AGE +mongo-sh-tls 4.4.26 Ready 4m24s +``` + +### Verify TLS/SSL in RabbitMQ Sharding + +Now, connect to `mongos` component of this database through [mongo-shell](https://docs.RabbitMQ.com/v4.0/mongo/) and verify if `SSLMode` and `ClusterAuthMode` has been set up as intended. + +```bash +$ kubectl describe secret -n demo mongo-sh-tls-client-cert +Name: mongo-sh-tls-client-cert +Namespace: demo +Labels: +Annotations: cert-manager.io/alt-names: + cert-manager.io/certificate-name: mongo-sh-tls-client-cert + cert-manager.io/common-name: root + cert-manager.io/ip-sans: + cert-manager.io/issuer-group: cert-manager.io + cert-manager.io/issuer-kind: Issuer + cert-manager.io/issuer-name: mongo-ca-issuer + cert-manager.io/uri-sans: + +Type: kubernetes.io/tls + +Data +==== +ca.crt: 1147 bytes +tls.crt: 1172 bytes +tls.key: 1679 bytes +``` + +Now, Let's exec into a RabbitMQ container and find out the username to connect in a mongo shell, + +```bash +$ kubectl exec -it mongo-sh-tls-mongos-0 -n demo bash +root@mongo-sh-tls-mongos-0:/$ ls /var/run/RabbitMQ/tls +ca.crt client.pem mongo.pem +RabbitMQ@mgo-sh-tls-mongos-0:/$ openssl x509 -in /var/run/RabbitMQ/tls/client.pem -inform PEM -subject -nameopt RFC2253 -noout +subject=CN=root,O=kubedb +``` + +Now, we can connect using `CN=root,O=kubedb` as root to connect to the mongo shell, + +```bash +root@mongo-sh-tls-mongos-0:/# mongo --tls --tlsCAFile /var/run/RabbitMQ/tls/ca.crt --tlsCertificateKeyFile /var/run/RabbitMQ/tls/client.pem admin --host localhost --authenticationMechanism RabbitMQ-X509 --authenticationDatabase='$external' -u "CN=root,O=kubedb" --quiet +Welcome to the RabbitMQ shell. +For interactive help, type "help". +For more comprehensive documentation, see + http://docs.RabbitMQ.org/ +Questions? Try the support group + http://groups.google.com/group/RabbitMQ-user +mongos> +``` + +We are connected to the mongo shell. 
Let's run some command to verify the sslMode and the user, + +```bash +mongos> db.adminCommand({ getParameter:1, sslMode:1 }) +{ + "sslMode" : "requireSSL", + "ok" : 1, + "operationTime" : Timestamp(1599491398, 1), + "$clusterTime" : { + "clusterTime" : Timestamp(1599491398, 1), + "signature" : { + "hash" : BinData(0,"cn2Mhfy2blonon3jPz6Daen0nnc="), + "keyId" : NumberLong("6869760899591176209") + } + } +} +mongos> use $external +switched to db $external +mongos> show users +{ + "_id" : "$external.CN=root,O=kubedb", + "userId" : UUID("4865dda6-5e31-4b79-a085-7d6fea51c9be"), + "user" : "CN=root,O=kubedb", + "db" : "$external", + "roles" : [ + { + "role" : "root", + "db" : "admin" + } + ], + "mechanisms" : [ + "external" + ] +} +> exit +bye +``` + +You can see here that, `sslMode` is set to `requireSSL` and `clusterAuthMode` is set to `x509` and also an user is created in `$external` with name `"CN=root,O=kubedb"`. + +## Changing the SSLMode & ClusterAuthMode + +User can update `sslMode` & `ClusterAuthMode` if needed. Some changes may be invalid from RabbitMQ end, like using `sslMode: disabled` with `clusterAuthMode: x509`. + +The good thing is, **KubeDB operator will throw error for invalid SSL specs while creating/updating the RabbitMQ object.** i.e., + +```bash +$ kubectl patch -n demo mg/mgo-sh-tls -p '{"spec":{"sslMode": "disabled","clusterAuthMode": "x509"}}' --type="merge" +Error from server (Forbidden): admission webhook "RabbitMQ.validators.kubedb.com" denied the request: can't have disabled set to RabbitMQ.spec.sslMode when RabbitMQ.spec.clusterAuthMode is set to x509 +``` + +To **update from Keyfile Authentication to x.509 Authentication**, change the `sslMode` and `clusterAuthMode` in recommended sequence as suggested in [official documentation](https://docs.RabbitMQ.com/manual/tutorial/update-keyfile-to-x509/). Each time after changing the specs, follow the procedure that is described above to verify the changes of `sslMode` and `clusterAuthMode` inside the database. + +## Cleaning up + +To cleanup the Kubernetes resources created by this tutorial, run: + +```bash +kubectl delete RabbitMQ -n demo mongo-sh-tls +kubectl delete issuer -n demo mongo-ca-issuer +kubectl delete ns demo +``` + +## Next Steps + +- Detail concepts of [RabbitMQ object](/docs/guides/RabbitMQ/concepts/RabbitMQ.md). +- [Backup and Restore](/docs/guides/RabbitMQ/backup/overview/index.md) RabbitMQ databases using Stash. +- Initialize [RabbitMQ with Script](/docs/guides/RabbitMQ/initialization/using-script.md). +- Monitor your RabbitMQ database with KubeDB using [out-of-the-box Prometheus operator](/docs/guides/RabbitMQ/monitoring/using-prometheus-operator.md). +- Monitor your RabbitMQ database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/guides/RabbitMQ/monitoring/using-builtin-prometheus.md). +- Use [private Docker registry](/docs/guides/RabbitMQ/private-registry/using-private-registry.md) to deploy RabbitMQ with KubeDB. +- Use [kubedb cli](/docs/guides/RabbitMQ/cli/cli.md) to manage databases like kubectl for Kubernetes. +- Detail concepts of [RabbitMQ object](/docs/guides/RabbitMQ/concepts/RabbitMQ.md). +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md). 
diff --git a/docs/guides/rabbitmq/tls/standalone.md b/docs/guides/rabbitmq/tls/standalone.md new file mode 100644 index 0000000000..c23b6f48ef --- /dev/null +++ b/docs/guides/rabbitmq/tls/standalone.md @@ -0,0 +1,245 @@ +--- +title: RabbitMQ Standalone TLS/SSL Encryption +menu: + docs_{{ .version }}: + identifier: mg-tls-standalone + name: Standalone + parent: mg-tls + weight: 20 +menu_name: docs_{{ .version }} +section_menu_id: guides +--- + +> New to KubeDB? Please start [here](/docs/README.md). + +# Run RabbitMQ with TLS/SSL (Transport Encryption) + +KubeDB supports providing TLS/SSL encryption (via, `sslMode` and `clusterAuthMode`) for RabbitMQ. This tutorial will show you how to use KubeDB to run a RabbitMQ database with TLS/SSL encryption. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +- Install [`cert-manger`](https://cert-manager.io/docs/installation/) v1.0.0 or later to your cluster to manage your SSL/TLS certificates. + +- Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/setup/README.md). + +- To keep things isolated, this tutorial uses a separate namespace called `demo` throughout this tutorial. + + ```bash + $ kubectl create ns demo + namespace/demo created + ``` + +> Note: YAML files used in this tutorial are stored in [docs/examples/RabbitMQ](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/RabbitMQ) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs). + +## Overview + +KubeDB uses following crd fields to enable SSL/TLS encryption in RabbitMQ. + +- `spec:` + - `sslMode` + - `tls:` + - `issuerRef` + - `certificate` + - `clusterAuthMode` + +Read about the fields in details in [RabbitMQ concept](/docs/guides/RabbitMQ/concepts/RabbitMQ.md), + +`sslMode`, and `tls` is applicable for all types of RabbitMQ (i.e., `standalone`, `replicaset` and `sharding`), while `clusterAuthMode` provides [ClusterAuthMode](https://docs.RabbitMQ.com/manual/reference/program/mongod/#cmdoption-mongod-clusterauthmode) for RabbitMQ clusters (i.e., `replicaset` and `sharding`). + +When, SSLMode is anything other than `disabled`, users must specify the `tls.issuerRef` field. KubeDB uses the `issuer` or `clusterIssuer` referenced in the `tls.issuerRef` field, and the certificate specs provided in `tls.certificate` to generate certificate secrets. These certificate secrets are then used to generate required certificates including `ca.crt`, `mongo.pem` and `client.pem`. + +The subject of `client.pem` certificate is added as `root` user in `$external` RabbitMQ database. So, user can use this client certificate for `RabbitMQ-X509` `authenticationMechanism`. + +## Create Issuer/ ClusterIssuer + +We are going to create an example `Issuer` that will be used throughout the duration of this tutorial to enable SSL/TLS in RabbitMQ. Alternatively, you can follow this [cert-manager tutorial](https://cert-manager.io/docs/configuration/ca/) to create your own `Issuer`. + +- Start off by generating you ca certificates using openssl. + +```bash +openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ./ca.key -out ./ca.crt -subj "/CN=mongo/O=kubedb" +``` + +- Now create a ca-secret using the certificate files you have just generated. 

```bash
kubectl create secret tls mongo-ca \
  --cert=ca.crt \
  --key=ca.key \
  --namespace=demo
```

Now, create an `Issuer` using the `ca-secret` you have just created. The `YAML` file looks like this:

```yaml
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: mongo-ca-issuer
  namespace: demo
spec:
  ca:
    secretName: mongo-ca
```

Apply the `YAML` file:

```bash
$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/RabbitMQ/tls/issuer.yaml
issuer.cert-manager.io/mongo-ca-issuer created
```

## TLS/SSL encryption in RabbitMQ Standalone

Below is the YAML for RabbitMQ Standalone. Here, [`spec.sslMode`](/docs/guides/rabbitmq/concepts/rabbitmq.md#specsslmode) specifies the `sslMode` for the `standalone` (which is `requireSSL`).

```yaml
apiVersion: kubedb.com/v1alpha2
kind: RabbitMQ
metadata:
  name: mgo-tls
  namespace: demo
spec:
  version: "4.4.26"
  sslMode: requireSSL
  tls:
    issuerRef:
      apiGroup: "cert-manager.io"
      kind: Issuer
      name: mongo-ca-issuer
  storage:
    storageClassName: "standard"
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi
```

### Deploy RabbitMQ Standalone

```bash
$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/RabbitMQ/tls/mg-standalone-ssl.yaml
RabbitMQ.kubedb.com/mgo-tls created
```

Now, wait until `mgo-tls` has status `Ready`, i.e.,

```bash
$ watch kubectl get mg -n demo
Every 2.0s: kubectl get RabbitMQ -n demo
NAME      VERSION   STATUS   AGE
mgo-tls   4.4.26    Ready    14s
```

### Verify TLS/SSL in RabbitMQ Standalone

Now, connect to this database through the [mongo-shell](https://docs.mongodb.com/v4.0/mongo/) and verify if `sslMode` has been set up as intended (i.e., `requireSSL`).

```bash
$ kubectl describe secret -n demo mgo-tls-client-cert
Name:         mgo-tls-client-cert
Namespace:    demo
Labels:       <none>
Annotations:  cert-manager.io/alt-names:
              cert-manager.io/certificate-name: mgo-tls-client-cert
              cert-manager.io/common-name: root
              cert-manager.io/ip-sans:
              cert-manager.io/issuer-group: cert-manager.io
              cert-manager.io/issuer-kind: Issuer
              cert-manager.io/issuer-name: mongo-ca-issuer
              cert-manager.io/uri-sans:

Type:  kubernetes.io/tls

Data
====
tls.crt:  1172 bytes
tls.key:  1679 bytes
ca.crt:   1147 bytes
```

Now, let's exec into a RabbitMQ container and find out the username we need to connect through the mongo shell,

```bash
$ kubectl exec -it mgo-tls-0 -n demo -- bash
RabbitMQ@mgo-tls-0:/$ ls /var/run/RabbitMQ/tls
ca.crt  client.pem  mongo.pem
RabbitMQ@mgo-tls-0:/$ openssl x509 -in /var/run/RabbitMQ/tls/client.pem -inform PEM -subject -nameopt RFC2253 -noout
subject=CN=root,O=kubedb
```

Now, we can connect as the root user `CN=root,O=kubedb` to the mongo shell,

```bash
RabbitMQ@mgo-tls-0:/$ mongo --tls --tlsCAFile /var/run/RabbitMQ/tls/ca.crt --tlsCertificateKeyFile /var/run/RabbitMQ/tls/client.pem admin --host localhost --authenticationMechanism RabbitMQ-X509 --authenticationDatabase='$external' -u "CN=root,O=kubedb" --quiet
>
```

We are connected to the mongo shell.
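Before running queries, you can also confirm the TLS handshake independently of the shell. This is an optional sketch, assuming the server listens on the default port `27017` inside the pod:

```bash
# A successful handshake verified against our CA confirms the server is
# actually serving TLS (hypothetical spot check, run from inside the pod).
openssl s_client -connect localhost:27017 \
  -CAfile /var/run/RabbitMQ/tls/ca.crt </dev/null 2>/dev/null | head -n 5
```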
Let's run some commands to verify the sslMode and the user,

```bash
> db.adminCommand({ getParameter:1, sslMode:1 })
{ "sslMode" : "requireSSL", "ok" : 1 }

> use $external
switched to db $external

> show users
{
  "_id" : "$external.CN=root,O=kubedb",
  "userId" : UUID("d2ddf121-9398-400b-b477-0e8bcdd47746"),
  "user" : "CN=root,O=kubedb",
  "db" : "$external",
  "roles" : [
    {
      "role" : "root",
      "db" : "admin"
    }
  ],
  "mechanisms" : [
    "external"
  ]
}
> exit
bye
```

You can see here that `sslMode` is set to `requireSSL`, and a user named `"CN=root,O=kubedb"` has been created in `$external`.

## Changing the SSLMode & ClusterAuthMode

Users can update `sslMode` & `ClusterAuthMode` if needed. Some changes may be invalid from the RabbitMQ end, like using `sslMode: disabled` with `clusterAuthMode: x509`.

The good thing is, **the KubeDB operator will throw an error for invalid SSL specs while creating/updating the RabbitMQ object**, i.e.,

```bash
$ kubectl patch -n demo mg/mgo-tls -p '{"spec":{"sslMode": "disabled","clusterAuthMode": "x509"}}' --type="merge"
Error from server (Forbidden): admission webhook "RabbitMQ.validators.kubedb.com" denied the request: can't have disabled set to RabbitMQ.spec.sslMode when RabbitMQ.spec.clusterAuthMode is set to x509
```

To **update from Keyfile Authentication to x.509 Authentication**, change the `sslMode` and `clusterAuthMode` in the recommended sequence suggested in the [official documentation](https://docs.mongodb.com/manual/tutorial/update-keyfile-to-x509/). Each time after changing the specs, follow the procedure described above to verify the changes of `sslMode` and `clusterAuthMode` inside the database.

## Cleaning up

To clean up the Kubernetes resources created by this tutorial, run:

```bash
kubectl delete RabbitMQ -n demo mgo-tls
kubectl delete issuer -n demo mongo-ca-issuer
kubectl delete ns demo
```

## Next Steps

- Detail concepts of [RabbitMQ object](/docs/guides/rabbitmq/concepts/rabbitmq.md).
- [Backup and Restore](/docs/guides/rabbitmq/backup/overview/index.md) RabbitMQ databases using Stash.
- Initialize [RabbitMQ with Script](/docs/guides/rabbitmq/initialization/using-script.md).
- Monitor your RabbitMQ database with KubeDB using [out-of-the-box Prometheus operator](/docs/guides/rabbitmq/monitoring/using-prometheus-operator.md).
- Monitor your RabbitMQ database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/guides/rabbitmq/monitoring/using-builtin-prometheus.md).
- Use [private Docker registry](/docs/guides/rabbitmq/private-registry/using-private-registry.md) to deploy RabbitMQ with KubeDB.
- Use [kubedb cli](/docs/guides/rabbitmq/cli/cli.md) to manage databases like kubectl for Kubernetes.
- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).
diff --git a/docs/guides/rabbitmq/update-version/_index.md b/docs/guides/rabbitmq/update-version/_index.md
new file mode 100644
index 0000000000..01c1abbe9d
--- /dev/null
+++ b/docs/guides/rabbitmq/update-version/_index.md
@@ -0,0 +1,10 @@
---
title: Updating RabbitMQ
menu:
  docs_{{ .version }}:
    identifier: mg-updating
    name: UpdateVersion
    parent: rm-guides
    weight: 42
menu_name: docs_{{ .version }}
---
\ No newline at end of file
diff --git a/docs/guides/rabbitmq/update-version/overview.md b/docs/guides/rabbitmq/update-version/overview.md
new file mode 100644
index 0000000000..f82f62fc91
--- /dev/null
+++ b/docs/guides/rabbitmq/update-version/overview.md
@@ -0,0 +1,54 @@
---
title: Updating RabbitMQ Overview
menu:
  docs_{{ .version }}:
    identifier: mg-updating-overview
    name: Overview
    parent: mg-updating
    weight: 10
menu_name: docs_{{ .version }}
section_menu_id: guides
---

> New to KubeDB? Please start [here](/docs/README.md).

# Updating RabbitMQ Version Overview

This guide will give you an overview of how the KubeDB Ops-manager operator updates the version of a `RabbitMQ` database.

## Before You Begin

- You should be familiar with the following `KubeDB` concepts:
  - [RabbitMQ](/docs/guides/rabbitmq/concepts/rabbitmq.md)
  - [RabbitMQOpsRequest](/docs/guides/rabbitmq/concepts/opsrequest.md)

## How the Update Version Process Works

The following diagram shows how the KubeDB Ops-manager operator updates the version of `RabbitMQ`. Open the image in a new tab to see the enlarged version.

Fig: updating Process of RabbitMQ

The updating process consists of the following steps:

1. At first, a user creates a `RabbitMQ` Custom Resource (CR).

2. The `KubeDB` Provisioner operator watches the `RabbitMQ` CR.

3. When the operator finds a `RabbitMQ` CR, it creates the required number of `StatefulSets` and other necessary resources such as secrets, services, etc.

4. Then, in order to update the version of the `RabbitMQ` database, the user creates a `RabbitMQOpsRequest` CR with the desired version.

5. The `KubeDB` Ops-manager operator watches the `RabbitMQOpsRequest` CR.

6. When it finds a `RabbitMQOpsRequest` CR, it halts the `RabbitMQ` object which is referred from the `RabbitMQOpsRequest`. So, the `KubeDB` Provisioner operator doesn't perform any operations on the `RabbitMQ` object during the updating process.

7. By looking at the target version from the `RabbitMQOpsRequest` CR, the `KubeDB` Ops-manager operator updates the images of all the `StatefulSets`. After each image update, the operator performs checks, such as whether the oplog is synced and whether the database size is close to its previous value.

8. After successfully updating the `StatefulSets` and their `Pods` images, the `KubeDB` Ops-manager operator updates the image of the `RabbitMQ` object to reflect the updated state of the database.

9. After the `RabbitMQ` object is successfully updated, the `KubeDB` Ops-manager operator resumes the `RabbitMQ` object so that the `KubeDB` Provisioner operator can resume its usual operations.

In the next doc, we are going to show a step-by-step guide on updating a RabbitMQ database using the UpdateVersion operation.
\ No newline at end of file
diff --git a/docs/guides/rabbitmq/update-version/replicaset.md b/docs/guides/rabbitmq/update-version/replicaset.md
new file mode 100644
index 0000000000..a034d81556
--- /dev/null
+++ b/docs/guides/rabbitmq/update-version/replicaset.md
@@ -0,0 +1,265 @@
---
title: Updating RabbitMQ Replicaset
menu:
  docs_{{ .version }}:
    identifier: mg-updating-replicaset
    name: ReplicaSet
    parent: mg-updating
    weight: 30
menu_name: docs_{{ .version }}
section_menu_id: guides
---

> New to KubeDB? Please start [here](/docs/README.md).

# Update Version of RabbitMQ ReplicaSet

This guide will show you how to use the `KubeDB` Ops-manager operator to update the version of a `RabbitMQ` ReplicaSet.

## Before You Begin

- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).

- Install the `KubeDB` Provisioner and Ops-manager operator in your cluster following the steps [here](/docs/setup/README.md).

- You should be familiar with the following `KubeDB` concepts:
  - [RabbitMQ](/docs/guides/rabbitmq/concepts/rabbitmq.md)
  - [Replicaset](/docs/guides/rabbitmq/clustering/replicaset.md)
  - [RabbitMQOpsRequest](/docs/guides/rabbitmq/concepts/opsrequest.md)
  - [Updating Overview](/docs/guides/rabbitmq/update-version/overview.md)

To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.

```bash
$ kubectl create ns demo
namespace/demo created
```

> **Note:** YAML files used in this tutorial are stored in the [docs/examples/RabbitMQ](/docs/examples/RabbitMQ) directory of the [kubedb/docs](https://github.com/kubedb/docs) repository.
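Before moving on, you can optionally confirm that both operators are running. The following is only a sketch; the namespace and label selector are assumptions that depend on how KubeDB was installed:

```bash
# Expect the provisioner and ops-manager pods to be in Running state
# (namespace and label selector are assumptions; adjust to your setup).
kubectl get pods -n kubedb -l 'app.kubernetes.io/instance=kubedb'
```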

## Prepare RabbitMQ ReplicaSet Database

Now, we are going to deploy a `RabbitMQ` replicaset database with version `4.4.26`.

### Deploy RabbitMQ replicaset

In this section, we are going to deploy a RabbitMQ replicaset database. Then, in the next section we will update the version of the database using the `RabbitMQOpsRequest` CRD. Below is the YAML of the `RabbitMQ` CR that we are going to create,

```yaml
apiVersion: kubedb.com/v1alpha2
kind: RabbitMQ
metadata:
  name: mg-replicaset
  namespace: demo
spec:
  version: "4.4.26"
  replicaSet:
    name: "replicaset"
  replicas: 3
  storageType: Durable
  storage:
    storageClassName: "standard"
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi
```

Let's create the `RabbitMQ` CR we have shown above,

```bash
$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/RabbitMQ/update-version/mg-replicaset.yaml
RabbitMQ.kubedb.com/mg-replicaset created
```

Now, wait until `mg-replicaset` has status `Ready`, i.e.,

```bash
$ kubectl get RabbitMQ -n demo
NAME            VERSION   STATUS   AGE
mg-replicaset   4.4.26    Ready    109s
```

We are now ready to apply the `RabbitMQOpsRequest` CR to update this database.

### Update RabbitMQ Version

Here, we are going to update the `RabbitMQ` replicaset to version `4.4.26`.

#### Create RabbitMQOpsRequest

In order to update the version of the replicaset database, we have to create a `RabbitMQOpsRequest` CR with the desired version that is supported by `KubeDB`. Below is the YAML of the `RabbitMQOpsRequest` CR that we are going to create,

```yaml
apiVersion: ops.kubedb.com/v1alpha1
kind: RabbitMQOpsRequest
metadata:
  name: mops-replicaset-update
  namespace: demo
spec:
  type: UpdateVersion
  databaseRef:
    name: mg-replicaset
  updateVersion:
    targetVersion: 4.4.26
  readinessCriteria:
    oplogMaxLagSeconds: 20
    objectsCountDiffPercentage: 10
  timeout: 5m
  apply: IfReady
```

Here,

- `spec.databaseRef.name` specifies that we are performing the operation on the `mg-replicaset` RabbitMQ database.
- `spec.type` specifies that we are going to perform `UpdateVersion` on our database.
- `spec.updateVersion.targetVersion` specifies the expected version of the database, `4.4.26`.
- Have a look [here](/docs/guides/rabbitmq/concepts/opsrequest.md#specreadinesscriteria) at the respective sections to understand the `readinessCriteria`, `timeout` & `apply` fields.

Let's create the `RabbitMQOpsRequest` CR we have shown above,

```bash
$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/RabbitMQ/update-version/mops-update-replicaset.yaml
RabbitMQopsrequest.ops.kubedb.com/mops-replicaset-update created
```

#### Verify RabbitMQ version updated successfully

If everything goes well, the `KubeDB` Ops-manager operator will update the image of the `RabbitMQ` object and the related `StatefulSets` and `Pods`.

Let's wait for the `RabbitMQOpsRequest` to be `Successful`. Run the following command to watch the `RabbitMQOpsRequest` CR,

```bash
$ watch kubectl get RabbitMQopsrequest -n demo
Every 2.0s: kubectl get RabbitMQopsrequest -n demo
NAME                     TYPE            STATUS       AGE
mops-replicaset-update   UpdateVersion   Successful   84s
```

We can see from the above output that the `RabbitMQOpsRequest` has succeeded. If we describe the `RabbitMQOpsRequest` we will get an overview of the steps that were followed to update the database version.

```bash
$ kubectl describe RabbitMQopsrequest -n demo mops-replicaset-update
Name:         mops-replicaset-update
Namespace:    demo
Labels:       <none>
Annotations:  <none>
API Version:  ops.kubedb.com/v1alpha1
Kind:         RabbitMQOpsRequest
Metadata:
  Creation Timestamp:  2022-10-26T10:19:55Z
  Generation:          1
  Managed Fields:
    API Version:  ops.kubedb.com/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubectl.kubernetes.io/last-applied-configuration:
      f:spec:
        .:
        f:apply:
        f:databaseRef:
        f:readinessCriteria:
          .:
          f:objectsCountDiffPercentage:
          f:oplogMaxLagSeconds:
        f:timeout:
        f:type:
        f:updateVersion:
          .:
          f:targetVersion:
    Manager:      kubectl-client-side-apply
    Operation:    Update
    Time:         2022-10-26T10:19:55Z
    API Version:  ops.kubedb.com/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:status:
        .:
        f:conditions:
        f:observedGeneration:
        f:phase:
    Manager:         kubedb-ops-manager
    Operation:       Update
    Subresource:     status
    Time:            2022-10-26T10:23:09Z
  Resource Version:  607814
  UID:               38053605-47bd-4d94-9f53-ce9474ad0a98
Spec:
  Apply:  IfReady
  Database Ref:
    Name:  mg-replicaset
  Readiness Criteria:
    Objects Count Diff Percentage:  10
    Oplog Max Lag Seconds:          20
  Timeout:  5m
  Type:     UpdateVersion
  UpdateVersion:
    Target Version:  4.4.26
Status:
  Conditions:
    Last Transition Time:  2022-10-26T10:21:20Z
    Message:               RabbitMQ ops request is update-version database version
    Observed Generation:   1
    Reason:                UpdateVersion
    Status:                True
    Type:                  UpdateVersion
    Last Transition Time:  2022-10-26T10:21:39Z
    Message:               Successfully updated statefulsets update strategy type
    Observed Generation:   1
    Reason:                UpdateStatefulSets
    Status:                True
    Type:                  UpdateStatefulSets
    Last Transition Time:  2022-10-26T10:23:09Z
    Message:               Successfully Updated Standalone Image
    Observed Generation:   1
    Reason:                UpdateStandaloneImage
    Status:                True
    Type:                  UpdateStandaloneImage
    Last Transition Time:  2022-10-26T10:23:09Z
    Message:               Successfully completed the modification process.
    Observed Generation:   1
    Reason:                Successful
    Status:                True
    Type:                  Successful
  Observed Generation:  1
  Phase:                Successful
Events:
  Type    Reason                 Age    From                         Message
  ----    ------                 ----   ----                         -------
  Normal  PauseDatabase          2m27s  KubeDB Ops-manager Operator  Pausing RabbitMQ demo/mg-replicaset
  Normal  PauseDatabase          2m27s  KubeDB Ops-manager Operator  Successfully paused RabbitMQ demo/mg-replicaset
  Normal  Updating               2m27s  KubeDB Ops-manager Operator  Updating StatefulSets
  Normal  Updating               2m8s   KubeDB Ops-manager Operator  Successfully Updated StatefulSets
  Normal  UpdateStandaloneImage  38s    KubeDB Ops-manager Operator  Successfully Updated Standalone Image
  Normal  ResumeDatabase         38s    KubeDB Ops-manager Operator  Resuming RabbitMQ demo/mg-replicaset
  Normal  ResumeDatabase         38s    KubeDB Ops-manager Operator  Successfully resumed RabbitMQ demo/mg-replicaset
  Normal  Successful             38s    KubeDB Ops-manager Operator  Successfully Updated Database
```

Now, we are going to verify whether the `RabbitMQ` and the related `StatefulSets` and their `Pods` have the new version image. Let's check,

```bash
$ kubectl get mg -n demo mg-replicaset -o=jsonpath='{.spec.version}{"\n"}'
4.4.26

$ kubectl get sts -n demo mg-replicaset -o=jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
mongo:4.4.26

$ kubectl get pods -n demo mg-replicaset-0 -o=jsonpath='{.spec.containers[0].image}{"\n"}'
mongo:4.4.26
```

You can see from the above that our `RabbitMQ` replicaset database has been updated with the new version.
So, the UpdateVersion process has completed successfully.

## Cleaning Up

To clean up the Kubernetes resources created by this tutorial, run:

```bash
kubectl delete mg -n demo mg-replicaset
kubectl delete RabbitMQopsrequest -n demo mops-replicaset-update
```
\ No newline at end of file
diff --git a/docs/guides/rabbitmq/update-version/sharding.md b/docs/guides/rabbitmq/update-version/sharding.md
new file mode 100644
index 0000000000..61c9251342
--- /dev/null
+++ b/docs/guides/rabbitmq/update-version/sharding.md
@@ -0,0 +1,317 @@
---
title: Updating RabbitMQ Sharded Database
menu:
  docs_{{ .version }}:
    identifier: mg-updating-sharding
    name: Sharding
    parent: mg-updating
    weight: 40
menu_name: docs_{{ .version }}
section_menu_id: guides
---

> New to KubeDB? Please start [here](/docs/README.md).

# Update Version of RabbitMQ Sharded Database

This guide will show you how to use the `KubeDB` Ops-manager operator to update the version of a `RabbitMQ` Sharded Database.

## Before You Begin

- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).

- Install the `KubeDB` Provisioner and Ops-manager operator in your cluster following the steps [here](/docs/setup/README.md).

- You should be familiar with the following `KubeDB` concepts:
  - [RabbitMQ](/docs/guides/rabbitmq/concepts/rabbitmq.md)
  - [Sharding](/docs/guides/rabbitmq/clustering/sharding.md)
  - [RabbitMQOpsRequest](/docs/guides/rabbitmq/concepts/opsrequest.md)
  - [Updating Overview](/docs/guides/rabbitmq/update-version/overview.md)

To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.

```bash
$ kubectl create ns demo
namespace/demo created
```

> **Note:** YAML files used in this tutorial are stored in the [docs/examples/RabbitMQ](/docs/examples/RabbitMQ) directory of the [kubedb/docs](https://github.com/kubedb/docs) repository.

## Prepare RabbitMQ Sharded Database

Now, we are going to deploy a `RabbitMQ` sharded database with version `4.4.26`.

### Deploy RabbitMQ Sharded Database

In this section, we are going to deploy a RabbitMQ sharded database. Then, in the next section we will update the version of the database using the `RabbitMQOpsRequest` CRD. Below is the YAML of the `RabbitMQ` CR that we are going to create,

```yaml
apiVersion: kubedb.com/v1alpha2
kind: RabbitMQ
metadata:
  name: mg-sharding
  namespace: demo
spec:
  version: 4.4.26
  shardTopology:
    configServer:
      replicas: 2
      storage:
        resources:
          requests:
            storage: 1Gi
        storageClassName: standard
    mongos:
      replicas: 2
    shard:
      replicas: 2
      shards: 3
      storage:
        resources:
          requests:
            storage: 1Gi
        storageClassName: standard
```

Let's create the `RabbitMQ` CR we have shown above,

```bash
$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/RabbitMQ/update-version/mg-shard.yaml
RabbitMQ.kubedb.com/mg-sharding created
```

Now, wait until `mg-sharding` has status `Ready`, i.e.,

```bash
$ kubectl get RabbitMQ -n demo
NAME          VERSION   STATUS   AGE
mg-sharding   4.4.26    Ready    2m9s
```

We are now ready to apply the `RabbitMQOpsRequest` CR to update this database.

### Update RabbitMQ Version

Here, we are going to update the `RabbitMQ` sharded database to version `4.4.26`.
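Before creating the ops request, you can confirm that the target version is present in the KubeDB catalog. This is a sketch; the catalog resource name follows KubeDB's `<db>versions` convention and may differ in your installation:

```bash
# The target version of the ops request must be listed here
# (resource name is an assumption; check `kubectl api-resources` if unsure).
kubectl get rabbitmqversions
```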

#### Create RabbitMQOpsRequest

In order to update the sharded database, we have to create a `RabbitMQOpsRequest` CR with the desired version that is supported by `KubeDB`. Below is the YAML of the `RabbitMQOpsRequest` CR that we are going to create,

```yaml
apiVersion: ops.kubedb.com/v1alpha1
kind: RabbitMQOpsRequest
metadata:
  name: mops-shard-update
  namespace: demo
spec:
  type: UpdateVersion
  databaseRef:
    name: mg-sharding
  updateVersion:
    targetVersion: 4.4.26
  readinessCriteria:
    oplogMaxLagSeconds: 20
    objectsCountDiffPercentage: 10
  timeout: 5m
  apply: IfReady
```

Here,

- `spec.databaseRef.name` specifies that we are performing the operation on the `mg-sharding` RabbitMQ database.
- `spec.type` specifies that we are going to perform `UpdateVersion` on our database.
- `spec.updateVersion.targetVersion` specifies the expected version of the database, `4.4.26`.
- Have a look [here](/docs/guides/rabbitmq/concepts/opsrequest.md#specreadinesscriteria) at the respective sections to understand the `readinessCriteria`, `timeout` & `apply` fields.

Let's create the `RabbitMQOpsRequest` CR we have shown above,

```bash
$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/RabbitMQ/update-version/mops-update-shard.yaml
RabbitMQopsrequest.ops.kubedb.com/mops-shard-update created
```

#### Verify RabbitMQ version updated successfully

If everything goes well, the `KubeDB` Ops-manager operator will update the image of the `RabbitMQ` object and the related `StatefulSets` and `Pods`.

Let's wait for the `RabbitMQOpsRequest` to be `Successful`. Run the following command to watch the `RabbitMQOpsRequest` CR,

```bash
$ watch kubectl get RabbitMQopsrequest -n demo
Every 2.0s: kubectl get RabbitMQopsrequest -n demo
NAME                TYPE            STATUS       AGE
mops-shard-update   UpdateVersion   Successful   2m31s
```

We can see from the above output that the `RabbitMQOpsRequest` has succeeded. If we describe the `RabbitMQOpsRequest` we will get an overview of the steps that were followed to update the database.

```bash
$ kubectl describe RabbitMQopsrequest -n demo mops-shard-update
Name:         mops-shard-update
Namespace:    demo
Labels:       <none>
Annotations:  <none>
API Version:  ops.kubedb.com/v1alpha1
Kind:         RabbitMQOpsRequest
Metadata:
  Creation Timestamp:  2022-10-26T10:27:24Z
  Generation:          1
  Managed Fields:
    API Version:  ops.kubedb.com/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubectl.kubernetes.io/last-applied-configuration:
      f:spec:
        .:
        f:apply:
        f:databaseRef:
        f:readinessCriteria:
          .:
          f:objectsCountDiffPercentage:
          f:oplogMaxLagSeconds:
        f:timeout:
        f:type:
        f:updateVersion:
          .:
          f:targetVersion:
    Manager:      kubectl-client-side-apply
    Operation:    Update
    Time:         2022-10-26T10:27:24Z
    API Version:  ops.kubedb.com/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:status:
        .:
        f:conditions:
        f:observedGeneration:
        f:phase:
    Manager:         kubedb-ops-manager
    Operation:       Update
    Subresource:     status
    Time:            2022-10-26T10:36:12Z
  Resource Version:  610193
  UID:               6459a314-c759-4002-9dff-106b836c4db0
Spec:
  Apply:  IfReady
  Database Ref:
    Name:  mg-sharding
  Readiness Criteria:
    Objects Count Diff Percentage:  10
    Oplog Max Lag Seconds:          20
  Timeout:  5m
  Type:     UpdateVersion
  UpdateVersion:
    Target Version:  4.4.26
Status:
  Conditions:
    Last Transition Time:  2022-10-26T10:36:12Z
    Message:               connection() error occurred during connection handshake: dial tcp 10.244.0.125:27017: i/o timeout
    Observed Generation:   1
    Reason:                Failed
    Status:                False
    Type:                  UpdateVersion
    Last Transition Time:  2022-10-26T10:29:29Z
    Message:               Successfully stopped RabbitMQ load balancer
    Observed Generation:   1
    Reason:                StoppingBalancer
    Status:                True
    Type:                  StoppingBalancer
    Last Transition Time:  2022-10-26T10:30:54Z
    Message:               Successfully updated statefulsets update strategy type
    Observed Generation:   1
    Reason:                UpdateStatefulSets
    Status:                True
    Type:                  UpdateStatefulSets
    Last Transition Time:  2022-10-26T10:32:00Z
    Message:               Successfully Updated ConfigServer Image
    Observed Generation:   1
    Reason:                UpdateConfigServerImage
    Status:                True
    Type:                  UpdateConfigServerImage
    Last Transition Time:  2022-10-26T10:35:32Z
    Message:               Successfully Updated Shard Image
    Observed Generation:   1
    Reason:                UpdateShardImage
    Status:                True
    Type:                  UpdateShardImage
    Last Transition Time:  2022-10-26T10:36:07Z
    Message:               Successfully Updated Mongos Image
    Observed Generation:   1
    Reason:                UpdateMongosImage
    Status:                True
    Type:                  UpdateMongosImage
    Last Transition Time:  2022-10-26T10:36:07Z
    Message:               Successfully Started RabbitMQ load balancer
    Observed Generation:   1
    Reason:                StartingBalancer
    Status:                True
    Type:                  StartingBalancer
    Last Transition Time:  2022-10-26T10:36:07Z
    Message:               Successfully completed the modification process.
    Observed Generation:   1
    Reason:                Successful
    Status:                True
    Type:                  Successful
  Observed Generation:  1
  Phase:                Successful
Events:
  Type    Reason                   Age    From                         Message
  ----    ------                   ----   ----                         -------
  Normal  PauseDatabase            8m27s  KubeDB Ops-manager Operator  Pausing RabbitMQ demo/mg-sharding
  Normal  PauseDatabase            8m27s  KubeDB Ops-manager Operator  Successfully paused RabbitMQ demo/mg-sharding
  Normal  StoppingBalancer         8m27s  KubeDB Ops-manager Operator  Stopping Balancer
  Normal  StoppingBalancer         8m27s  KubeDB Ops-manager Operator  Successfully Stopped Balancer
  Normal  Updating                 8m27s  KubeDB Ops-manager Operator  Updating StatefulSets
  Normal  Updating                 7m2s   KubeDB Ops-manager Operator  Successfully Updated StatefulSets
  Normal  Updating                 7m2s   KubeDB Ops-manager Operator  Updating StatefulSets
  Normal  UpdateConfigServerImage  5m56s  KubeDB Ops-manager Operator  Successfully Updated ConfigServer Image
  Normal  Updating                 5m45s  KubeDB Ops-manager Operator  Successfully Updated StatefulSets
  Normal  UpdateShardImage         2m24s  KubeDB Ops-manager Operator  Successfully Updated Shard Image
  Normal  UpdateMongosImage        109s   KubeDB Ops-manager Operator  Successfully Updated Mongos Image
  Normal  Updating                 109s   KubeDB Ops-manager Operator  Starting Balancer
  Normal  StartingBalancer         109s   KubeDB Ops-manager Operator  Successfully Started Balancer
  Normal  ResumeDatabase           109s   KubeDB Ops-manager Operator  Resuming RabbitMQ demo/mg-sharding
  Normal  ResumeDatabase           109s   KubeDB Ops-manager Operator  Successfully resumed RabbitMQ demo/mg-sharding
  Normal  Successful               109s   KubeDB Ops-manager Operator  Successfully Updated Database
```

Now, we are going to verify whether the `RabbitMQ` and the related `StatefulSets` of `Mongos`, `Shard` and `ConfigServer` and their `Pods` have the new version image. Let's check,

```bash
$ kubectl get mg -n demo mg-sharding -o=jsonpath='{.spec.version}{"\n"}'
4.4.26

$ kubectl get sts -n demo mg-sharding-configsvr -o=jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
mongo:4.4.26

$ kubectl get sts -n demo mg-sharding-shard0 -o=jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
mongo:4.4.26

$ kubectl get sts -n demo mg-sharding-mongos -o=jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
mongo:4.4.26

$ kubectl get pods -n demo mg-sharding-configsvr-0 -o=jsonpath='{.spec.containers[0].image}{"\n"}'
mongo:4.4.26

$ kubectl get pods -n demo mg-sharding-shard0-0 -o=jsonpath='{.spec.containers[0].image}{"\n"}'
mongo:4.4.26

$ kubectl get pods -n demo mg-sharding-mongos-0 -o=jsonpath='{.spec.containers[0].image}{"\n"}'
mongo:4.4.26
```

You can see from the above that our `RabbitMQ` sharded database has been updated with the new version. So, the update process has completed successfully.

## Cleaning Up

To clean up the Kubernetes resources created by this tutorial, run:

```bash
kubectl delete mg -n demo mg-sharding
kubectl delete RabbitMQopsrequest -n demo mops-shard-update
```
\ No newline at end of file
diff --git a/docs/guides/rabbitmq/update-version/standalone.md b/docs/guides/rabbitmq/update-version/standalone.md
new file mode 100644
index 0000000000..7c1805caa9
--- /dev/null
+++ b/docs/guides/rabbitmq/update-version/standalone.md
@@ -0,0 +1,263 @@
---
title: Updating RabbitMQ Standalone
menu:
  docs_{{ .version }}:
    identifier: mg-updating-standalone
    name: Standalone
    parent: mg-updating
    weight: 20
menu_name: docs_{{ .version }}
section_menu_id: guides
---

> New to KubeDB? Please start [here](/docs/README.md).

# Update Version of RabbitMQ Standalone

This guide will show you how to use the `KubeDB` Ops-manager operator to update the version of a `RabbitMQ` standalone.

## Before You Begin

- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).

- Install the `KubeDB` Provisioner and Ops-manager operator in your cluster following the steps [here](/docs/setup/README.md).

- You should be familiar with the following `KubeDB` concepts:
  - [RabbitMQ](/docs/guides/rabbitmq/concepts/rabbitmq.md)
  - [RabbitMQOpsRequest](/docs/guides/rabbitmq/concepts/opsrequest.md)
  - [Updating Overview](/docs/guides/rabbitmq/update-version/overview.md)

To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.

```bash
$ kubectl create ns demo
namespace/demo created
```

> **Note:** YAML files used in this tutorial are stored in the [docs/examples/RabbitMQ](/docs/examples/RabbitMQ) directory of the [kubedb/docs](https://github.com/kubedb/docs) repository.

### Prepare RabbitMQ Standalone Database

Now, we are going to deploy a `RabbitMQ` standalone database with version `4.4.26`.

### Deploy RabbitMQ standalone

In this section, we are going to deploy a RabbitMQ standalone database. Then, in the next section we will update the version of the database using the `RabbitMQOpsRequest` CRD. Below is the YAML of the `RabbitMQ` CR that we are going to create,

```yaml
apiVersion: kubedb.com/v1alpha2
kind: RabbitMQ
metadata:
  name: mg-standalone
  namespace: demo
spec:
  version: "4.4.26"
  storageType: Durable
  storage:
    storageClassName: "standard"
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi
```

Let's create the `RabbitMQ` CR we have shown above,

```bash
$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/RabbitMQ/update-version/mg-standalone.yaml
RabbitMQ.kubedb.com/mg-standalone created
```

Now, wait until `mg-standalone` has status `Ready`, i.e.,

```bash
$ kubectl get mg -n demo
NAME            VERSION   STATUS   AGE
mg-standalone   4.4.26    Ready    8m58s
```

We are now ready to apply the `RabbitMQOpsRequest` CR to update this database.

### Update RabbitMQ Version

Here, we are going to update the `RabbitMQ` standalone to version `4.4.26`.

#### Create RabbitMQOpsRequest

In order to update the standalone database, we have to create a `RabbitMQOpsRequest` CR with the desired version that is supported by `KubeDB`. Below is the YAML of the `RabbitMQOpsRequest` CR that we are going to create,

```yaml
apiVersion: ops.kubedb.com/v1alpha1
kind: RabbitMQOpsRequest
metadata:
  name: mops-update
  namespace: demo
spec:
  type: UpdateVersion
  databaseRef:
    name: mg-standalone
  updateVersion:
    targetVersion: 4.4.26
  readinessCriteria:
    oplogMaxLagSeconds: 20
    objectsCountDiffPercentage: 10
  timeout: 5m
  apply: IfReady
```

Here,

- `spec.databaseRef.name` specifies that we are performing the operation on the `mg-standalone` RabbitMQ database.
- `spec.type` specifies that we are going to perform `UpdateVersion` on our database.
- `spec.updateVersion.targetVersion` specifies the expected version of the database, `4.4.26`.
- Have a look [here](/docs/guides/rabbitmq/concepts/opsrequest.md#specreadinesscriteria) at the respective sections to understand the `readinessCriteria`, `timeout` & `apply` fields.

Let's create the `RabbitMQOpsRequest` CR we have shown above,

```bash
$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/RabbitMQ/update-version/mops-update-standalone.yaml
RabbitMQopsrequest.ops.kubedb.com/mops-update created
```

#### Verify RabbitMQ version updated successfully

If everything goes well, the `KubeDB` Ops-manager operator will update the image of the `RabbitMQ` object and the related `StatefulSets` and `Pods`.

Let's wait for the `RabbitMQOpsRequest` to be `Successful`. Run the following command to watch the `RabbitMQOpsRequest` CR,

```bash
$ watch kubectl get RabbitMQopsrequest -n demo
Every 2.0s: kubectl get RabbitMQopsrequest -n demo
NAME          TYPE            STATUS       AGE
mops-update   UpdateVersion   Successful   3m45s
```

We can see from the above output that the `RabbitMQOpsRequest` has succeeded. If we describe the `RabbitMQOpsRequest` we will get an overview of the steps that were followed to update the database.

```bash
$ kubectl describe RabbitMQopsrequest -n demo mops-update
Name:         mops-update
Namespace:    demo
Labels:       <none>
Annotations:  <none>
API Version:  ops.kubedb.com/v1alpha1
Kind:         RabbitMQOpsRequest
Metadata:
  Creation Timestamp:  2022-10-26T10:06:50Z
  Generation:          1
  Managed Fields:
    API Version:  ops.kubedb.com/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubectl.kubernetes.io/last-applied-configuration:
      f:spec:
        .:
        f:apply:
        f:databaseRef:
        f:readinessCriteria:
          .:
          f:objectsCountDiffPercentage:
          f:oplogMaxLagSeconds:
        f:timeout:
        f:type:
        f:updateVersion:
          .:
          f:targetVersion:
    Manager:      kubectl-client-side-apply
    Operation:    Update
    Time:         2022-10-26T10:06:50Z
    API Version:  ops.kubedb.com/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:status:
        .:
        f:conditions:
        f:observedGeneration:
        f:phase:
    Manager:         kubedb-ops-manager
    Operation:       Update
    Subresource:     status
    Time:            2022-10-26T10:08:25Z
  Resource Version:  605817
  UID:               79faadf6-7af9-4b74-9907-febe7d543386
Spec:
  Apply:  IfReady
  Database Ref:
    Name:  mg-standalone
  Readiness Criteria:
    Objects Count Diff Percentage:  10
    Oplog Max Lag Seconds:          20
  Timeout:  5m
  Type:     UpdateVersion
  UpdateVersion:
    Target Version:  4.4.26
Status:
  Conditions:
    Last Transition Time:  2022-10-26T10:07:10Z
    Message:               RabbitMQ ops request is update-version database version
    Observed Generation:   1
    Reason:                UpdateVersion
    Status:                True
    Type:                  UpdateVersion
    Last Transition Time:  2022-10-26T10:07:30Z
    Message:               Successfully updated statefulsets update strategy type
    Observed Generation:   1
    Reason:                UpdateStatefulSets
    Status:                True
    Type:                  UpdateStatefulSets
    Last Transition Time:  2022-10-26T10:08:25Z
    Message:               Successfully Updated Standalone Image
    Observed Generation:   1
    Reason:                UpdateStandaloneImage
    Status:                True
    Type:                  UpdateStandaloneImage
    Last Transition Time:  2022-10-26T10:08:25Z
    Message:               Successfully completed the modification process.
    Observed Generation:   1
    Reason:                Successful
    Status:                True
    Type:                  Successful
  Observed Generation:  1
  Phase:                Successful
Events:
  Type    Reason                 Age    From                         Message
  ----    ------                 ----   ----                         -------
  Normal  PauseDatabase          2m5s   KubeDB Ops-manager Operator  Pausing RabbitMQ demo/mg-standalone
  Normal  PauseDatabase          2m5s   KubeDB Ops-manager Operator  Successfully paused RabbitMQ demo/mg-standalone
  Normal  Updating               2m5s   KubeDB Ops-manager Operator  Updating StatefulSets
  Normal  Updating               105s   KubeDB Ops-manager Operator  Successfully Updated StatefulSets
  Normal  UpdateStandaloneImage  50s    KubeDB Ops-manager Operator  Successfully Updated Standalone Image
  Normal  ResumeDatabase         50s    KubeDB Ops-manager Operator  Resuming RabbitMQ demo/mg-standalone
  Normal  ResumeDatabase         50s    KubeDB Ops-manager Operator  Successfully resumed RabbitMQ demo/mg-standalone
  Normal  Successful             50s    KubeDB Ops-manager Operator  Successfully Updated Database
```

Now, we are going to verify whether the `RabbitMQ` and the related `StatefulSets` and their `Pods` have the new version image. Let's check,

```bash
$ kubectl get mg -n demo mg-standalone -o=jsonpath='{.spec.version}{"\n"}'
4.4.26

$ kubectl get sts -n demo mg-standalone -o=jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
mongo:4.4.26

$ kubectl get pods -n demo mg-standalone-0 -o=jsonpath='{.spec.containers[0].image}{"\n"}'
mongo:4.4.26
```

You can see from the above that our `RabbitMQ` standalone database has been updated with the new version. So, the update process has completed successfully.

## Cleaning Up

To clean up the Kubernetes resources created by this tutorial, run:

```bash
kubectl delete mg -n demo mg-standalone
kubectl delete RabbitMQopsrequest -n demo mops-update
```
\ No newline at end of file
diff --git a/docs/guides/rabbitmq/volume-expansion/_index.md b/docs/guides/rabbitmq/volume-expansion/_index.md
new file mode 100644
index 0000000000..e4cce90e11
--- /dev/null
+++ b/docs/guides/rabbitmq/volume-expansion/_index.md
@@ -0,0 +1,10 @@
---
title: Volume Expansion
menu:
  docs_{{ .version }}:
    identifier: mg-volume-expansion
    name: Volume Expansion
    parent: rm-guides
    weight: 44
menu_name: docs_{{ .version }}
---
\ No newline at end of file
diff --git a/docs/guides/rabbitmq/volume-expansion/overview.md b/docs/guides/rabbitmq/volume-expansion/overview.md
new file mode 100644
index 0000000000..3bb7c5c43c
--- /dev/null
+++ b/docs/guides/rabbitmq/volume-expansion/overview.md
@@ -0,0 +1,56 @@
---
title: RabbitMQ Volume Expansion Overview
menu:
  docs_{{ .version }}:
    identifier: mg-volume-expansion-overview
    name: Overview
    parent: mg-volume-expansion
    weight: 10
menu_name: docs_{{ .version }}
section_menu_id: guides
---

> New to KubeDB? Please start [here](/docs/README.md).

# RabbitMQ Volume Expansion

This guide will give an overview of how the KubeDB Ops-manager operator expands the volumes of the various components of `RabbitMQ`, such as ReplicaSet, Shard, ConfigServer, Mongos, etc.

## Before You Begin

- You should be familiar with the following `KubeDB` concepts:
  - [RabbitMQ](/docs/guides/rabbitmq/concepts/rabbitmq.md)
  - [RabbitMQOpsRequest](/docs/guides/rabbitmq/concepts/opsrequest.md)

## How the Volume Expansion Process Works

The following diagram shows how the KubeDB Ops-manager operator expands the volumes of `RabbitMQ` database components. Open the image in a new tab to see the enlarged version.

Fig: Volume Expansion process of RabbitMQ

The Volume Expansion process consists of the following steps:

1. At first, a user creates a `RabbitMQ` Custom Resource (CR).

2. The `KubeDB` Provisioner operator watches the `RabbitMQ` CR.

3. When the operator finds a `RabbitMQ` CR, it creates the required number of `StatefulSets` and other necessary resources such as secrets, services, etc.

4. Each StatefulSet creates a Persistent Volume according to the Volume Claim Template provided in the statefulset configuration. This Persistent Volume will be expanded by the `KubeDB` Ops-manager operator.

5. Then, in order to expand the volume of the various components (i.e., ReplicaSet, Shard, ConfigServer, Mongos, etc.) of the `RabbitMQ` database, the user creates a `RabbitMQOpsRequest` CR with the desired information.

6. The `KubeDB` Ops-manager operator watches the `RabbitMQOpsRequest` CR.

7. When it finds a `RabbitMQOpsRequest` CR, it halts the `RabbitMQ` object which is referred from the `RabbitMQOpsRequest`. So, the `KubeDB` Provisioner operator doesn't perform any operations on the `RabbitMQ` object during the volume expansion process.

8. Then the `KubeDB` Ops-manager operator will expand the persistent volume to reach the expected size defined in the `RabbitMQOpsRequest` CR.

9. After the successful Volume Expansion of the related StatefulSet Pods, the `KubeDB` Ops-manager operator updates the new volume size in the `RabbitMQ` object to reflect the updated state.

10. After the successful Volume Expansion of the `RabbitMQ` components, the `KubeDB` Ops-manager operator resumes the `RabbitMQ` object so that the `KubeDB` Provisioner operator resumes its usual operations.

In the next docs, we are going to show a step-by-step guide on the Volume Expansion of various RabbitMQ database components using the `RabbitMQOpsRequest` CRD.
diff --git a/docs/guides/rabbitmq/volume-expansion/replicaset.md b/docs/guides/rabbitmq/volume-expansion/replicaset.md
new file mode 100644
index 0000000000..64cf723751
--- /dev/null
+++ b/docs/guides/rabbitmq/volume-expansion/replicaset.md
@@ -0,0 +1,247 @@
---
title: RabbitMQ Replicaset Volume Expansion
menu:
  docs_{{ .version }}:
    identifier: mg-volume-expansion-replicaset
    name: Replicaset
    parent: mg-volume-expansion
    weight: 30
menu_name: docs_{{ .version }}
section_menu_id: guides
---

> New to KubeDB? Please start [here](/docs/README.md).

# RabbitMQ Replicaset Volume Expansion

This guide will show you how to use the `KubeDB` Ops-manager operator to expand the volume of a RabbitMQ Replicaset database.

## Before You Begin

- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster.

- You must have a `StorageClass` that supports volume expansion.

- Install the `KubeDB` Provisioner and Ops-manager operator in your cluster following the steps [here](/docs/setup/README.md).

- You should be familiar with the following `KubeDB` concepts:
  - [RabbitMQ](/docs/guides/rabbitmq/concepts/rabbitmq.md)
  - [Replicaset](/docs/guides/rabbitmq/clustering/replicaset.md)
  - [RabbitMQOpsRequest](/docs/guides/rabbitmq/concepts/opsrequest.md)
  - [Volume Expansion Overview](/docs/guides/rabbitmq/volume-expansion/overview.md)

To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.

```bash
$ kubectl create ns demo
namespace/demo created
```

> Note: The yaml files used in this tutorial are stored in the [docs/examples/RabbitMQ](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/RabbitMQ) folder of the GitHub repository [kubedb/docs](https://github.com/kubedb/docs).

## Expand Volume of Replicaset

Here, we are going to deploy a `RabbitMQ` replicaset using a version supported by the `KubeDB` operator. Then we are going to apply a `RabbitMQOpsRequest` to expand its volume.

### Prepare RabbitMQ Replicaset Database

At first, verify that your cluster has a storage class that supports volume expansion. Let's check,

```bash
$ kubectl get storageclass
NAME                 PROVISIONER            RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
standard (default)   kubernetes.io/gce-pd   Delete          Immediate           true                   2m49s
```

We can see from the output that the `standard` storage class has the `ALLOWVOLUMEEXPANSION` field set to true, so it supports volume expansion. We can use it.

Now, we are going to deploy a `RabbitMQ` replicaset database with version `4.4.26`.

### Deploy RabbitMQ

In this section, we are going to deploy a RabbitMQ Replicaset database with a 1GB volume. Then, in the next section we will expand its volume to 2GB using the `RabbitMQOpsRequest` CRD. Below is the YAML of the `RabbitMQ` CR that we are going to create,

```yaml
apiVersion: kubedb.com/v1alpha2
kind: RabbitMQ
metadata:
  name: mg-replicaset
  namespace: demo
spec:
  version: "4.4.26"
  replicaSet:
    name: "replicaset"
  replicas: 3
  storageType: Durable
  storage:
    storageClassName: "standard"
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi
```

Let's create the `RabbitMQ` CR we have shown above,

```bash
$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/RabbitMQ/volume-expansion/mg-replicaset.yaml
RabbitMQ.kubedb.com/mg-replicaset created
```

Now, wait until `mg-replicaset` has status `Ready`, i.e.,

```bash
$ kubectl get mg -n demo
NAME            VERSION   STATUS   AGE
mg-replicaset   4.4.26    Ready    10m
```

Let's check the volume size from the statefulset and from the persistent volumes,

```bash
$ kubectl get sts -n demo mg-replicaset -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
"1Gi"

$ kubectl get pv -n demo
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                          STORAGECLASS   REASON   AGE
pvc-2067c63d-f982-4b66-a008-5e9c3ff6218a   1Gi        RWO            Delete           Bound    demo/datadir-mg-replicaset-0   standard                10m
pvc-9db1aeb0-f1af-4555-93a3-0ca754327751   1Gi        RWO            Delete           Bound    demo/datadir-mg-replicaset-2   standard                9m45s
pvc-d38f42a8-50d4-4fa9-82ba-69fc7a464ff4   1Gi        RWO            Delete           Bound    demo/datadir-mg-replicaset-1   standard                10m
```

You can see the statefulset has 1GB storage, and the capacity of all the persistent volumes is also 1GB.

We are now ready to apply the `RabbitMQOpsRequest` CR to expand the volume of this database.

### Volume Expansion

Here, we are going to expand the volume of the replicaset database.

#### Create RabbitMQOpsRequest

In order to expand the volume of the database, we have to create a `RabbitMQOpsRequest` CR with our desired volume size.
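While the request shown in the next step runs, you can optionally watch the claims being resized in place; a minimal sketch:

```bash
# Watch the replicaset's PVCs grow from 1Gi to 2Gi (Ctrl+C to stop).
kubectl get pvc -n demo -w
```

This works because the request uses `mode: Online`, so the volumes grow without the pods being restarted.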
Below is the YAML of the `RabbitMQOpsRequest` CR that we are going to create,

```yaml
apiVersion: ops.kubedb.com/v1alpha1
kind: RabbitMQOpsRequest
metadata:
  name: mops-volume-exp-replicaset
  namespace: demo
spec:
  type: VolumeExpansion
  databaseRef:
    name: mg-replicaset
  volumeExpansion:
    replicaSet: 2Gi
    mode: Online
```

Here,

- `spec.databaseRef.name` specifies that we are performing the volume expansion operation on the `mg-replicaset` database.
- `spec.type` specifies that we are performing `VolumeExpansion` on our database.
- `spec.volumeExpansion.replicaSet` specifies the desired volume size.
- `spec.volumeExpansion.mode` specifies the expansion mode; `Online` expands the volumes without restarting the pods.

Let's create the `RabbitMQOpsRequest` CR we have shown above,

```bash
$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/RabbitMQ/volume-expansion/mops-volume-exp-replicaset.yaml
RabbitMQopsrequest.ops.kubedb.com/mops-volume-exp-replicaset created
```

#### Verify RabbitMQ replicaset volume expanded successfully

If everything goes well, the `KubeDB` Ops-manager operator will update the volume size of the `RabbitMQ` object and the related `StatefulSets` and `Persistent Volumes`.

Let's wait for the `RabbitMQOpsRequest` to be `Successful`. Run the following command to watch the `RabbitMQOpsRequest` CR,

```bash
$ kubectl get RabbitMQopsrequest -n demo
NAME                         TYPE              STATUS       AGE
mops-volume-exp-replicaset   VolumeExpansion   Successful   83s
```

We can see from the above output that the `RabbitMQOpsRequest` has succeeded. If we describe the `RabbitMQOpsRequest` we will get an overview of the steps that were followed to expand the volume of the database.

```bash
$ kubectl describe RabbitMQopsrequest -n demo mops-volume-exp-replicaset
Name:         mops-volume-exp-replicaset
Namespace:    demo
Labels:       <none>
Annotations:  <none>
API Version:  ops.kubedb.com/v1alpha1
Kind:         RabbitMQOpsRequest
Metadata:
  Creation Timestamp:  2020-08-25T18:21:18Z
  Finalizers:
    kubedb.com
  Generation:        1
  Resource Version:  84084
  Self Link:         /apis/ops.kubedb.com/v1alpha1/namespaces/demo/RabbitMQopsrequests/mops-volume-exp-replicaset
  UID:               2cec0cd3-4abe-4114-813c-1326f28563cb
Spec:
  Database Ref:
    Name:  mg-replicaset
  Type:    VolumeExpansion
  Volume Expansion:
    ReplicaSet:  2Gi
Status:
  Conditions:
    Last Transition Time:  2020-08-25T18:21:18Z
    Message:               RabbitMQ ops request is being processed
    Observed Generation:   1
    Reason:                Scaling
    Status:                True
    Type:                  Scaling
    Last Transition Time:  2020-08-25T18:22:38Z
    Message:               Successfully updated Storage
    Observed Generation:   1
    Reason:                VolumeExpansion
    Status:                True
    Type:                  VolumeExpansion
    Last Transition Time:  2020-08-25T18:22:38Z
    Message:               Successfully Resumed RabbitMQ: mg-replicaset
    Observed Generation:   1
    Reason:                ResumeDatabase
    Status:                True
    Type:                  ResumeDatabase
    Last Transition Time:  2020-08-25T18:22:38Z
    Message:               Successfully completed the modification process
    Observed Generation:   1
    Reason:                Successful
    Status:                True
    Type:                  Successful
  Observed Generation:  1
  Phase:                Successful
Events:
  Type    Reason           Age    From                         Message
  ----    ------           ----   ----                         -------
  Normal  VolumeExpansion  3m11s  KubeDB Ops-manager operator  Successfully Updated Storage
  Normal  ResumeDatabase   3m11s  KubeDB Ops-manager operator  Resuming RabbitMQ
  Normal  ResumeDatabase   3m11s  KubeDB Ops-manager operator  Successfully Resumed RabbitMQ
  Normal  Successful       3m11s  KubeDB Ops-manager operator  Successfully Scaled Database
```

Now, we are going to verify from the `Statefulset` and the `Persistent Volumes` whether the volume of the
database has expanded to meet the desired state. Let's check,

```bash
$ kubectl get sts -n demo mg-replicaset -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
"2Gi"

$ kubectl get pv -n demo
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                          STORAGECLASS   REASON   AGE
pvc-2067c63d-f982-4b66-a008-5e9c3ff6218a   2Gi        RWO            Delete           Bound    demo/datadir-mg-replicaset-0   standard                19m
pvc-9db1aeb0-f1af-4555-93a3-0ca754327751   2Gi        RWO            Delete           Bound    demo/datadir-mg-replicaset-2   standard                18m
pvc-d38f42a8-50d4-4fa9-82ba-69fc7a464ff4   2Gi        RWO            Delete           Bound    demo/datadir-mg-replicaset-1   standard                19m
```

The above output verifies that we have successfully expanded the volume of the RabbitMQ database.

## Cleaning Up

To clean up the Kubernetes resources created by this tutorial, run:

```bash
kubectl delete mg -n demo mg-replicaset
kubectl delete RabbitMQopsrequest -n demo mops-volume-exp-replicaset
```
\ No newline at end of file
diff --git a/docs/guides/rabbitmq/volume-expansion/sharding.md b/docs/guides/rabbitmq/volume-expansion/sharding.md
new file mode 100644
index 0000000000..934c6f6c28
--- /dev/null
+++ b/docs/guides/rabbitmq/volume-expansion/sharding.md
@@ -0,0 +1,280 @@
---
title: RabbitMQ Sharded Database Volume Expansion
menu:
  docs_{{ .version }}:
    identifier: mg-volume-expansion-shard
    name: Sharding
    parent: mg-volume-expansion
    weight: 40
menu_name: docs_{{ .version }}
section_menu_id: guides
---

> New to KubeDB? Please start [here](/docs/README.md).

# RabbitMQ Sharded Database Volume Expansion

This guide will show you how to use the `KubeDB` Ops-manager operator to expand the volume of a RabbitMQ Sharded Database.

## Before You Begin

- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster.

- You must have a `StorageClass` that supports volume expansion.

- Install the `KubeDB` Provisioner and Ops-manager operator in your cluster following the steps [here](/docs/setup/README.md).

- You should be familiar with the following `KubeDB` concepts:
  - [RabbitMQ](/docs/guides/rabbitmq/concepts/rabbitmq.md)
  - [Sharding](/docs/guides/rabbitmq/clustering/sharding.md)
  - [RabbitMQOpsRequest](/docs/guides/rabbitmq/concepts/opsrequest.md)
  - [Volume Expansion Overview](/docs/guides/rabbitmq/volume-expansion/overview.md)

To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.

```bash
$ kubectl create ns demo
namespace/demo created
```

> Note: The yaml files used in this tutorial are stored in the [docs/examples/RabbitMQ](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/RabbitMQ) folder of the GitHub repository [kubedb/docs](https://github.com/kubedb/docs).

## Expand Volume of Sharded Database

Here, we are going to deploy a `RabbitMQ` Sharded Database using a version supported by the `KubeDB` operator. Then we are going to apply a `RabbitMQOpsRequest` to expand the volumes of the shard nodes and config servers.

### Prepare RabbitMQ Sharded Database

At first, verify that your cluster has a storage class that supports volume expansion. Let's check,

```bash
$ kubectl get storageclass
NAME                 PROVISIONER            RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
standard (default)   kubernetes.io/gce-pd   Delete          Immediate           true                   2m49s
```

We can see from the output that the `standard` storage class has the `ALLOWVOLUMEEXPANSION` field set to true.
So, this storage class supports volume expansion, and we can use it.

Now, we are going to deploy a `RabbitMQ` sharded database with version `4.4.26`.

### Deploy RabbitMQ

In this section, we are going to deploy a RabbitMQ Sharded database with a 1GB volume for each of the shard nodes and config servers. Then, in the next sections we will expand the volumes of the shard nodes and config servers to 2GB using the `RabbitMQOpsRequest` CRD. Below is the YAML of the `RabbitMQ` CR that we are going to create,

```yaml
apiVersion: kubedb.com/v1alpha2
kind: RabbitMQ
metadata:
  name: mg-sharding
  namespace: demo
spec:
  version: 4.4.26
  shardTopology:
    configServer:
      replicas: 2
      storage:
        resources:
          requests:
            storage: 1Gi
        storageClassName: standard
    mongos:
      replicas: 2
    shard:
      replicas: 2
      shards: 3
      storage:
        resources:
          requests:
            storage: 1Gi
        storageClassName: standard
```

Let's create the `RabbitMQ` CR we have shown above,

```bash
$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/RabbitMQ/volume-expansion/mg-shard.yaml
RabbitMQ.kubedb.com/mg-sharding created
```

Now, wait until `mg-sharding` has status `Ready`, i.e.,

```bash
$ kubectl get mg -n demo
NAME          VERSION   STATUS   AGE
mg-sharding   4.4.26    Ready    2m45s
```

Let's check the volume sizes from the statefulsets and from the persistent volumes of the shards and config servers,

```bash
$ kubectl get sts -n demo mg-sharding-configsvr -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
"1Gi"

$ kubectl get sts -n demo mg-sharding-shard0 -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
"1Gi"

$ kubectl get pv -n demo
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                  STORAGECLASS   REASON   AGE
pvc-194f6e9c-b9a7-4d00-a125-a6c01273468c   1Gi        RWO            Delete           Bound    demo/datadir-mg-sharding-shard0-0      standard                68s
pvc-390b6343-f97e-4761-a516-e3c9607c55d6   1Gi        RWO            Delete           Bound    demo/datadir-mg-sharding-shard1-1      standard                2m26s
pvc-51ab98e8-d468-4a74-b176-3853dada41c2   1Gi        RWO            Delete           Bound    demo/datadir-mg-sharding-configsvr-1   standard                2m33s
pvc-5209095e-561f-4601-a0bf-0c705234da5b   1Gi        RWO            Delete           Bound    demo/datadir-mg-sharding-shard1-0      standard                3m6s
pvc-5be2ab13-e12c-4053-8680-7c5588dff8eb   1Gi        RWO            Delete           Bound    demo/datadir-mg-sharding-shard2-1      standard                2m32s
pvc-7e11502d-13e0-4a84-9ebe-29bc2b15f026   1Gi        RWO            Delete           Bound    demo/datadir-mg-sharding-shard0-1      standard                44s
pvc-7e20906c-462d-47b7-b4cf-ba0ef69ba26e   1Gi        RWO            Delete           Bound    demo/datadir-mg-sharding-shard2-0      standard                3m7s
pvc-87634059-0f95-4595-ae8a-121944961103   1Gi        RWO            Delete           Bound    demo/datadir-mg-sharding-configsvr-0   standard                3m7s
```

You can see the statefulsets have 1GB storage, and the capacity of all the persistent volumes is also 1GB.

We are now ready to apply the `RabbitMQOpsRequest` CR to expand the volume of this database.

### Volume Expansion of Shard and ConfigServer Nodes

Here, we are going to expand the volume of the shard and configServer nodes of the database.

#### Create RabbitMQOpsRequest

In order to expand the volumes of the shard and configServer nodes of the database, we have to create a `RabbitMQOpsRequest` CR with our desired volume size.
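Note that the `mongos` layer is stateless and owns no persistent volume claim (the PVC list above shows claims only for shards and config servers), which is presumably why the request below targets only `shard` and `configServer`. You can re-check which components own claims before sizing the request; a sketch, assuming KubeDB's standard instance label:

```bash
# Expect PVCs only for shard and config server pods
# (label selector is an assumption; adjust if your installation differs).
kubectl get pvc -n demo -l 'app.kubernetes.io/instance=mg-sharding'
```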
Let's check the volume size from the statefulsets, and from the persistent volumes of the shards and config servers:

```bash
$ kubectl get sts -n demo mg-sharding-configsvr -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
"1Gi"

$ kubectl get sts -n demo mg-sharding-shard0 -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
"1Gi"

$ kubectl get pv -n demo
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                  STORAGECLASS   REASON   AGE
pvc-194f6e9c-b9a7-4d00-a125-a6c01273468c   1Gi        RWO            Delete           Bound    demo/datadir-mg-sharding-shard0-0      standard                68s
pvc-390b6343-f97e-4761-a516-e3c9607c55d6   1Gi        RWO            Delete           Bound    demo/datadir-mg-sharding-shard1-1      standard                2m26s
pvc-51ab98e8-d468-4a74-b176-3853dada41c2   1Gi        RWO            Delete           Bound    demo/datadir-mg-sharding-configsvr-1   standard                2m33s
pvc-5209095e-561f-4601-a0bf-0c705234da5b   1Gi        RWO            Delete           Bound    demo/datadir-mg-sharding-shard1-0      standard                3m6s
pvc-5be2ab13-e12c-4053-8680-7c5588dff8eb   1Gi        RWO            Delete           Bound    demo/datadir-mg-sharding-shard2-1      standard                2m32s
pvc-7e11502d-13e0-4a84-9ebe-29bc2b15f026   1Gi        RWO            Delete           Bound    demo/datadir-mg-sharding-shard0-1      standard                44s
pvc-7e20906c-462d-47b7-b4cf-ba0ef69ba26e   1Gi        RWO            Delete           Bound    demo/datadir-mg-sharding-shard2-0      standard                3m7s
pvc-87634059-0f95-4595-ae8a-121944961103   1Gi        RWO            Delete           Bound    demo/datadir-mg-sharding-configsvr-0   standard                3m7s
```

You can see the statefulsets have 1GB storage, and the capacity of all the persistent volumes is also 1GB.

We are now ready to apply the `RabbitMQOpsRequest` CR to expand the volume of this database.

### Volume Expansion of Shard and ConfigServer Nodes

Here, we are going to expand the volume of the shard and configServer nodes of the database.

#### Create RabbitMQOpsRequest

In order to expand the volume of the shard and configServer nodes of the database, we have to create a `RabbitMQOpsRequest` CR with our desired volume size. Below is the YAML of the `RabbitMQOpsRequest` CR that we are going to create:

```yaml
apiVersion: ops.kubedb.com/v1alpha1
kind: RabbitMQOpsRequest
metadata:
  name: mops-volume-exp-shard
  namespace: demo
spec:
  type: VolumeExpansion
  databaseRef:
    name: mg-sharding
  volumeExpansion:
    mode: "Online"
    shard: 2Gi
    configServer: 2Gi
```

Here,

- `spec.databaseRef.name` specifies that we are performing the volume expansion operation on the `mg-sharding` database.
- `spec.type` specifies that we are performing `VolumeExpansion` on our database.
- `spec.volumeExpansion.shard` specifies the desired volume size of the shard nodes.
- `spec.volumeExpansion.configServer` specifies the desired volume size of the configServer nodes.

> **Note:** If you don't want to expand the volume of all the components together, you can specify only the components (shard and configServer) that you want to expand.

Let's create the `RabbitMQOpsRequest` CR we have shown above:

```bash
$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/RabbitMQ/volume-expansion/mops-volume-exp-shard.yaml
rabbitmqopsrequest.ops.kubedb.com/mops-volume-exp-shard created
```

#### Verify RabbitMQ shard volumes expanded successfully

If everything goes well, the `KubeDB` Ops-manager operator will update the volume size of the `RabbitMQ` object and the related `StatefulSets` and `Persistent Volumes`.

Let's wait for the `RabbitMQOpsRequest` to be `Successful`. Run the following command to watch the `RabbitMQOpsRequest` CR:

```bash
$ kubectl get rabbitmqopsrequest -n demo
NAME                    TYPE              STATUS       AGE
mops-volume-exp-shard   VolumeExpansion   Successful   3m49s
```

We can see from the above output that the `RabbitMQOpsRequest` has succeeded. If we describe the `RabbitMQOpsRequest`, we will get an overview of the steps that were followed to expand the volume of the database:

```bash
$ kubectl describe rabbitmqopsrequest -n demo mops-volume-exp-shard
Name:         mops-volume-exp-shard
Namespace:    demo
Labels:       <none>
Annotations:  <none>
API Version:  ops.kubedb.com/v1alpha1
Kind:         RabbitMQOpsRequest
Metadata:
  Creation Timestamp:  2020-09-30T04:24:37Z
  Generation:          1
  Resource Version:    140791
  Self Link:           /apis/ops.kubedb.com/v1alpha1/namespaces/demo/rabbitmqopsrequests/mops-volume-exp-shard
  UID:                 fc23a0a2-3a48-4b76-95c5-121f3d56df78
Spec:
  Database Ref:
    Name:  mg-sharding
  Type:    VolumeExpansion
  Volume Expansion:
    Config Server:  2Gi
    Shard:          2Gi
Status:
  Conditions:
    Last Transition Time:  2020-09-30T04:25:48Z
    Message:               RabbitMQ ops request is expanding volume of database
    Observed Generation:   1
    Reason:                VolumeExpansion
    Status:                True
    Type:                  VolumeExpansion
    Last Transition Time:  2020-09-30T04:26:58Z
    Message:               Successfully Expanded Volume
    Observed Generation:   1
    Reason:                ConfigServerVolumeExpansion
    Status:                True
    Type:                  ConfigServerVolumeExpansion
    Last Transition Time:  2020-09-30T04:29:28Z
    Message:               Successfully Expanded Volume
    Observed Generation:   1
    Reason:                ShardVolumeExpansion
    Status:                True
    Type:                  ShardVolumeExpansion
    Last Transition Time:  2020-09-30T04:29:33Z
    Message:               Successfully Resumed RabbitMQ: mg-sharding
    Observed Generation:   1
    Reason:                ResumeDatabase
    Status:                True
    Type:                  ResumeDatabase
    Last Transition Time:  2020-09-30T04:29:33Z
    Message:               Successfully Expanded Volume
    Observed Generation:   1
    Reason:                Successful
    Status:                True
    Type:                  Successful
  Observed Generation:     1
  Phase:                   Successful
Events:
  Type    Reason                       Age    From                         Message
  ----    ------                       ----   ----                         -------
  Normal  ConfigServerVolumeExpansion  3m25s  KubeDB Ops-manager operator  Successfully Expanded Volume
  Normal  ShardVolumeExpansion         55s    KubeDB Ops-manager operator  Successfully Expanded Volume
  Normal  ResumeDatabase               50s    KubeDB Ops-manager operator  Resuming RabbitMQ
  Normal  ResumeDatabase               50s    KubeDB Ops-manager operator  Successfully Resumed RabbitMQ
  Normal  Successful                   50s    KubeDB Ops-manager operator  Successfully Expanded Volume
```

Now, we are going to verify from the `Statefulset`, and the `Persistent Volumes` whether the volume of the database has expanded to meet the desired state. Let's check:

```bash
$ kubectl get sts -n demo mg-sharding-configsvr -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
"2Gi"

$ kubectl get sts -n demo mg-sharding-shard0 -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
"2Gi"

$ kubectl get pv -n demo
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                  STORAGECLASS   REASON   AGE
pvc-194f6e9c-b9a7-4d00-a125-a6c01273468c   2Gi        RWO            Delete           Bound    demo/datadir-mg-sharding-shard0-0      standard                3m38s
pvc-390b6343-f97e-4761-a516-e3c9607c55d6   2Gi        RWO            Delete           Bound    demo/datadir-mg-sharding-shard1-1      standard                4m56s
pvc-51ab98e8-d468-4a74-b176-3853dada41c2   2Gi        RWO            Delete           Bound    demo/datadir-mg-sharding-configsvr-1   standard                5m3s
pvc-5209095e-561f-4601-a0bf-0c705234da5b   2Gi        RWO            Delete           Bound    demo/datadir-mg-sharding-shard1-0      standard                5m36s
pvc-5be2ab13-e12c-4053-8680-7c5588dff8eb   2Gi        RWO            Delete           Bound    demo/datadir-mg-sharding-shard2-1      standard                5m2s
pvc-7e11502d-13e0-4a84-9ebe-29bc2b15f026   2Gi        RWO            Delete           Bound    demo/datadir-mg-sharding-shard0-1      standard                3m14s
pvc-7e20906c-462d-47b7-b4cf-ba0ef69ba26e   2Gi        RWO            Delete           Bound    demo/datadir-mg-sharding-shard2-0      standard                5m37s
pvc-87634059-0f95-4595-ae8a-121944961103   2Gi        RWO            Delete           Bound    demo/datadir-mg-sharding-configsvr-0   standard                5m37s
```

The above output verifies that we have successfully expanded the volume of the shard nodes and configServer nodes of the RabbitMQ database.
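A statefulset's `volumeClaimTemplates` only shows what was requested, while a PVC's `.status.capacity` shows what the storage backend actually provisioned; comparing the two across all claims is a quick sanity check. A minimal sketch using kubectl's built-in custom-columns output:

```bash
# Compare requested vs. actually provisioned storage for every PVC in demo.
kubectl get pvc -n demo -o custom-columns='NAME:.metadata.name,REQUESTED:.spec.resources.requests.storage,ACTUAL:.status.capacity.storage'
```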
## Cleaning Up

To clean up the Kubernetes resources created by this tutorial, run:

```bash
kubectl delete mg -n demo mg-sharding
kubectl delete rabbitmqopsrequest -n demo mops-volume-exp-shard
```
diff --git a/docs/guides/rabbitmq/volume-expansion/standalone.md b/docs/guides/rabbitmq/volume-expansion/standalone.md
new file mode 100644
index 0000000000..44e0be640d
--- /dev/null
+++ b/docs/guides/rabbitmq/volume-expansion/standalone.md
@@ -0,0 +1,242 @@
---
title: RabbitMQ Standalone Volume Expansion
menu:
  docs_{{ .version }}:
    identifier: mg-volume-expansion-standalone
    name: Standalone
    parent: mg-volume-expansion
    weight: 20
menu_name: docs_{{ .version }}
section_menu_id: guides
---

> New to KubeDB? Please start [here](/docs/README.md).

# RabbitMQ Standalone Volume Expansion

This guide will show you how to use the `KubeDB` Ops-manager operator to expand the volume of a RabbitMQ standalone database.

## Before You Begin

- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster.

- You must have a `StorageClass` that supports volume expansion.

- Install `KubeDB` Provisioner and Ops-manager operator in your cluster following the steps [here](/docs/setup/README.md).

- You should be familiar with the following `KubeDB` concepts:
  - [RabbitMQ](/docs/guides/rabbitmq/concepts/rabbitmq.md)
  - [RabbitMQOpsRequest](/docs/guides/rabbitmq/concepts/opsrequest.md)
  - [Volume Expansion Overview](/docs/guides/rabbitmq/volume-expansion/overview.md)

To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.

```bash
$ kubectl create ns demo
namespace/demo created
```

> Note: The yaml files used in this tutorial are stored in the [docs/examples/RabbitMQ](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/RabbitMQ) folder of the GitHub repository [kubedb/docs](https://github.com/kubedb/docs).

## Expand Volume of Standalone Database

Here, we are going to deploy a `RabbitMQ` standalone database using a version supported by the `KubeDB` operator. Then we are going to apply a `RabbitMQOpsRequest` to expand its volume.

### Prepare RabbitMQ Standalone Database

First, verify that your cluster has a storage class that supports volume expansion:

```bash
$ kubectl get storageclass
NAME                 PROVISIONER            RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
standard (default)   kubernetes.io/gce-pd   Delete          Immediate           true                   2m49s
```

We can see from the output that the `standard` storage class has its `ALLOWVOLUMEEXPANSION` field set to true, so it supports volume expansion and we can use it.

Now, we are going to deploy a `RabbitMQ` standalone database with version `4.4.26`.

#### Deploy RabbitMQ standalone

In this section, we are going to deploy a RabbitMQ standalone database with a 1GB volume. Then, in the next section, we will expand its volume to 2GB using a `RabbitMQOpsRequest` CRD.
Below is the YAML of the `RabbitMQ` CR that we are going to create:

```yaml
apiVersion: kubedb.com/v1alpha2
kind: RabbitMQ
metadata:
  name: mg-standalone
  namespace: demo
spec:
  version: "4.4.26"
  storageType: Durable
  storage:
    storageClassName: "standard"
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi
```

Let's create the `RabbitMQ` CR we have shown above:

```bash
$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/RabbitMQ/volume-expansion/mg-standalone.yaml
rabbitmq.kubedb.com/mg-standalone created
```

Now, wait until `mg-standalone` has status `Ready`, i.e.:

```bash
$ kubectl get mg -n demo
NAME            VERSION   STATUS   AGE
mg-standalone   4.4.26    Ready    2m53s
```

Let's check the volume size from the statefulset, and from the persistent volume:

```bash
$ kubectl get sts -n demo mg-standalone -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
"1Gi"

$ kubectl get pv -n demo
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                          STORAGECLASS   REASON   AGE
pvc-d0b07657-a012-4384-862a-b4e437774287   1Gi        RWO            Delete           Bound    demo/datadir-mg-standalone-0   standard                49s
```

You can see the statefulset has 1GB storage, and the capacity of the persistent volume is also 1GB.

We are now ready to apply the `RabbitMQOpsRequest` CR to expand the volume of this database.

### Volume Expansion

Here, we are going to expand the volume of the standalone database.

#### Create RabbitMQOpsRequest

In order to expand the volume of the database, we have to create a `RabbitMQOpsRequest` CR with our desired volume size. Below is the YAML of the `RabbitMQOpsRequest` CR that we are going to create:

```yaml
apiVersion: ops.kubedb.com/v1alpha1
kind: RabbitMQOpsRequest
metadata:
  name: mops-volume-exp-standalone
  namespace: demo
spec:
  type: VolumeExpansion
  databaseRef:
    name: mg-standalone
  volumeExpansion:
    standalone: 2Gi
    mode: Online
```

Here,

- `spec.databaseRef.name` specifies that we are performing the volume expansion operation on the `mg-standalone` database.
- `spec.type` specifies that we are performing `VolumeExpansion` on our database.
- `spec.volumeExpansion.standalone` specifies the desired volume size.
- `spec.volumeExpansion.mode` specifies the desired volume expansion mode (`Online` or `Offline`).

During an `Online` volume expansion, KubeDB expands the volume without pausing the database object; it directly updates the underlying PVC. During an `Offline` volume expansion, the database is paused, the Pods are deleted, the PVC is updated, and then the database Pods are recreated with the updated PVC.

Let's create the `RabbitMQOpsRequest` CR we have shown above:

```bash
$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/RabbitMQ/volume-expansion/mops-volume-exp-standalone.yaml
rabbitmqopsrequest.ops.kubedb.com/mops-volume-exp-standalone created
```

#### Verify RabbitMQ Standalone volume expanded successfully

If everything goes well, the `KubeDB` Ops-manager operator will update the volume size of the `RabbitMQ` object and the related `StatefulSets` and `Persistent Volume`.

Let's wait for the `RabbitMQOpsRequest` to be `Successful`. Run the following command to watch the `RabbitMQOpsRequest` CR:

```bash
$ kubectl get rabbitmqopsrequest -n demo
NAME                         TYPE              STATUS       AGE
mops-volume-exp-standalone   VolumeExpansion   Successful   75s
```

We can see from the above output that the `RabbitMQOpsRequest` has succeeded.
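For scripting, the same information is available on the CR itself: the overall `.status.phase` field, plus the events the operator records against the request. A minimal sketch (the phase value matches the `Phase:` line in the describe output that follows):

```bash
# Print just the ops request phase, e.g. "Progressing" or "Successful".
kubectl get rabbitmqopsrequest -n demo mops-volume-exp-standalone \
  -o jsonpath='{.status.phase}{"\n"}'

# List only the events that belong to this ops request, oldest first.
kubectl get events -n demo \
  --field-selector involvedObject.name=mops-volume-exp-standalone \
  --sort-by=.metadata.creationTimestamp
```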
If we describe the `RabbitMQOpsRequest`, we will get an overview of the steps that were followed to expand the volume of the database:

```bash
$ kubectl describe rabbitmqopsrequest -n demo mops-volume-exp-standalone
Name:         mops-volume-exp-standalone
Namespace:    demo
Labels:       <none>
Annotations:  <none>
API Version:  ops.kubedb.com/v1alpha1
Kind:         RabbitMQOpsRequest
Metadata:
  Creation Timestamp:  2020-08-25T17:48:33Z
  Finalizers:
    kubedb.com
  Generation:        1
  Resource Version:  72899
  Self Link:         /apis/ops.kubedb.com/v1alpha1/namespaces/demo/rabbitmqopsrequests/mops-volume-exp-standalone
  UID:               007fe35a-25f6-45e7-9e85-9add488b2622
Spec:
  Database Ref:
    Name:  mg-standalone
  Type:    VolumeExpansion
  Volume Expansion:
    Standalone:  2Gi
Status:
  Conditions:
    Last Transition Time:  2020-08-25T17:48:33Z
    Message:               RabbitMQ ops request is being processed
    Observed Generation:   1
    Reason:                Scaling
    Status:                True
    Type:                  Scaling
    Last Transition Time:  2020-08-25T17:50:03Z
    Message:               Successfully updated Storage
    Observed Generation:   1
    Reason:                VolumeExpansion
    Status:                True
    Type:                  VolumeExpansion
    Last Transition Time:  2020-08-25T17:50:03Z
    Message:               Successfully Resumed RabbitMQ: mg-standalone
    Observed Generation:   1
    Reason:                ResumeDatabase
    Status:                True
    Type:                  ResumeDatabase
    Last Transition Time:  2020-08-25T17:50:03Z
    Message:               Successfully completed the modification process
    Observed Generation:   1
    Reason:                Successful
    Status:                True
    Type:                  Successful
  Observed Generation:     1
  Phase:                   Successful
Events:
  Type    Reason           Age   From                         Message
  ----    ------           ----  ----                         -------
  Normal  VolumeExpansion  29s   KubeDB Ops-manager operator  Successfully Updated Storage
  Normal  ResumeDatabase   29s   KubeDB Ops-manager operator  Resuming RabbitMQ
  Normal  ResumeDatabase   29s   KubeDB Ops-manager operator  Successfully Resumed RabbitMQ
  Normal  Successful       29s   KubeDB Ops-manager operator  Successfully Scaled Database
```

Now, we are going to verify from the `Statefulset`, and the `Persistent Volume` whether the volume of the standalone database has expanded to meet the desired state. Let's check:

```bash
$ kubectl get sts -n demo mg-standalone -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
"2Gi"

$ kubectl get pv -n demo
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                          STORAGECLASS   REASON   AGE
pvc-d0b07657-a012-4384-862a-b4e437774287   2Gi        RWO            Delete           Bound    demo/datadir-mg-standalone-0   standard                4m29s
```

The above output verifies that we have successfully expanded the volume of the RabbitMQ standalone database.

## Cleaning Up

To clean up the Kubernetes resources created by this tutorial, run:

```bash
kubectl delete mg -n demo mg-standalone
kubectl delete rabbitmqopsrequest -n demo mops-volume-exp-standalone
```
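If nothing else in the `demo` namespace needs to survive, you can also delete the namespace wholesale. Note that this is destructive: it removes every object in it, including the PVCs and therefore the data. A minimal sketch:

```bash
# Remove the demo namespace and everything in it (PVCs and data included).
kubectl delete namespace demo
```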