## User Guide
-- [Quickstart Druid](/docs/guides/druid/quickstart/overview/index.md) with KubeDB Operator.
-
-[//]: # (- Druid Clustering supported by KubeDB)
-
-[//]: # ( - [Topology Clustering](/docs/guides/druid/clustering/topology-cluster/index.md))
-
-[//]: # (- Use [kubedb cli](/docs/guides/druid/cli/cli.md) to manage databases like kubectl for Kubernetes.)
-
+- [Quickstart Druid](/docs/guides/druid/quickstart/guide/index.md) with KubeDB Operator.
+- [Druid Clustering](/docs/guides/druid/clustering/overview/index.md) with KubeDB Operator.
+- [Backup & Restore](/docs/guides/druid/backup/overview/index.md) Druid databases using KubeStash.
+- Start [Druid with Custom Config](/docs/guides/druid/configuration/_index.md).
+- Monitor your Druid database with KubeDB using [out-of-the-box Prometheus operator](/docs/guides/druid/monitoring/using-prometheus-operator.md).
+- Monitor your Druid database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/guides/druid/monitoring/using-builtin-prometheus.md).
- Detail concepts of [Druid object](/docs/guides/druid/concepts/druid.md).
-
-[//]: # (- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).)
\ No newline at end of file
+- Detail concepts of [DruidVersion object](/docs/guides/druid/concepts/druidversion.md).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).
\ No newline at end of file
diff --git a/docs/guides/druid/autoscaler/_index.md b/docs/guides/druid/autoscaler/_index.md
new file mode 100644
index 0000000000..a39f2bfba3
--- /dev/null
+++ b/docs/guides/druid/autoscaler/_index.md
@@ -0,0 +1,10 @@
+---
+title: Autoscaling
+menu:
+ docs_{{ .version }}:
+ identifier: guides-druid-autoscaler
+ name: Autoscaling
+ parent: guides-druid
+ weight: 100
+menu_name: docs_{{ .version }}
+---
\ No newline at end of file
diff --git a/docs/guides/druid/autoscaler/compute/_index.md b/docs/guides/druid/autoscaler/compute/_index.md
new file mode 100644
index 0000000000..c2c1eea280
--- /dev/null
+++ b/docs/guides/druid/autoscaler/compute/_index.md
@@ -0,0 +1,10 @@
+---
+title: Compute Autoscaling
+menu:
+ docs_{{ .version }}:
+ identifier: guides-druid-autoscaler-compute
+ name: Compute Autoscaling
+ parent: guides-druid-autoscaler
+ weight: 46
+menu_name: docs_{{ .version }}
+---
diff --git a/docs/guides/druid/autoscaler/compute/guide.md b/docs/guides/druid/autoscaler/compute/guide.md
new file mode 100644
index 0000000000..b0810f67b2
--- /dev/null
+++ b/docs/guides/druid/autoscaler/compute/guide.md
@@ -0,0 +1,864 @@
+---
+title: Druid Topology Autoscaling
+menu:
+ docs_{{ .version }}:
+ identifier: guides-druid-autoscaler-compute-guide
+ name: Druid Compute Autoscaling
+ parent: guides-druid-autoscaler-compute
+ weight: 20
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Autoscaling the Compute Resource of a Druid Topology Cluster
+
+This guide will show you how to use `KubeDB` to autoscale the compute resources, i.e. CPU and memory, of a Druid topology cluster.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster.
+
+- Install `KubeDB` Provisioner, Ops-manager and Autoscaler operator in your cluster following the steps [here](/docs/setup/README.md).
+
+- Install `Metrics Server` from [here](https://github.com/kubernetes-sigs/metrics-server#installation)
+
+- You should be familiar with the following `KubeDB` concepts:
+ - [Druid](/docs/guides/druid/concepts/druid.md)
+ - [DruidAutoscaler](/docs/guides/druid/concepts/druidautoscaler.md)
+ - [DruidOpsRequest](/docs/guides/druid/concepts/druidopsrequest.md)
+ - [Compute Resource Autoscaling Overview](/docs/guides/druid/autoscaler/compute/overview.md)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+> **Note:** YAML files used in this tutorial are stored in [docs/examples/druid](/docs/examples/druid) directory of [kubedb/docs](https://github.com/kubedb/docs) repository.
+
+## Autoscaling of Topology Cluster
+
+Here, we are going to deploy a `Druid` topology cluster using a version supported by the `KubeDB` operator. Then we are going to apply `DruidAutoscaler` to set up autoscaling.
+
+### Create External Dependency (Deep Storage)
+
+Before proceeding further, we need to prepare deep storage, which is one of the external dependencies of Druid and is used for storing the segments. It is a storage mechanism that Apache Druid does not provide. **Amazon S3**, **Google Cloud Storage**, **Azure Blob Storage**, **S3-compatible storage** (like **Minio**), or **HDFS** are generally convenient options for deep storage.
+
+In this tutorial, we will run a `minio-server` as deep storage in our local `kind` cluster using `minio-operator` and create a bucket named `druid` in it, which the deployed druid database will use.
+
+```bash
+$ helm repo add minio https://operator.min.io/
+$ helm repo update minio
+$ helm upgrade --install --namespace "minio-operator" --create-namespace "minio-operator" minio/operator --set operator.replicaCount=1
+
+$ helm upgrade --install --namespace "demo" --create-namespace druid-minio minio/tenant \
+--set tenant.pools[0].servers=1 \
+--set tenant.pools[0].volumesPerServer=1 \
+--set tenant.pools[0].size=1Gi \
+--set tenant.certificate.requestAutoCert=false \
+--set tenant.buckets[0].name="druid" \
+--set tenant.pools[0].name="default"
+
+```
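+
+Before moving on, you can wait for the MinIO tenant pods in the `demo` namespace to reach the `Running` state:
+
+```bash
+$ kubectl get pods -n demo -w
+```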
+
+Now we need to create a `Secret` named `deep-storage-config`. It contains the connection information that the Druid database will use to connect to the deep storage.
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+ name: deep-storage-config
+ namespace: demo
+stringData:
+ druid.storage.type: "s3"
+ druid.storage.bucket: "druid"
+ druid.storage.baseKey: "druid/segments"
+ druid.s3.accessKey: "minio"
+ druid.s3.secretKey: "minio123"
+ druid.s3.protocol: "http"
+ druid.s3.enablePathStyleAccess: "true"
+ druid.s3.endpoint.signingRegion: "us-east-1"
+ druid.s3.endpoint.url: "http://myminio-hl.demo.svc.cluster.local:9000/"
+```
+
+Let’s create the `deep-storage-config` Secret shown above:
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/druid/autoscaler/compute/yamls/deep-storage-config.yaml
+secret/deep-storage-config created
+```
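+
+Optionally, verify that the MinIO headless service referenced in `druid.s3.endpoint.url` is present; the service name used below is taken from that URL and depends on your tenant name:
+
+```bash
+$ kubectl get svc -n demo myminio-hl
+```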
+
+Now, we are going to deploy a `Druid` topology cluster with version `28.0.1`.
+
+### Deploy Druid Cluster
+
+In this section, we are going to deploy a Druid Topology cluster with version `28.0.1`. Then, in the next section we will set up autoscaling for this database using `DruidAutoscaler` CRD. Below is the YAML of the `Druid` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Druid
+metadata:
+ name: druid-cluster
+ namespace: demo
+spec:
+ version: 28.0.1
+ deepStorage:
+ type: s3
+ configSecret:
+ name: deep-storage-config
+  topology:
+    historicals:
+      replicas: 1
+      storage:
+        accessModes:
+          - ReadWriteOnce
+        resources:
+          requests:
+            storage: 1Gi
+    middleManagers:
+      replicas: 1
+      storage:
+        accessModes:
+          - ReadWriteOnce
+        resources:
+          requests:
+            storage: 1Gi
+    routers:
+      replicas: 1
+  deletionPolicy: Delete
+```
+
+Let's create the `Druid` CRO we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/druid/autoscaler/compute/yamls/druid-cluster.yaml
+druid.kubedb.com/druid-cluster created
+```
+
+Now, wait until `druid-cluster` has status `Ready`. i.e,
+
+```bash
+$ kubectl get dr -n demo -w
+NAME TYPE VERSION STATUS AGE
+druid-cluster kubedb.com/v1alpha2 28.0.1 Provisioning 0s
+druid-cluster kubedb.com/v1alpha2 28.0.1 Provisioning 24s
+.
+.
+druid-cluster kubedb.com/v1alpha2 28.0.1 Ready 118s
+```
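+
+You can also list the pods that KubeDB has created for the different node types. KubeDB typically labels the resources it creates with `app.kubernetes.io/instance`, the same label that appears on the generated ops requests later in this guide:
+
+```bash
+$ kubectl get pods -n demo -l app.kubernetes.io/instance=druid-cluster
+```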
+
+## Druid Topology Autoscaler
+
+Let's check the Druid resources for coordinators and historicals,
+
+```bash
+$ kubectl get druid -n demo druid-cluster -o json | jq '.spec.topology.coordinators.podTemplate.spec.containers[].resources'
+{
+ "limits": {
+ "memory": "1Gi"
+ },
+ "requests": {
+ "cpu": "500m",
+ "memory": "1Gi"
+ }
+}
+
+$ kubectl get druid -n demo druid-cluster -o json | jq '.spec.topology.historicals.podTemplate.spec.containers[].resources'
+{
+ "limits": {
+ "memory": "1Gi"
+ },
+ "requests": {
+ "cpu": "500m",
+ "memory": "1Gi"
+ }
+}
+```
+
+Let's check the coordinators and historicals Pod containers resources,
+
+```bash
+$ kubectl get pod -n demo druid-cluster-coordinators-0 -o json | jq '.spec.containers[].resources'
+{
+ "limits": {
+ "memory": "1Gi"
+ },
+ "requests": {
+ "cpu": "500m",
+ "memory": "1Gi"
+ }
+}
+
+$ kubectl get pod -n demo druid-cluster-historicals-0 -o json | jq '.spec.containers[].resources'
+{
+ "limits": {
+ "memory": "1Gi"
+ },
+ "requests": {
+ "cpu": "500m",
+ "memory": "1Gi"
+ }
+}
+```
+
+You can see from the above outputs that the resources for coordinators and historicals are the same as the ones we assigned while deploying the Druid cluster.
+
+We are now ready to apply the `DruidAutoscaler` CRO to set up autoscaling for the coordinators and historicals nodes.
+
+### Compute Resource Autoscaling
+
+Here, we are going to set up compute resource autoscaling using a DruidAutoscaler Object.
+
+#### Create DruidAutoscaler Object
+
+In order to set up compute resource autoscaling for this topology cluster, we have to create a `DruidAutoscaler` CRO with our desired configuration. Below is the YAML of the `DruidAutoscaler` object that we are going to create,
+
+```yaml
+apiVersion: autoscaling.kubedb.com/v1alpha1
+kind: DruidAutoscaler
+metadata:
+ name: druid-autoscaler
+ namespace: demo
+spec:
+ databaseRef:
+    name: druid-cluster
+ compute:
+ coordinators:
+ trigger: "On"
+ podLifeTimeThreshold: 1m
+ minAllowed:
+ cpu: 600m
+ memory: 2Gi
+ maxAllowed:
+ cpu: 1000m
+ memory: 5Gi
+ resourceDiffPercentage: 20
+ controlledResources: ["cpu", "memory"]
+ historicals:
+ trigger: "On"
+ podLifeTimeThreshold: 1m
+ minAllowed:
+ cpu: 600m
+ memory: 2Gi
+ maxAllowed:
+ cpu: 1000m
+ memory: 5Gi
+ resourceDiffPercentage: 20
+ controlledResources: [ "cpu", "memory"]
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing compute resource scaling operation on `druid-cluster` cluster.
+- `spec.compute.coordinators.trigger` specifies that compute autoscaling is enabled for this node.
+- `spec.compute.coordinators.podLifeTimeThreshold` specifies the minimum lifetime of at least one pod before a vertical scaling is initiated.
+- `spec.compute.coordinators.resourceDiffPercentage` specifies the minimum resource difference in percentage. The default is 10%. If the difference between the current and recommended resources is less than `resourceDiffPercentage`, the Autoscaler operator will skip the update.
+- `spec.compute.coordinators.minAllowed` specifies the minimum allowed resources for the cluster.
+- `spec.compute.coordinators.maxAllowed` specifies the maximum allowed resources for the cluster.
+- `spec.compute.coordinators.controlledResources` specifies the resources that are controlled by the autoscaler.
+- `spec.compute.coordinators.containerControlledValues` specifies which resource values should be controlled. The default is "RequestsAndLimits".
+- `spec.compute.historicals` can be configured the same way shown above.
+- `spec.opsRequestOptions` contains the options to pass to the created OpsRequest. It has 2 fields.
+ - `timeout` specifies the timeout for the OpsRequest.
+ - `apply` specifies when the OpsRequest should be applied. The default is "IfReady".
+
+> **Note:** You can also configure autoscaling for all the other node types in the same way. You can apply a separate autoscaler YAML for each node type, or combine them in a single YAML as shown above.
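+
+For reference, here is a minimal sketch of how the optional `spec.opsRequestOptions` fields described above could be set on the same autoscaler. The `timeout` value below is only an illustrative assumption; `apply: IfReady` matches the default mentioned above.
+
+```yaml
+spec:
+  opsRequestOptions:
+    # illustrative values; both fields are optional
+    timeout: 5m
+    apply: IfReady
+```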
+
+Let's create the `DruidAutoscaler` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/druid/autoscaler/compute/yamls/druid-autoscaler.yaml
+druidautoscaler.autoscaling.kubedb.com/druid-autoscaler created
+```
+
+#### Verify Autoscaling is set up successfully
+
+Let's check that the `druidautoscaler` resource is created successfully,
+
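+A quick way is to list it:
+
+```bash
+$ kubectl get druidautoscaler -n demo
+```
+
+For the full details, describe the resource:
+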
+```bash
+$ kubectl describe druidautoscaler druid-autoscaler -n demo
+Name: druid-autoscaler
+Namespace: demo
+Labels:
+Annotations:
+API Version: autoscaling.kubedb.com/v1alpha1
+Kind: DruidAutoscaler
+Metadata:
+ Creation Timestamp: 2024-10-24T10:04:22Z
+ Generation: 1
+ Managed Fields:
+ API Version: autoscaling.kubedb.com/v1alpha1
+ Fields Type: FieldsV1
+ fieldsV1:
+ f:metadata:
+ f:annotations:
+ .:
+ f:kubectl.kubernetes.io/last-applied-configuration:
+ f:spec:
+ .:
+ f:compute:
+ .:
+ f:coordinators:
+ .:
+ f:controlledResources:
+ f:maxAllowed:
+ .:
+ f:cpu:
+ f:memory:
+ f:minAllowed:
+ .:
+ f:cpu:
+ f:memory:
+ f:podLifeTimeThreshold:
+ f:resourceDiffPercentage:
+ f:trigger:
+ f:historicals:
+ .:
+ f:controlledResources:
+ f:maxAllowed:
+ .:
+ f:cpu:
+ f:memory:
+ f:minAllowed:
+ .:
+ f:cpu:
+ f:memory:
+ f:podLifeTimeThreshold:
+ f:resourceDiffPercentage:
+ f:trigger:
+ f:databaseRef:
+ Manager: kubectl-client-side-apply
+ Operation: Update
+ Time: 2024-10-24T10:04:22Z
+ API Version: autoscaling.kubedb.com/v1alpha1
+ Fields Type: FieldsV1
+ fieldsV1:
+ f:metadata:
+ f:ownerReferences:
+ .:
+ k:{"uid":"c2a5c29d-3589-49d8-bc18-585b9c05bf8d"}:
+ Manager: kubedb-autoscaler
+ Operation: Update
+ Time: 2024-10-24T10:04:22Z
+ API Version: autoscaling.kubedb.com/v1alpha1
+ Fields Type: FieldsV1
+ fieldsV1:
+ f:status:
+ .:
+ f:checkpoints:
+ f:conditions:
+ f:vpas:
+ Manager: kubedb-autoscaler
+ Operation: Update
+ Subresource: status
+ Time: 2024-10-24T10:16:20Z
+ Owner References:
+ API Version: kubedb.com/v1alpha2
+ Block Owner Deletion: true
+ Controller: true
+ Kind: Druid
+ Name: druid-cluster
+ UID: c2a5c29d-3589-49d8-bc18-585b9c05bf8d
+ Resource Version: 274969
+ UID: 069fbdd7-87ad-4fd7-acc7-9753fa188312
+Spec:
+ Compute:
+ Coordinators:
+ Controlled Resources:
+ cpu
+ memory
+ Max Allowed:
+ Cpu: 1000m
+ Memory: 5Gi
+ Min Allowed:
+ Cpu: 600m
+ Memory: 2Gi
+ Pod Life Time Threshold: 1m
+ Resource Diff Percentage: 20
+ Trigger: On
+ Historicals:
+ Controlled Resources:
+ cpu
+ memory
+ Max Allowed:
+ Cpu: 1000m
+ Memory: 5Gi
+ Min Allowed:
+ Cpu: 600m
+ Memory: 2Gi
+ Pod Life Time Threshold: 1m
+ Resource Diff Percentage: 20
+ Trigger: On
+ Database Ref:
+ Name: druid-cluster
+Status:
+ Checkpoints:
+ Cpu Histogram:
+ Bucket Weights:
+ Index: 0
+ Weight: 10000
+ Index: 5
+ Weight: 490
+ Reference Timestamp: 2024-10-24T10:05:00Z
+ Total Weight: 2.871430450948392
+ First Sample Start: 2024-10-24T10:05:07Z
+ Last Sample Start: 2024-10-24T10:16:03Z
+ Last Update Time: 2024-10-24T10:16:20Z
+ Memory Histogram:
+ Bucket Weights:
+ Index: 25
+ Weight: 3648
+ Index: 29
+ Weight: 10000
+ Reference Timestamp: 2024-10-24T10:10:00Z
+ Total Weight: 3.3099198846728424
+ Ref:
+ Container Name: druid
+ Vpa Object Name: druid-cluster-historicals
+ Total Samples Count: 12
+ Version: v3
+ Cpu Histogram:
+ Bucket Weights:
+ Index: 0
+ Weight: 3040
+ Index: 1
+ Weight: 10000
+ Index: 2
+ Weight: 3278
+ Index: 14
+ Weight: 1299
+ Reference Timestamp: 2024-10-24T10:10:00Z
+ Total Weight: 1.0092715955023177
+ First Sample Start: 2024-10-24T10:04:53Z
+ Last Sample Start: 2024-10-24T10:14:03Z
+ Last Update Time: 2024-10-24T10:14:20Z
+ Memory Histogram:
+ Bucket Weights:
+ Index: 24
+ Weight: 10000
+ Index: 27
+ Weight: 8706
+ Reference Timestamp: 2024-10-24T10:10:00Z
+ Total Weight: 3.204567438391289
+ Ref:
+ Container Name: druid
+ Vpa Object Name: druid-cluster-coordinators
+ Total Samples Count: 10
+ Version: v3
+ Conditions:
+ Last Transition Time: 2024-10-24T10:07:19Z
+ Message: Successfully created druidOpsRequest demo/drops-druid-cluster-coordinators-g02xtu
+ Observed Generation: 1
+ Reason: CreateOpsRequest
+ Status: True
+ Type: CreateOpsRequest
+ Vpas:
+ Conditions:
+ Last Transition Time: 2024-10-24T10:05:19Z
+ Status: True
+ Type: RecommendationProvided
+ Recommendation:
+ Container Recommendations:
+ Container Name: druid
+ Lower Bound:
+ Cpu: 600m
+ Memory: 2Gi
+ Target:
+ Cpu: 600m
+ Memory: 2Gi
+ Uncapped Target:
+ Cpu: 100m
+ Memory: 764046746
+ Upper Bound:
+ Cpu: 1
+ Memory: 5Gi
+ Vpa Name: druid-cluster-historicals
+ Conditions:
+ Last Transition Time: 2024-10-24T10:06:19Z
+ Status: True
+ Type: RecommendationProvided
+ Recommendation:
+ Container Recommendations:
+ Container Name: druid
+ Lower Bound:
+ Cpu: 600m
+ Memory: 2Gi
+ Target:
+ Cpu: 600m
+ Memory: 2Gi
+ Uncapped Target:
+ Cpu: 100m
+ Memory: 671629701
+ Upper Bound:
+ Cpu: 1
+ Memory: 5Gi
+ Vpa Name: druid-cluster-coordinators
+Events:
+```
+So, the `druidautoscaler` resource is created successfully.
+
+You can see in the `Status.Vpas.Recommendation` section that a recommendation has been generated for our database. The autoscaler operator continuously watches the generated recommendations and creates a `druidopsrequest` based on them if the database pod resources need to be scaled up or down.
+
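+If you only want to see the generated recommendation rather than the full describe output, you can query the autoscaler status directly. The JSON field names below are assumed from the `Status.Vpas` section shown above:
+
+```bash
+$ kubectl get druidautoscaler -n demo druid-autoscaler -o json | jq '.status.vpas[].recommendation'
+```
+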
+Let's watch the `druidopsrequest` in the demo namespace to see if any `druidopsrequest` object is created. After some time you'll see that a `druidopsrequest` will be created based on the recommendation.
+
+```bash
+$ watch kubectl get druidopsrequest -n demo
+Every 2.0s: kubectl get druidopsrequest -n demo
+NAME TYPE STATUS AGE
+drops-druid-cluster-coordinators-g02xtu VerticalScaling Progressing 8m
+drops-druid-cluster-historicals-g3oqje VerticalScaling Progressing 8m
+
+```
+Let's wait for the ops request to become successful.
+
+```bash
+$ kubectl get druidopsrequest -n demo
+NAME TYPE STATUS AGE
+drops-druid-cluster-coordinators-g02xtu VerticalScaling Successful 12m
+drops-druid-cluster-historicals-g3oqje VerticalScaling Successful 13m
+```
+
+We can see from the above output that the `DruidOpsRequest` has succeeded. If we describe the `DruidOpsRequest` we will get an overview of the steps that were followed to scale the cluster.
+
+```bash
+$ kubectl describe druidopsrequests -n demo drops-druid-cluster-coordinators-g02xtu
+Name: drops-druid-cluster-coordinators-g02xtu
+Namespace: demo
+Labels: app.kubernetes.io/component=database
+ app.kubernetes.io/instance=druid-cluster
+ app.kubernetes.io/managed-by=kubedb.com
+ app.kubernetes.io/name=druids.kubedb.com
+Annotations:
+API Version: ops.kubedb.com/v1alpha1
+Kind: DruidOpsRequest
+Metadata:
+ Creation Timestamp: 2024-10-24T10:07:19Z
+ Generation: 1
+ Managed Fields:
+ API Version: ops.kubedb.com/v1alpha1
+ Fields Type: FieldsV1
+ fieldsV1:
+ f:metadata:
+ f:labels:
+ .:
+ f:app.kubernetes.io/component:
+ f:app.kubernetes.io/instance:
+ f:app.kubernetes.io/managed-by:
+ f:app.kubernetes.io/name:
+ f:ownerReferences:
+ .:
+ k:{"uid":"069fbdd7-87ad-4fd7-acc7-9753fa188312"}:
+ f:spec:
+ .:
+ f:apply:
+ f:databaseRef:
+ f:type:
+ f:verticalScaling:
+ .:
+ f:coordinators:
+ .:
+ f:resources:
+ .:
+ f:limits:
+ .:
+ f:memory:
+ f:requests:
+ .:
+ f:cpu:
+ f:memory:
+ Manager: kubedb-autoscaler
+ Operation: Update
+ Time: 2024-10-24T10:07:19Z
+ API Version: ops.kubedb.com/v1alpha1
+ Fields Type: FieldsV1
+ fieldsV1:
+ f:status:
+ .:
+ f:conditions:
+ f:observedGeneration:
+ f:phase:
+ Manager: kubedb-ops-manager
+ Operation: Update
+ Subresource: status
+ Time: 2024-10-24T10:07:43Z
+ Owner References:
+ API Version: autoscaling.kubedb.com/v1alpha1
+ Block Owner Deletion: true
+ Controller: true
+ Kind: DruidAutoscaler
+ Name: druid-autoscaler
+ UID: 069fbdd7-87ad-4fd7-acc7-9753fa188312
+ Resource Version: 273990
+ UID: d14d964b-f4ae-4570-a296-38e91c802473
+Spec:
+ Apply: IfReady
+ Database Ref:
+ Name: druid-cluster
+ Type: VerticalScaling
+ Vertical Scaling:
+ Coordinators:
+ Resources:
+ Limits:
+ Memory: 2Gi
+ Requests:
+ Cpu: 600m
+ Memory: 2Gi
+Status:
+ Conditions:
+ Last Transition Time: 2024-10-24T10:07:19Z
+ Message: Druid ops-request has started to vertically scale the Druid nodes
+ Observed Generation: 1
+ Reason: VerticalScaling
+ Status: True
+ Type: VerticalScaling
+ Last Transition Time: 2024-10-24T10:07:28Z
+ Message: Successfully updated PetSets Resources
+ Observed Generation: 1
+ Reason: UpdatePetSets
+ Status: True
+ Type: UpdatePetSets
+ Last Transition Time: 2024-10-24T10:07:43Z
+ Message: Successfully Restarted Pods With Resources
+ Observed Generation: 1
+ Reason: RestartPods
+ Status: True
+ Type: RestartPods
+ Last Transition Time: 2024-10-24T10:07:33Z
+ Message: get pod; ConditionStatus:True; PodName:druid-cluster-coordinators-0
+ Observed Generation: 1
+ Status: True
+ Type: GetPod--druid-cluster-coordinators-0
+ Last Transition Time: 2024-10-24T10:07:33Z
+ Message: evict pod; ConditionStatus:True; PodName:druid-cluster-coordinators-0
+ Observed Generation: 1
+ Status: True
+ Type: EvictPod--druid-cluster-coordinators-0
+ Last Transition Time: 2024-10-24T10:07:38Z
+ Message: check pod running; ConditionStatus:True; PodName:druid-cluster-coordinators-0
+ Observed Generation: 1
+ Status: True
+ Type: CheckPodRunning--druid-cluster-coordinators-0
+ Last Transition Time: 2024-10-24T10:07:43Z
+ Message: Successfully completed the vertical scaling for RabbitMQ
+ Observed Generation: 1
+ Reason: Successful
+ Status: True
+ Type: Successful
+ Observed Generation: 1
+ Phase: Successful
+Events:
+ Type Reason Age From Message
+ ---- ------ ---- ---- -------
+ Normal Starting 12m KubeDB Ops-manager Operator Start processing for DruidOpsRequest: demo/drops-druid-cluster-coordinators-g02xtu
+ Normal Starting 12m KubeDB Ops-manager Operator Pausing Druid databse: demo/druid-cluster
+ Normal Successful 12m KubeDB Ops-manager Operator Successfully paused Druid database: demo/druid-cluster for DruidOpsRequest: drops-druid-cluster-coordinators-g02xtu
+ Normal UpdatePetSets 12m KubeDB Ops-manager Operator Successfully updated PetSets Resources
+ Warning get pod; ConditionStatus:True; PodName:druid-cluster-coordinators-0 12m KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:druid-cluster-coordinators-0
+ Warning evict pod; ConditionStatus:True; PodName:druid-cluster-coordinators-0 12m KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:druid-cluster-coordinators-0
+ Warning check pod running; ConditionStatus:True; PodName:druid-cluster-coordinators-0 12m KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:druid-cluster-coordinators-0
+ Normal RestartPods 12m KubeDB Ops-manager Operator Successfully Restarted Pods With Resources
+ Normal Starting 12m KubeDB Ops-manager Operator Resuming Druid database: demo/druid-cluster
+ Normal Successful 12m KubeDB Ops-manager Operator Successfully resumed Druid database: demo/druid-cluster for DruidOpsRequest: drops-druid-cluster-coordinators-g02xtu
+```
+
+Let's describe the other `DruidOpsRequest` created for scaling of historicals.
+
+```bash
+$ kubectl describe druidopsrequests -n demo drops-druid-cluster-historicals-g3oqje
+Name: drops-druid-cluster-historicals-g3oqje
+Namespace: demo
+Labels: app.kubernetes.io/component=database
+ app.kubernetes.io/instance=druid-cluster
+ app.kubernetes.io/managed-by=kubedb.com
+ app.kubernetes.io/name=druids.kubedb.com
+Annotations:
+API Version: ops.kubedb.com/v1alpha1
+Kind: DruidOpsRequest
+Metadata:
+ Creation Timestamp: 2024-10-24T10:06:19Z
+ Generation: 1
+ Managed Fields:
+ API Version: ops.kubedb.com/v1alpha1
+ Fields Type: FieldsV1
+ fieldsV1:
+ f:metadata:
+ f:labels:
+ .:
+ f:app.kubernetes.io/component:
+ f:app.kubernetes.io/instance:
+ f:app.kubernetes.io/managed-by:
+ f:app.kubernetes.io/name:
+ f:ownerReferences:
+ .:
+ k:{"uid":"069fbdd7-87ad-4fd7-acc7-9753fa188312"}:
+ f:spec:
+ .:
+ f:apply:
+ f:databaseRef:
+ f:type:
+ f:verticalScaling:
+ .:
+ f:historicals:
+ .:
+ f:resources:
+ .:
+ f:limits:
+ .:
+ f:memory:
+ f:requests:
+ .:
+ f:cpu:
+ f:memory:
+ Manager: kubedb-autoscaler
+ Operation: Update
+ Time: 2024-10-24T10:06:19Z
+ API Version: ops.kubedb.com/v1alpha1
+ Fields Type: FieldsV1
+ fieldsV1:
+ f:status:
+ .:
+ f:conditions:
+ f:observedGeneration:
+ f:phase:
+ Manager: kubedb-ops-manager
+ Operation: Update
+ Subresource: status
+ Time: 2024-10-24T10:06:37Z
+ Owner References:
+ API Version: autoscaling.kubedb.com/v1alpha1
+ Block Owner Deletion: true
+ Controller: true
+ Kind: DruidAutoscaler
+ Name: druid-autoscaler
+ UID: 069fbdd7-87ad-4fd7-acc7-9753fa188312
+ Resource Version: 273770
+ UID: fc13624c-42d4-4b03-9448-80f451b1a888
+Spec:
+ Apply: IfReady
+ Database Ref:
+ Name: druid-cluster
+ Type: VerticalScaling
+ Vertical Scaling:
+ Historicals:
+ Resources:
+ Limits:
+ Memory: 2Gi
+ Requests:
+ Cpu: 600m
+ Memory: 2Gi
+Status:
+ Conditions:
+ Last Transition Time: 2024-10-24T10:06:19Z
+ Message: Druid ops-request has started to vertically scale the Druid nodes
+ Observed Generation: 1
+ Reason: VerticalScaling
+ Status: True
+ Type: VerticalScaling
+ Last Transition Time: 2024-10-24T10:06:22Z
+ Message: Successfully updated PetSets Resources
+ Observed Generation: 1
+ Reason: UpdatePetSets
+ Status: True
+ Type: UpdatePetSets
+ Last Transition Time: 2024-10-24T10:06:37Z
+ Message: Successfully Restarted Pods With Resources
+ Observed Generation: 1
+ Reason: RestartPods
+ Status: True
+ Type: RestartPods
+ Last Transition Time: 2024-10-24T10:06:27Z
+ Message: get pod; ConditionStatus:True; PodName:druid-cluster-historicals-0
+ Observed Generation: 1
+ Status: True
+ Type: GetPod--druid-cluster-historicals-0
+ Last Transition Time: 2024-10-24T10:06:27Z
+ Message: evict pod; ConditionStatus:True; PodName:druid-cluster-historicals-0
+ Observed Generation: 1
+ Status: True
+ Type: EvictPod--druid-cluster-historicals-0
+ Last Transition Time: 2024-10-24T10:06:32Z
+ Message: check pod running; ConditionStatus:True; PodName:druid-cluster-historicals-0
+ Observed Generation: 1
+ Status: True
+ Type: CheckPodRunning--druid-cluster-historicals-0
+ Last Transition Time: 2024-10-24T10:06:37Z
+ Message: Successfully completed the vertical scaling for RabbitMQ
+ Observed Generation: 1
+ Reason: Successful
+ Status: True
+ Type: Successful
+ Observed Generation: 1
+ Phase: Successful
+Events:
+ Type Reason Age From Message
+ ---- ------ ---- ---- -------
+ Normal Starting 16m KubeDB Ops-manager Operator Start processing for DruidOpsRequest: demo/drops-druid-cluster-historicals-g3oqje
+ Normal Starting 16m KubeDB Ops-manager Operator Pausing Druid databse: demo/druid-cluster
+ Normal Successful 16m KubeDB Ops-manager Operator Successfully paused Druid database: demo/druid-cluster for DruidOpsRequest: drops-druid-cluster-historicals-g3oqje
+ Normal UpdatePetSets 16m KubeDB Ops-manager Operator Successfully updated PetSets Resources
+ Warning get pod; ConditionStatus:True; PodName:druid-cluster-historicals-0 16m KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:druid-cluster-historicals-0
+ Warning evict pod; ConditionStatus:True; PodName:druid-cluster-historicals-0 16m KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:druid-cluster-historicals-0
+ Warning check pod running; ConditionStatus:True; PodName:druid-cluster-historicals-0 16m KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:druid-cluster-historicals-0
+ Normal RestartPods 16m KubeDB Ops-manager Operator Successfully Restarted Pods With Resources
+ Normal Starting 16m KubeDB Ops-manager Operator Resuming Druid database: demo/druid-cluster
+ Normal Successful 16m KubeDB Ops-manager Operator Successfully resumed Druid database: demo/druid-cluster for DruidOpsRequest: drops-druid-cluster-historicals-g3oqje
+
+```
+
+Now, we are going to verify from the Pods and the Druid YAML whether the resources of the coordinators and historicals nodes have been updated to meet the desired state. Let's check,
+
+```bash
+$ kubectl get pod -n demo druid-cluster-coordinators-0 -o json | jq '.spec.containers[].resources'
+{
+ "limits": {
+ "memory": "1536Mi"
+ },
+ "requests": {
+ "cpu": "600m",
+ "memory": "1536Mi"
+ }
+}
+
+$ kubectl get pod -n demo druid-cluster-historicals-0 -o json | jq '.spec.containers[].resources'
+{
+ "limits": {
+ "memory": "2Gi"
+ },
+ "requests": {
+ "cpu": "600m",
+ "memory": "2Gi"
+ }
+}
+
+$ kubectl get druid -n demo druid-cluster -o json | jq '.spec.topology.coordinators.podTemplate.spec.containers[].resources'
+{
+ "limits": {
+ "memory": "1536Mi"
+ },
+ "requests": {
+ "cpu": "600m",
+ "memory": "1536Mi"
+ }
+}
+
+$ kubectl get druid -n demo druid-cluster -o json | jq '.spec.topology.historicals.podTemplate.spec.containers[].resources'
+{
+ "limits": {
+ "memory": "2Gi"
+ },
+ "requests": {
+ "cpu": "600m",
+ "memory": "2Gi"
+ }
+}
+```
+
+The above output verifies that we have successfully autoscaled the resources of the Druid topology cluster for the coordinators and historicals nodes.
+
+## Cleaning Up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete druidopsrequest -n demo drops-druid-cluster-coordinators-g02xtu drops-druid-cluster-historicals-g3oqje
+kubectl delete druidautoscaler -n demo druid-autoscaler
+kubectl delete dr -n demo druid-cluster
+kubectl delete ns demo
+```
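+
+If the MinIO deployment was created only for this tutorial, you can also uninstall its Helm releases (release and namespace names as used in the installation step above). Run the tenant uninstall before deleting the `demo` namespace, or skip it if the namespace is already gone:
+
+```bash
+helm uninstall druid-minio -n demo
+helm uninstall minio-operator -n minio-operator
+```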
+## Next Steps
+
+- Detail concepts of [Druid object](/docs/guides/druid/concepts/druid.md).
+- Different Druid topology clustering modes [here](/docs/guides/druid/clustering/_index.md).
+- Monitor your Druid database with KubeDB using [out-of-the-box Prometheus operator](/docs/guides/druid/monitoring/using-prometheus-operator.md).
+
+[//]: # (- Monitor your Druid database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/guides/druid/monitoring/using-builtin-prometheus.md).)
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).
diff --git a/docs/guides/druid/autoscaler/compute/images/compute-autoscaling.png b/docs/guides/druid/autoscaler/compute/images/compute-autoscaling.png
new file mode 100644
index 0000000000..9406e2b2e9
Binary files /dev/null and b/docs/guides/druid/autoscaler/compute/images/compute-autoscaling.png differ
diff --git a/docs/guides/druid/autoscaler/compute/overview.md b/docs/guides/druid/autoscaler/compute/overview.md
new file mode 100644
index 0000000000..457bbfe62c
--- /dev/null
+++ b/docs/guides/druid/autoscaler/compute/overview.md
@@ -0,0 +1,55 @@
+---
+title: Druid Compute Autoscaling Overview
+menu:
+ docs_{{ .version }}:
+ identifier: guides-druid-autoscaler-compute-overview
+ name: Overview
+ parent: guides-druid-autoscaler-compute
+ weight: 10
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Druid Compute Resource Autoscaling
+
+This guide will give an overview of how the KubeDB Autoscaler operator autoscales the database compute resources, i.e. CPU and memory, using the `druidautoscaler` CRD.
+
+## Before You Begin
+
+- You should be familiar with the following `KubeDB` concepts:
+ - [Druid](/docs/guides/druid/concepts/druid.md)
+ - [DruidAutoscaler](/docs/guides/druid/concepts/druidautoscaler.md)
+ - [DruidOpsRequest](/docs/guides/druid/concepts/druidopsrequest.md)
+
+## How Compute Autoscaling Works
+
+The following diagram shows how KubeDB Autoscaler operator autoscales the resources of `Druid` database components. Open the image in a new tab to see the enlarged version.
+
+<figure align="center">
+  <img alt="Compute Autoscaling Flow" src="/docs/guides/druid/autoscaler/compute/images/compute-autoscaling.png">
+<figcaption align="center">Fig: Compute Autoscaling process of Druid</figcaption>
+</figure>
+
+The Auto Scaling process consists of the following steps:
+
+1. At first, a user creates a `Druid` Custom Resource Object (CRO).
+
+2. `KubeDB` Provisioner operator watches the `Druid` CRO.
+
+3. When the operator finds a `Druid` CRO, it creates the required number of `PetSets` and other necessary resources like secrets, services, etc.
+
+4. Then, in order to set up autoscaling of the various components (i.e. Coordinators, Overlords, Historicals, MiddleManagers, Brokers, Routers) of the `Druid` cluster, the user creates a `DruidAutoscaler` CRO with the desired configuration.
+
+5. `KubeDB` Autoscaler operator watches the `DruidAutoscaler` CRO.
+
+6. `KubeDB` Autoscaler operator generates recommendations using a modified version of the Kubernetes [official recommender](https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler/pkg/recommender) for the different components of the database, as specified in the `DruidAutoscaler` CRO.
+
+7. If the generated recommendation doesn't match the current resources of the database, then `KubeDB` Autoscaler operator creates a `DruidOpsRequest` CRO to scale the database to match the recommendation generated.
+
+8. `KubeDB` Ops-manager operator watches the `DruidOpsRequest` CRO.
+
+9. Then the `KubeDB` Ops-manager operator will scale the database component vertically as specified on the `DruidOpsRequest` CRO.
+
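+As a concrete illustration of step 4, a minimal `DruidAutoscaler` for a single component might look like the following sketch; the values are placeholders, and a complete walkthrough is given in the compute autoscaling guide:
+
+```yaml
+apiVersion: autoscaling.kubedb.com/v1alpha1
+kind: DruidAutoscaler
+metadata:
+  name: druid-autoscaler
+  namespace: demo
+spec:
+  databaseRef:
+    name: druid-cluster
+  compute:
+    coordinators:
+      trigger: "On"
+      podLifeTimeThreshold: 5m
+      minAllowed:
+        cpu: 600m
+        memory: 2Gi
+      maxAllowed:
+        cpu: 1
+        memory: 5Gi
+      controlledResources: ["cpu", "memory"]
+```
+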
+In the next docs, we are going to show a step-by-step guide on autoscaling various Druid database components using the `DruidAutoscaler` CRD.
diff --git a/docs/guides/druid/autoscaler/compute/yamls/deep-storage-config.yaml b/docs/guides/druid/autoscaler/compute/yamls/deep-storage-config.yaml
new file mode 100644
index 0000000000..3612595828
--- /dev/null
+++ b/docs/guides/druid/autoscaler/compute/yamls/deep-storage-config.yaml
@@ -0,0 +1,16 @@
+apiVersion: v1
+kind: Secret
+metadata:
+ name: deep-storage-config
+ namespace: demo
+stringData:
+ druid.storage.type: "s3"
+ druid.storage.bucket: "druid"
+ druid.storage.baseKey: "druid/segments"
+ druid.s3.accessKey: "minio"
+ druid.s3.secretKey: "minio123"
+ druid.s3.protocol: "http"
+ druid.s3.enablePathStyleAccess: "true"
+ druid.s3.endpoint.signingRegion: "us-east-1"
+ druid.s3.endpoint.url: "http://myminio-hl.demo.svc.cluster.local:9000/"
+
diff --git a/docs/guides/druid/autoscaler/compute/yamls/druid-autoscaler.yaml b/docs/guides/druid/autoscaler/compute/yamls/druid-autoscaler.yaml
new file mode 100644
index 0000000000..ef5d96ec78
--- /dev/null
+++ b/docs/guides/druid/autoscaler/compute/yamls/druid-autoscaler.yaml
@@ -0,0 +1,31 @@
+apiVersion: autoscaling.kubedb.com/v1alpha1
+kind: DruidAutoscaler
+metadata:
+  name: druid-autoscaler
+ namespace: demo
+spec:
+ databaseRef:
+ name: druid-cluster
+ compute:
+ coordinators:
+ trigger: "On"
+ podLifeTimeThreshold: 1m
+ minAllowed:
+ cpu: 600m
+ memory: 2Gi
+ maxAllowed:
+ cpu: 1000m
+ memory: 5Gi
+ resourceDiffPercentage: 20
+ controlledResources: ["cpu", "memory"]
+ historicals:
+ trigger: "On"
+ podLifeTimeThreshold: 1m
+ minAllowed:
+ cpu: 600m
+ memory: 2Gi
+ maxAllowed:
+ cpu: 1000m
+ memory: 5Gi
+ resourceDiffPercentage: 20
+ controlledResources: [ "cpu", "memory"]
diff --git a/docs/guides/druid/autoscaler/compute/yamls/druid-cluster.yaml b/docs/guides/druid/autoscaler/compute/yamls/druid-cluster.yaml
new file mode 100644
index 0000000000..ffac2b300b
--- /dev/null
+++ b/docs/guides/druid/autoscaler/compute/yamls/druid-cluster.yaml
@@ -0,0 +1,31 @@
+apiVersion: kubedb.com/v1alpha2
+kind: Druid
+metadata:
+ name: druid-cluster
+ namespace: demo
+spec:
+ version: 28.0.1
+ deepStorage:
+ type: s3
+ configSecret:
+ name: deep-storage-config
+ topology:
+ historicals:
+ replicas: 1
+ storage:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
+ middleManagers:
+ replicas: 1
+ storage:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
+ routers:
+ replicas: 1
+ deletionPolicy: Delete
diff --git a/docs/guides/druid/autoscaler/storage/_index.md b/docs/guides/druid/autoscaler/storage/_index.md
new file mode 100644
index 0000000000..e197ec8429
--- /dev/null
+++ b/docs/guides/druid/autoscaler/storage/_index.md
@@ -0,0 +1,10 @@
+---
+title: Storage Autoscaling
+menu:
+ docs_{{ .version }}:
+ identifier: guides-druid-autoscaler-storage
+ name: Storage Autoscaling
+ parent: guides-druid-autoscaler
+ weight: 46
+menu_name: docs_{{ .version }}
+---
diff --git a/docs/guides/druid/autoscaler/storage/guide.md b/docs/guides/druid/autoscaler/storage/guide.md
new file mode 100644
index 0000000000..02b3571625
--- /dev/null
+++ b/docs/guides/druid/autoscaler/storage/guide.md
@@ -0,0 +1,896 @@
+---
+title: Druid Topology Autoscaling
+menu:
+ docs_{{ .version }}:
+ identifier: guides-druid-autoscaler-storage-guide
+ name: Druid Storage Autoscaling
+ parent: guides-druid-autoscaler-storage
+ weight: 20
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Storage Autoscaling of a Druid Topology Cluster
+
+This guide will show you how to use `KubeDB` to autoscale the storage of a Druid Topology cluster.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster.
+
+- Install `KubeDB` Provisioner, Ops-manager and Autoscaler operator in your cluster following the steps [here](/docs/setup/README.md).
+
+- Install `Metrics Server` from [here](https://github.com/kubernetes-sigs/metrics-server#installation)
+
+- Install Prometheus from [here](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack)
+
+- You must have a `StorageClass` that supports volume expansion.
+
+- You should be familiar with the following `KubeDB` concepts:
+ - [Druid](/docs/guides/druid/concepts/druid.md)
+ - [DruidAutoscaler](/docs/guides/druid/concepts/druidautoscaler.md)
+ - [DruidOpsRequest](/docs/guides/druid/concepts/druidopsrequest.md)
+ - [Storage Autoscaling Overview](/docs/guides/druid/autoscaler/storage/overview.md)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+> **Note:** YAML files used in this tutorial are stored in [docs/examples/druid](/docs/examples/druid) directory of [kubedb/docs](https://github.com/kubedb/docs) repository.
+
+## Storage Autoscaling of Topology Cluster
+
+At first, verify that your cluster has a storage class that supports volume expansion. Let's check,
+
+```bash
+$ kubectl get storageclass
+NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
+local-path (default) rancher.io/local-path Delete WaitForFirstConsumer false 28h
+longhorn (default) driver.longhorn.io Delete Immediate true 28h
+longhorn-static driver.longhorn.io Delete Immediate true 28h
+```
+
+We can see from the output that the `longhorn` storage class has the `ALLOWVOLUMEEXPANSION` field set to true. So, this storage class supports volume expansion. We can use it.
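+
+You can also check this directly on a specific storage class using the standard `allowVolumeExpansion` field of the StorageClass API:
+
+```bash
+$ kubectl get storageclass longhorn -o jsonpath='{.allowVolumeExpansion}'
+true
+```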
+
+Now, we are going to deploy a `Druid` topology cluster using a version supported by the `KubeDB` operator. Then we are going to apply `DruidAutoscaler` to set up autoscaling.
+
+### Create External Dependency (Deep Storage)
+
+Before proceeding further, we need to prepare deep storage, which is one of the external dependencies of Druid and is used for storing the segments. It is a storage mechanism that Apache Druid does not provide. **Amazon S3**, **Google Cloud Storage**, **Azure Blob Storage**, **S3-compatible storage** (like **Minio**), or **HDFS** are generally convenient options for deep storage.
+
+In this tutorial, we will run a `minio-server` as deep storage in our local `kind` cluster using `minio-operator` and create a bucket named `druid` in it, which the deployed druid database will use.
+
+```bash
+$ helm repo add minio https://operator.min.io/
+$ helm repo update minio
+$ helm upgrade --install --namespace "minio-operator" --create-namespace "minio-operator" minio/operator --set operator.replicaCount=1
+
+$ helm upgrade --install --namespace "demo" --create-namespace druid-minio minio/tenant \
+--set tenant.pools[0].servers=1 \
+--set tenant.pools[0].volumesPerServer=1 \
+--set tenant.pools[0].size=1Gi \
+--set tenant.certificate.requestAutoCert=false \
+--set tenant.buckets[0].name="druid" \
+--set tenant.pools[0].name="default"
+
+```
+
+Now we need to create a `Secret` named `deep-storage-config`. It contains the connection information that the Druid database will use to connect to the deep storage.
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+ name: deep-storage-config
+ namespace: demo
+stringData:
+ druid.storage.type: "s3"
+ druid.storage.bucket: "druid"
+ druid.storage.baseKey: "druid/segments"
+ druid.s3.accessKey: "minio"
+ druid.s3.secretKey: "minio123"
+ druid.s3.protocol: "http"
+ druid.s3.enablePathStyleAccess: "true"
+ druid.s3.endpoint.signingRegion: "us-east-1"
+ druid.s3.endpoint.url: "http://myminio-hl.demo.svc.cluster.local:9000/"
+```
+
+Let’s create the `deep-storage-config` Secret shown above:
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/druid/autoscaler/storage/yamls/deep-storage-config.yaml
+secret/deep-storage-config created
+```
+
+### Deploy Druid Cluster
+
+In this section, we are going to deploy a Druid topology cluster with monitoring enabled and with version `28.0.1`. Then, in the next section we will set up autoscaling for this cluster using `DruidAutoscaler` CRD. Below is the YAML of the `Druid` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Druid
+metadata:
+ name: druid-cluster
+ namespace: demo
+spec:
+ version: 28.0.1
+ deepStorage:
+ type: s3
+ configSecret:
+ name: deep-storage-config
+ topology:
+ historicals:
+ replicas: 1
+ storage:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
+ storageType: Durable
+ middleManagers:
+ replicas: 1
+ storage:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
+ storageType: Durable
+ routers:
+ replicas: 1
+ deletionPolicy: Delete
+ monitor:
+ agent: prometheus.io/operator
+ prometheus:
+ serviceMonitor:
+ labels:
+ release: prometheus
+ interval: 10s
+```
+
+Let's create the `Druid` CRO we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/druid/autoscaler/storage/yamls/druid-cluster.yaml
+druid.kubedb.com/druid-cluster created
+```
+
+Now, wait until `druid-cluster` has status `Ready`. i.e,
+
+```bash
+$ kubectl get dr -n demo -w
+NAME TYPE VERSION STATUS AGE
+druid-cluster kubedb.com/v1alpha2 28.0.1 Provisioning 0s
+druid-cluster kubedb.com/v1alpha2 28.0.1 Provisioning 24s
+.
+.
+druid-cluster kubedb.com/v1alpha2 28.0.1 Ready 2m20s
+```
+
+Let's check volume size from petset, and from the persistent volume,
+
+```bash
+$ kubectl get petset -n demo druid-cluster-historicals -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
+"1Gi"
+$ kubectl get petset -n demo druid-cluster-middleManagers -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
+"1Gi"
+$ kubectl get pv -n demo
+NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS VOLUMEATTRIBUTESCLASS REASON AGE
+pvc-2c0ef2aa-0438-4d75-9cb2-c12a176bae6a 1Gi RWO Delete Bound demo/druid-cluster-base-task-dir-druid-cluster-middlemanagers-0 longhorn 95s
+pvc-5f4cea5f-e0c8-4339-b67c-9cb8b02ba49d 1Gi RWO Delete Bound demo/druid-cluster-segment-cache-druid-cluster-historicals-0 longhorn 96s
+```
+
+You can see that the petsets for both historicals and middleManagers have 1Gi storage, and the capacity of the corresponding persistent volumes is also 1Gi.
+
+We are now ready to apply the `DruidAutoscaler` CRO to set up storage autoscaling for this cluster (historicals and middleManagers).
+
+### Storage Autoscaling
+
+Here, we are going to set up storage autoscaling using a DruidAutoscaler Object.
+
+#### Create DruidAutoscaler Object
+
+In order to set up storage autoscaling for this topology cluster, we have to create a `DruidAutoscaler` CRO with our desired configuration. Below is the YAML of the `DruidAutoscaler` object that we are going to create,
+
+```yaml
+apiVersion: autoscaling.kubedb.com/v1alpha1
+kind: DruidAutoscaler
+metadata:
+ name: druid-storage-autoscaler
+ namespace: demo
+spec:
+ databaseRef:
+ name: druid-cluster
+ storage:
+ historicals:
+ expansionMode: "Offline"
+ trigger: "On"
+ usageThreshold: 60
+ scalingThreshold: 100
+ middleManagers:
+ expansionMode: "Offline"
+ trigger: "On"
+ usageThreshold: 60
+ scalingThreshold: 100
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing storage autoscaling on the `druid-cluster` cluster.
+- `spec.storage.historicals.trigger`/`spec.storage.middleManagers.trigger` specifies that storage autoscaling is enabled for the historicals and middleManagers of the topology cluster.
+- `spec.storage.historicals.usageThreshold`/`spec.storage.middleManagers.usageThreshold` specifies the storage usage threshold; if storage usage exceeds `60%`, storage autoscaling will be triggered.
+- `spec.storage.historicals.scalingThreshold`/`spec.storage.middleManagers.scalingThreshold` specifies the scaling threshold; storage will be scaled to `100%` of the current amount (see the worked example below).
+- There is another field, `spec.storage.historicals.expansionMode`/`spec.storage.middleManagers.expansionMode`, to set the opsRequest volumeExpansionMode, which supports two values: `Online` & `Offline`. The default value is `Online`.
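+
+To make the thresholds concrete: the historicals and middleManagers volumes provisioned above are 1Gi each, so once usage crosses 60%, the autoscaler requests an expansion of 100% of the current size, i.e. each volume is asked to grow to roughly double its current capacity (the exact byte value, such as the `2041405440` seen in the ops request later in this guide, is computed by the operator).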
+
+Let's create the `DruidAutoscaler` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/druid/autoscaler/storage/yamls/druid-storage-autoscaler.yaml
+druidautoscaler.autoscaling.kubedb.com/druid-storage-autoscaler created
+```
+
+#### Verify Storage Autoscaling is set up successfully
+
+Let's check that the `druidautoscaler` resource is created successfully,
+
+```bash
+$ kubectl get druidautoscaler -n demo
+NAME AGE
+druid-storage-autoscaler 34s
+
+$ kubectl describe druidautoscaler -n demo druid-storage-autoscaler
+Name: druid-storage-autoscaler
+Namespace: demo
+Labels:
+Annotations:
+API Version: autoscaling.kubedb.com/v1alpha1
+Kind: DruidAutoscaler
+Metadata:
+ Creation Timestamp: 2024-10-25T09:52:37Z
+ Generation: 1
+ Managed Fields:
+ API Version: autoscaling.kubedb.com/v1alpha1
+ Fields Type: FieldsV1
+ fieldsV1:
+ f:metadata:
+ f:annotations:
+ .:
+ f:kubectl.kubernetes.io/last-applied-configuration:
+ f:spec:
+ .:
+ f:databaseRef:
+ f:storage:
+ .:
+ f:historicals:
+ .:
+ f:expansionMode:
+ f:scalingThreshold:
+ f:trigger:
+ f:usageThreshold:
+ f:middleManagers:
+ .:
+ f:expansionMode:
+ f:scalingThreshold:
+ f:trigger:
+ f:usageThreshold:
+ Manager: kubectl-client-side-apply
+ Operation: Update
+ Time: 2024-10-25T09:52:37Z
+ API Version: autoscaling.kubedb.com/v1alpha1
+ Fields Type: FieldsV1
+ fieldsV1:
+ f:metadata:
+ f:ownerReferences:
+ .:
+ k:{"uid":"712730e8-41ef-4700-b184-825b30ecbc8c"}:
+ Manager: kubedb-autoscaler
+ Operation: Update
+ Time: 2024-10-25T09:52:37Z
+ Owner References:
+ API Version: kubedb.com/v1alpha2
+ Block Owner Deletion: true
+ Controller: true
+ Kind: Druid
+ Name: druid-cluster
+ UID: 712730e8-41ef-4700-b184-825b30ecbc8c
+ Resource Version: 226662
+ UID: 57cbd906-a9b7-4649-bfe0-304840bb60c1
+Spec:
+ Database Ref:
+ Name: druid-cluster
+ Ops Request Options:
+ Apply: IfReady
+ Storage:
+ Historicals:
+ Expansion Mode: Offline
+ Scaling Rules:
+ Applies Upto:
+ Threshold: 100pc
+ Scaling Threshold: 100
+ Trigger: On
+ Usage Threshold: 60
+ Middle Managers:
+ Expansion Mode: Offline
+ Scaling Rules:
+ Applies Upto:
+ Threshold: 100pc
+ Scaling Threshold: 100
+ Trigger: On
+ Usage Threshold: 60
+Events:
+```
+So, the `druidautoscaler` resource is created successfully.
+
+Now, for this demo, we are going to manually fill up the persistent volume to exceed the `usageThreshold` using the `dd` command to see whether storage autoscaling is working.
+
+We are autoscaling volumes for both historicals and middleManagers, so we need to fill up the persistent volumes of both node types.
+
+1. Let's exec into the historicals pod and fill the cluster volume using the following commands:
+
+```bash
+$ kubectl exec -it -n demo druid-cluster-historicals-0 -- bash
+bash-5.1$ df -h /druid/data/segments
+Filesystem Size Used Available Use% Mounted on
+/dev/longhorn/pvc-d4ef15ef-b1af-4a1f-ad25-ad9bc990a2fb 973.4M 92.0K 957.3M 0% /druid/data/segment
+
+bash-5.1$ dd if=/dev/zero of=/druid/data/segments/file.img bs=600M count=1
+1+0 records in
+1+0 records out
+629145600 bytes (600.0MB) copied, 46.709228 seconds, 12.8MB/s
+
+bash-5.1$ df -h /druid/data/segments
+Filesystem Size Used Available Use% Mounted on
+/dev/longhorn/pvc-d4ef15ef-b1af-4a1f-ad25-ad9bc990a2fb 973.4M 600.1M 357.3M 63% /druid/data/segments
+```
+
+2. Let's exec into the middleManagers pod and fill the cluster volume using the following commands:
+
+```bash
+$ kubectl exec -it -n demo druid-cluster-middleManagers-0 -- bash
+druid@druid-cluster-middleManagers-0:~$ df -h /var/druid/task
+Filesystem Size Used Available Use% Mounted on
+/dev/longhorn/pvc-2c0ef2aa-0438-4d75-9cb2-c12a176bae6a 973.4M 24.0K 957.4M 0% /var/druid/task
+druid@druid-cluster-middleManagers-0:~$ dd if=/dev/zero of=/var/druid/task/file.img bs=600M count=1
+1+0 records in
+1+0 records out
+629145600 bytes (629 MB, 600 MiB) copied, 3.39618 s, 185 MB/s
+druid@druid-cluster-middleManagers-0:~$ df -h /var/druid/task
+Filesystem Size Used Available Use% Mounted on
+/dev/longhorn/pvc-2c0ef2aa-0438-4d75-9cb2-c12a176bae6a 973.4M 600.0M 357.4M 63% /var/druid/task
+```
+
+So, from the above output we can see that the storage usage is 63% for both nodes, which exceeds the `usageThreshold` of 60%.
+
+Two `DruidOpsRequest` objects will be created, one for the historicals and one for the middleManagers, to expand the volumes of those nodes.
+Let's watch the `druidopsrequest` objects in the demo namespace to see if any are created. After some time you'll see that `druidopsrequest`s of type `VolumeExpansion` will be created based on the `scalingThreshold`.
+
+```bash
+$ watch kubectl get druidopsrequest -n demo
+NAME TYPE STATUS AGE
+druidopsrequest.ops.kubedb.com/drops-druid-cluster-gq9huj VolumeExpansion Progressing 46s
+druidopsrequest.ops.kubedb.com/drops-druid-cluster-kbw4fd VolumeExpansion Successful 4m46s
+```
+
+One of the ops requests has already succeeded. Let's wait for the other one to become successful as well.
+
+```bash
+$ kubectl get druidopsrequest -n demo
+NAME TYPE STATUS AGE
+druidopsrequest.ops.kubedb.com/drops-druid-cluster-gq9huj VolumeExpansion Successful 3m18s
+druidopsrequest.ops.kubedb.com/drops-druid-cluster-kbw4fd VolumeExpansion Successful 7m18s
+```
+
+We can see from the above output that both `DruidOpsRequest`s have succeeded. If we describe the `DruidOpsRequest`s one by one, we will get an overview of the steps that were followed to expand the volumes of the cluster.
+
+```bash
+$ kubectl describe druidopsrequest -n demo drops-druid-cluster-kbw4fd
+Name: drops-druid-cluster-kbw4fd
+Namespace: demo
+Labels: app.kubernetes.io/component=database
+ app.kubernetes.io/instance=druid-cluster
+ app.kubernetes.io/managed-by=kubedb.com
+ app.kubernetes.io/name=druids.kubedb.com
+Annotations:
+API Version: ops.kubedb.com/v1alpha1
+Kind: DruidOpsRequest
+Metadata:
+ Creation Timestamp: 2024-10-25T09:57:14Z
+ Generation: 1
+ Managed Fields:
+ API Version: ops.kubedb.com/v1alpha1
+ Fields Type: FieldsV1
+ fieldsV1:
+ f:metadata:
+ f:labels:
+ .:
+ f:app.kubernetes.io/component:
+ f:app.kubernetes.io/instance:
+ f:app.kubernetes.io/managed-by:
+ f:app.kubernetes.io/name:
+ f:ownerReferences:
+ .:
+ k:{"uid":"57cbd906-a9b7-4649-bfe0-304840bb60c1"}:
+ f:spec:
+ .:
+ f:apply:
+ f:databaseRef:
+ f:type:
+ f:volumeExpansion:
+ .:
+ f:historicals:
+ f:mode:
+ Manager: kubedb-autoscaler
+ Operation: Update
+ Time: 2024-10-25T09:57:14Z
+ API Version: ops.kubedb.com/v1alpha1
+ Fields Type: FieldsV1
+ fieldsV1:
+ f:status:
+ .:
+ f:conditions:
+ f:observedGeneration:
+ f:phase:
+ Manager: kubedb-ops-manager
+ Operation: Update
+ Subresource: status
+ Time: 2024-10-25T10:00:20Z
+ Owner References:
+ API Version: autoscaling.kubedb.com/v1alpha1
+ Block Owner Deletion: true
+ Controller: true
+ Kind: DruidAutoscaler
+ Name: druid-storage-autoscaler
+ UID: 57cbd906-a9b7-4649-bfe0-304840bb60c1
+ Resource Version: 228016
+ UID: 1fa750bb-2db3-4684-a7cf-1b3047bc07af
+Spec:
+ Apply: IfReady
+ Database Ref:
+ Name: druid-cluster
+ Type: VolumeExpansion
+ Volume Expansion:
+ Historicals: 2041405440
+ Mode: Offline
+Status:
+ Conditions:
+ Last Transition Time: 2024-10-25T09:57:14Z
+ Message: Druid ops-request has started to expand volume of druid nodes.
+ Observed Generation: 1
+ Reason: VolumeExpansion
+ Status: True
+ Type: VolumeExpansion
+ Last Transition Time: 2024-10-25T09:57:22Z
+ Message: get pet set; ConditionStatus:True
+ Observed Generation: 1
+ Status: True
+ Type: GetPetSet
+ Last Transition Time: 2024-10-25T09:57:22Z
+ Message: is pet set deleted; ConditionStatus:True
+ Observed Generation: 1
+ Status: True
+ Type: IsPetSetDeleted
+ Last Transition Time: 2024-10-25T09:57:32Z
+ Message: successfully deleted the petSets with orphan propagation policy
+ Observed Generation: 1
+ Reason: OrphanPetSetPods
+ Status: True
+ Type: OrphanPetSetPods
+ Last Transition Time: 2024-10-25T09:57:37Z
+ Message: get pod; ConditionStatus:True
+ Observed Generation: 1
+ Status: True
+ Type: GetPod
+ Last Transition Time: 2024-10-25T09:57:37Z
+ Message: is ops req patched; ConditionStatus:True
+ Observed Generation: 1
+ Status: True
+ Type: IsOpsReqPatched
+ Last Transition Time: 2024-10-25T09:57:37Z
+ Message: create pod; ConditionStatus:True
+ Observed Generation: 1
+ Status: True
+ Type: CreatePod
+ Last Transition Time: 2024-10-25T09:57:42Z
+ Message: get pvc; ConditionStatus:True
+ Observed Generation: 1
+ Status: True
+ Type: GetPvc
+ Last Transition Time: 2024-10-25T09:57:42Z
+ Message: is pvc patched; ConditionStatus:True
+ Observed Generation: 1
+ Status: True
+ Type: IsPvcPatched
+ Last Transition Time: 2024-10-25T09:59:27Z
+ Message: compare storage; ConditionStatus:True
+ Observed Generation: 1
+ Status: True
+ Type: CompareStorage
+ Last Transition Time: 2024-10-25T09:59:27Z
+ Message: create; ConditionStatus:True
+ Observed Generation: 1
+ Status: True
+ Type: Create
+ Last Transition Time: 2024-10-25T09:59:35Z
+ Message: is druid running; ConditionStatus:False
+ Observed Generation: 1
+ Status: False
+ Type: IsDruidRunning
+ Last Transition Time: 2024-10-25T09:59:57Z
+ Message: successfully updated historicals node PVC sizes
+ Observed Generation: 1
+ Reason: UpdateHistoricalsNodePVCs
+ Status: True
+ Type: UpdateHistoricalsNodePVCs
+ Last Transition Time: 2024-10-25T10:00:15Z
+ Message: successfully reconciled the Druid resources
+ Observed Generation: 1
+ Reason: UpdatePetSets
+ Status: True
+ Type: UpdatePetSets
+ Last Transition Time: 2024-10-25T10:00:20Z
+ Message: PetSet is recreated
+ Observed Generation: 1
+ Reason: ReadyPetSets
+ Status: True
+ Type: ReadyPetSets
+ Last Transition Time: 2024-10-25T10:00:20Z
+ Message: Successfully completed volumeExpansion for Druid
+ Observed Generation: 1
+ Reason: Successful
+ Status: True
+ Type: Successful
+ Observed Generation: 1
+ Phase: Successful
+Events:
+ Type Reason Age From Message
+ ---- ------ ---- ---- -------
+ Normal Starting 8m29s KubeDB Ops-manager Operator Start processing for DruidOpsRequest: demo/drops-druid-cluster-kbw4fd
+ Normal Starting 8m29s KubeDB Ops-manager Operator Pausing Druid databse: demo/druid-cluster
+ Normal Successful 8m29s KubeDB Ops-manager Operator Successfully paused Druid database: demo/druid-cluster for DruidOpsRequest: drops-druid-cluster-kbw4fd
+ Warning get pet set; ConditionStatus:True 8m21s KubeDB Ops-manager Operator get pet set; ConditionStatus:True
+ Warning is pet set deleted; ConditionStatus:True 8m21s KubeDB Ops-manager Operator is pet set deleted; ConditionStatus:True
+ Warning get pet set; ConditionStatus:True 8m16s KubeDB Ops-manager Operator get pet set; ConditionStatus:True
+ Normal OrphanPetSetPods 8m11s KubeDB Ops-manager Operator successfully deleted the petSets with orphan propagation policy
+ Warning get pod; ConditionStatus:True 8m6s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning is ops req patched; ConditionStatus:True 8m6s KubeDB Ops-manager Operator is ops req patched; ConditionStatus:True
+ Warning create pod; ConditionStatus:True 8m6s KubeDB Ops-manager Operator create pod; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 8m1s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 8m1s KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning is pvc patched; ConditionStatus:True 8m1s KubeDB Ops-manager Operator is pvc patched; ConditionStatus:True
+ Warning compare storage; ConditionStatus:False 8m1s KubeDB Ops-manager Operator compare storage; ConditionStatus:False
+ Warning get pod; ConditionStatus:True 7m56s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 7m56s KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 7m51s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 7m51s KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 7m46s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 7m46s KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 7m41s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 7m41s KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 7m36s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 7m36s KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 7m31s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 7m31s KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 7m26s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 7m26s KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 7m21s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 7m21s KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 7m16s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 7m16s KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 7m11s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 7m11s KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 7m6s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 7m6s KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 7m1s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 7m1s KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 6m56s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 6m56s KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 6m51s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 6m51s KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 6m46s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 6m46s KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 6m41s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 6m41s KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 6m36s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 6m36s KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 6m31s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 6m31s KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 6m26s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 6m26s KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 6m21s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 6m21s KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 6m16s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 6m16s KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning compare storage; ConditionStatus:True 6m16s KubeDB Ops-manager Operator compare storage; ConditionStatus:True
+ Warning create; ConditionStatus:True 6m16s KubeDB Ops-manager Operator create; ConditionStatus:True
+ Warning is ops req patched; ConditionStatus:True 6m16s KubeDB Ops-manager Operator is ops req patched; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 6m11s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning is druid running; ConditionStatus:False 6m8s KubeDB Ops-manager Operator is druid running; ConditionStatus:False
+ Warning get pod; ConditionStatus:True 6m6s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 6m1s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 5m56s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 5m51s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Normal UpdateHistoricalsNodePVCs 5m46s KubeDB Ops-manager Operator successfully updated historicals node PVC sizes
+ Normal UpdatePetSets 5m28s KubeDB Ops-manager Operator successfully reconciled the Druid resources
+ Warning get pet set; ConditionStatus:True 5m23s KubeDB Ops-manager Operator get pet set; ConditionStatus:True
+ Normal ReadyPetSets 5m23s KubeDB Ops-manager Operator PetSet is recreated
+ Normal Starting 5m23s KubeDB Ops-manager Operator Resuming Druid database: demo/druid-cluster
+ Normal Successful 5m23s KubeDB Ops-manager Operator Successfully resumed Druid database: demo/druid-cluster for DruidOpsRequest: drops-druid-cluster-kbw4fd
+ Normal UpdatePetSets 5m18s KubeDB Ops-manager Operator successfully reconciled the Druid resources
+ Normal UpdatePetSets 5m8s KubeDB Ops-manager Operator successfully reconciled the Druid resources
+ Normal UpdatePetSets 4m57s KubeDB Ops-manager Operator successfully reconciled the Druid resources
+```
+
+```bash
+$ kubectl describe druidopsrequest -n demo drops-druid-cluster-gq9huj
+Name: drops-druid-cluster-gq9huj
+Namespace: demo
+Labels: app.kubernetes.io/component=database
+ app.kubernetes.io/instance=druid-cluster
+ app.kubernetes.io/managed-by=kubedb.com
+ app.kubernetes.io/name=druids.kubedb.com
+Annotations:
+API Version: ops.kubedb.com/v1alpha1
+Kind: DruidOpsRequest
+Metadata:
+ Creation Timestamp: 2024-10-25T10:01:14Z
+ Generation: 1
+ Managed Fields:
+ API Version: ops.kubedb.com/v1alpha1
+ Fields Type: FieldsV1
+ fieldsV1:
+ f:metadata:
+ f:labels:
+ .:
+ f:app.kubernetes.io/component:
+ f:app.kubernetes.io/instance:
+ f:app.kubernetes.io/managed-by:
+ f:app.kubernetes.io/name:
+ f:ownerReferences:
+ .:
+ k:{"uid":"57cbd906-a9b7-4649-bfe0-304840bb60c1"}:
+ f:spec:
+ .:
+ f:apply:
+ f:databaseRef:
+ f:type:
+ f:volumeExpansion:
+ .:
+ f:middleManagers:
+ f:mode:
+ Manager: kubedb-autoscaler
+ Operation: Update
+ Time: 2024-10-25T10:01:14Z
+ API Version: ops.kubedb.com/v1alpha1
+ Fields Type: FieldsV1
+ fieldsV1:
+ f:status:
+ .:
+ f:conditions:
+ f:observedGeneration:
+ f:phase:
+ Manager: kubedb-ops-manager
+ Operation: Update
+ Subresource: status
+ Time: 2024-10-25T10:04:12Z
+ Owner References:
+ API Version: autoscaling.kubedb.com/v1alpha1
+ Block Owner Deletion: true
+ Controller: true
+ Kind: DruidAutoscaler
+ Name: druid-storage-autoscaler
+ UID: 57cbd906-a9b7-4649-bfe0-304840bb60c1
+ Resource Version: 228783
+ UID: 3b97380c-e867-467f-b366-4b50c7cd7d6d
+Spec:
+ Apply: IfReady
+ Database Ref:
+ Name: druid-cluster
+ Type: VolumeExpansion
+ Volume Expansion:
+ Middle Managers: 2041405440
+ Mode: Offline
+Status:
+ Conditions:
+ Last Transition Time: 2024-10-25T10:01:14Z
+ Message: Druid ops-request has started to expand volume of druid nodes.
+ Observed Generation: 1
+ Reason: VolumeExpansion
+ Status: True
+ Type: VolumeExpansion
+ Last Transition Time: 2024-10-25T10:01:22Z
+ Message: get pet set; ConditionStatus:True
+ Observed Generation: 1
+ Status: True
+ Type: GetPetSet
+ Last Transition Time: 2024-10-25T10:01:22Z
+ Message: is pet set deleted; ConditionStatus:True
+ Observed Generation: 1
+ Status: True
+ Type: IsPetSetDeleted
+ Last Transition Time: 2024-10-25T10:01:32Z
+ Message: successfully deleted the petSets with orphan propagation policy
+ Observed Generation: 1
+ Reason: OrphanPetSetPods
+ Status: True
+ Type: OrphanPetSetPods
+ Last Transition Time: 2024-10-25T10:01:37Z
+ Message: get pod; ConditionStatus:True
+ Observed Generation: 1
+ Status: True
+ Type: GetPod
+ Last Transition Time: 2024-10-25T10:01:37Z
+ Message: is ops req patched; ConditionStatus:True
+ Observed Generation: 1
+ Status: True
+ Type: IsOpsReqPatched
+ Last Transition Time: 2024-10-25T10:01:37Z
+ Message: create pod; ConditionStatus:True
+ Observed Generation: 1
+ Status: True
+ Type: CreatePod
+ Last Transition Time: 2024-10-25T10:01:42Z
+ Message: get pvc; ConditionStatus:True
+ Observed Generation: 1
+ Status: True
+ Type: GetPvc
+ Last Transition Time: 2024-10-25T10:01:42Z
+ Message: is pvc patched; ConditionStatus:True
+ Observed Generation: 1
+ Status: True
+ Type: IsPvcPatched
+ Last Transition Time: 2024-10-25T10:03:32Z
+ Message: compare storage; ConditionStatus:True
+ Observed Generation: 1
+ Status: True
+ Type: CompareStorage
+ Last Transition Time: 2024-10-25T10:03:32Z
+ Message: create; ConditionStatus:True
+ Observed Generation: 1
+ Status: True
+ Type: Create
+ Last Transition Time: 2024-10-25T10:03:40Z
+ Message: is druid running; ConditionStatus:False
+ Observed Generation: 1
+ Status: False
+ Type: IsDruidRunning
+ Last Transition Time: 2024-10-25T10:03:52Z
+ Message: successfully updated middleManagers node PVC sizes
+ Observed Generation: 1
+ Reason: UpdateMiddleManagersNodePVCs
+ Status: True
+ Type: UpdateMiddleManagersNodePVCs
+ Last Transition Time: 2024-10-25T10:04:07Z
+ Message: successfully reconciled the Druid resources
+ Observed Generation: 1
+ Reason: UpdatePetSets
+ Status: True
+ Type: UpdatePetSets
+ Last Transition Time: 2024-10-25T10:04:12Z
+ Message: PetSet is recreated
+ Observed Generation: 1
+ Reason: ReadyPetSets
+ Status: True
+ Type: ReadyPetSets
+ Last Transition Time: 2024-10-25T10:04:12Z
+ Message: Successfully completed volumeExpansion for Druid
+ Observed Generation: 1
+ Reason: Successful
+ Status: True
+ Type: Successful
+ Observed Generation: 1
+ Phase: Successful
+Events:
+ Type Reason Age From Message
+ ---- ------ ---- ---- -------
+ Normal Starting 5m33s KubeDB Ops-manager Operator Start processing for DruidOpsRequest: demo/drops-druid-cluster-gq9huj
+ Normal Starting 5m33s KubeDB Ops-manager Operator Pausing Druid databse: demo/druid-cluster
+ Normal Successful 5m33s KubeDB Ops-manager Operator Successfully paused Druid database: demo/druid-cluster for DruidOpsRequest: drops-druid-cluster-gq9huj
+ Warning get pet set; ConditionStatus:True 5m25s KubeDB Ops-manager Operator get pet set; ConditionStatus:True
+ Warning is pet set deleted; ConditionStatus:True 5m25s KubeDB Ops-manager Operator is pet set deleted; ConditionStatus:True
+ Warning get pet set; ConditionStatus:True 5m20s KubeDB Ops-manager Operator get pet set; ConditionStatus:True
+ Normal OrphanPetSetPods 5m15s KubeDB Ops-manager Operator successfully deleted the petSets with orphan propagation policy
+ Warning get pod; ConditionStatus:True 5m10s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning is ops req patched; ConditionStatus:True 5m10s KubeDB Ops-manager Operator is ops req patched; ConditionStatus:True
+ Warning create pod; ConditionStatus:True 5m10s KubeDB Ops-manager Operator create pod; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 5m5s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 5m5s KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning is pvc patched; ConditionStatus:True 5m5s KubeDB Ops-manager Operator is pvc patched; ConditionStatus:True
+ Warning compare storage; ConditionStatus:False 5m5s KubeDB Ops-manager Operator compare storage; ConditionStatus:False
+ Warning get pod; ConditionStatus:True 5m KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 5m KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 4m55s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 4m55s KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 4m50s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 4m50s KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 4m45s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 4m45s KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 4m40s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 4m40s KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 4m35s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 4m35s KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 4m30s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 4m30s KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 4m25s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 4m25s KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 4m20s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 4m20s KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 4m15s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 4m15s KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 4m10s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 4m10s KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 4m5s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 4m5s KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 4m KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 4m KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 3m55s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 3m55s KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 3m50s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 3m50s KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 3m45s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 3m45s KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 3m40s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 3m40s KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 3m35s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 3m35s KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 3m30s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 3m30s KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 3m25s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 3m25s KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 3m20s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 3m20s KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 3m15s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 3m15s KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning compare storage; ConditionStatus:True 3m15s KubeDB Ops-manager Operator compare storage; ConditionStatus:True
+ Warning create; ConditionStatus:True 3m15s KubeDB Ops-manager Operator create; ConditionStatus:True
+ Warning is ops req patched; ConditionStatus:True 3m15s KubeDB Ops-manager Operator is ops req patched; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 3m10s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning is druid running; ConditionStatus:False 3m7s KubeDB Ops-manager Operator is druid running; ConditionStatus:False
+ Warning get pod; ConditionStatus:True 3m5s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 3m KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Normal UpdateMiddleManagersNodePVCs 2m55s KubeDB Ops-manager Operator successfully updated middleManagers node PVC sizes
+ Normal UpdatePetSets 2m40s KubeDB Ops-manager Operator successfully reconciled the Druid resources
+ Warning get pet set; ConditionStatus:True 2m35s KubeDB Ops-manager Operator get pet set; ConditionStatus:True
+ Normal ReadyPetSets 2m35s KubeDB Ops-manager Operator PetSet is recreated
+ Normal Starting 2m35s KubeDB Ops-manager Operator Resuming Druid database: demo/druid-cluster
+ Normal Successful 2m35s KubeDB Ops-manager Operator Successfully resumed Druid database: demo/druid-cluster for DruidOpsRequest: drops-druid-cluster-gq9huj
+```
+
+Now, let's verify from the `PetSet` and the `Persistent Volume` whether the volume of the topology cluster has expanded to meet the desired state:
+
+```bash
+$ kubectl get petset -n demo druid-cluster-historicals -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
+"2041405440"
+$ kubectl get petset -n demo druid-cluster-middleManagers -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
+"2041405440"
+$ kubectl get pv -n demo
+NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS VOLUMEATTRIBUTESCLASS REASON AGE
+pvc-2c0ef2aa-0438-4d75-9cb2-c12a176bae6a 1948Mi RWO Delete Bound demo/druid-cluster-base-task-dir-druid-cluster-middlemanagers-0 longhorn 19m
+pvc-5f4cea5f-e0c8-4339-b67c-9cb8b02ba49d 1948Mi RWO Delete Bound demo/druid-cluster-segment-cache-druid-cluster-historicals-0 longhorn 19m
+```
+
+The above output verifies that we have successfully autoscaled the volume of the Druid topology cluster for both historicals and middleManagers.
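+
+If you want to double-check, you can also read the capacity directly from the PVCs, using the claim names shown in the `kubectl get pv` output above. Each should now report the expanded size (about `1948Mi` in this example):
+
+```bash
+$ kubectl get pvc -n demo druid-cluster-segment-cache-druid-cluster-historicals-0 -o jsonpath='{.status.capacity.storage}'
+$ kubectl get pvc -n demo druid-cluster-base-task-dir-druid-cluster-middlemanagers-0 -o jsonpath='{.status.capacity.storage}'
+```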
+
+## Cleaning Up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete druidopsrequests -n demo drops-druid-cluster-gq9huj drops-druid-cluster-kbw4fd
+kubectl delete druidautoscaler -n demo druid-storage-autoscaler
+kubectl delete dr -n demo druid-cluster
+```
+
+## Next Steps
+
+- Detail concepts of [Druid object](/docs/guides/druid/concepts/druid.md).
+- Different Druid topology clustering modes [here](/docs/guides/druid/clustering/_index.md).
+- Monitor your Druid database with KubeDB using [out-of-the-box Prometheus operator](/docs/guides/druid/monitoring/using-prometheus-operator.md).
+
+[//]: # (- Monitor your Druid database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/guides/druid/monitoring/using-builtin-prometheus.md).)
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).
diff --git a/docs/guides/druid/autoscaler/storage/images/storage-autoscaling.png b/docs/guides/druid/autoscaler/storage/images/storage-autoscaling.png
new file mode 100644
index 0000000000..a48b564e1c
Binary files /dev/null and b/docs/guides/druid/autoscaler/storage/images/storage-autoscaling.png differ
diff --git a/docs/guides/druid/autoscaler/storage/overview.md b/docs/guides/druid/autoscaler/storage/overview.md
new file mode 100644
index 0000000000..607e246032
--- /dev/null
+++ b/docs/guides/druid/autoscaler/storage/overview.md
@@ -0,0 +1,55 @@
+---
+title: Druid Storage Autoscaling Overview
+menu:
+ docs_{{ .version }}:
+ identifier: guides-druid-autoscaler-storage-overview
+ name: Overview
+ parent: guides-druid-autoscaler-storage
+ weight: 10
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Druid Storage Autoscaling
+
+This guide will give an overview of how the KubeDB Autoscaler operator autoscales the database storage using the `DruidAutoscaler` CRD.
+
+## Before You Begin
+
+- You should be familiar with the following `KubeDB` concepts:
+ - [Druid](/docs/guides/druid/concepts/druid.md)
+ - [DruidAutoscaler](/docs/guides/druid/concepts/druidautoscaler.md)
+ - [DruidOpsRequest](/docs/guides/druid/concepts/druidopsrequest.md)
+
+## How Storage Autoscaling Works
+
+The following diagram shows how the KubeDB Autoscaler operator autoscales the storage of `Druid` cluster components. Open the image in a new tab to see the enlarged version.
+
+![Storage Autoscaling process of Druid](/docs/guides/druid/autoscaler/storage/images/storage-autoscaling.png)
+
+The Auto Scaling process consists of the following steps:
+
+1. At first, a user creates a `Druid` Custom Resource (CR).
+
+2. `KubeDB` Provisioner operator watches the `Druid` CR.
+
+3. When the operator finds a `Druid` CR, it creates the required number of `PetSets` and related resources such as secrets, services, etc.
+
+4. Each PetSet creates a Persistent Volume according to the Volume Claim Template provided in the PetSet configuration.
+
+5. Then, in order to set up storage autoscaling of the Druid data nodes (i.e. Historicals and MiddleManagers) of the `Druid` cluster, the user creates a `DruidAutoscaler` CRO with the desired configuration.
+
+6. `KubeDB` Autoscaler operator watches the `DruidAutoscaler` CRO.
+
+7. `KubeDB` Autoscaler operator continuously watches the persistent volumes of the cluster to check whether their usage exceeds the specified threshold. If it does, the operator creates a `DruidOpsRequest` to expand the storage of the database.
+
+8. `KubeDB` Ops-manager operator watches the `DruidOpsRequest` CRO.
+
+9. Then the `KubeDB` Ops-manager operator will expand the storage of the cluster component as specified on the `DruidOpsRequest` CRO.
+
+In the next docs, we are going to show a step-by-step guide on autoscaling the storage of various Druid cluster components using the `DruidAutoscaler` CRD.
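+
+For reference, the storage section of a `DruidAutoscaler` that drives this workflow looks like the following sketch. The field values are only illustrative; the complete manifest used in the next guide is shipped alongside these docs:
+
+```yaml
+apiVersion: autoscaling.kubedb.com/v1alpha1
+kind: DruidAutoscaler
+metadata:
+  name: druid-storage-autoscaler
+  namespace: demo
+spec:
+  databaseRef:
+    name: druid-cluster
+  storage:
+    historicals:
+      expansionMode: "Offline"   # volume expansion mode used by the resulting DruidOpsRequest
+      trigger: "On"              # enable storage autoscaling for this node type
+      usageThreshold: 60         # trigger expansion when usage crosses this percentage
+      scalingThreshold: 100      # grow the volume by this percentage once triggered
+    middleManagers:
+      expansionMode: "Offline"
+      trigger: "On"
+      usageThreshold: 60
+      scalingThreshold: 100
+```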
diff --git a/docs/guides/druid/autoscaler/storage/yamls/deep-storage-config.yaml b/docs/guides/druid/autoscaler/storage/yamls/deep-storage-config.yaml
new file mode 100644
index 0000000000..3612595828
--- /dev/null
+++ b/docs/guides/druid/autoscaler/storage/yamls/deep-storage-config.yaml
@@ -0,0 +1,16 @@
+apiVersion: v1
+kind: Secret
+metadata:
+ name: deep-storage-config
+ namespace: demo
+stringData:
+ druid.storage.type: "s3"
+ druid.storage.bucket: "druid"
+ druid.storage.baseKey: "druid/segments"
+ druid.s3.accessKey: "minio"
+ druid.s3.secretKey: "minio123"
+ druid.s3.protocol: "http"
+ druid.s3.enablePathStyleAccess: "true"
+ druid.s3.endpoint.signingRegion: "us-east-1"
+ druid.s3.endpoint.url: "http://myminio-hl.demo.svc.cluster.local:9000/"
+
diff --git a/docs/guides/druid/autoscaler/storage/yamls/druid-cluster.yaml b/docs/guides/druid/autoscaler/storage/yamls/druid-cluster.yaml
new file mode 100644
index 0000000000..5415590a2b
--- /dev/null
+++ b/docs/guides/druid/autoscaler/storage/yamls/druid-cluster.yaml
@@ -0,0 +1,40 @@
+apiVersion: kubedb.com/v1alpha2
+kind: Druid
+metadata:
+ name: druid-cluster
+ namespace: demo
+spec:
+ version: 28.0.1
+ deepStorage:
+ type: s3
+ configSecret:
+ name: deep-storage-config
+ topology:
+ historicals:
+ replicas: 1
+ storage:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
+ storageType: Durable
+ middleManagers:
+ replicas: 1
+ storage:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
+ storageType: Durable
+ routers:
+ replicas: 1
+ deletionPolicy: Delete
+ monitor:
+ agent: prometheus.io/operator
+ prometheus:
+ serviceMonitor:
+ labels:
+ release: prometheus
+ interval: 10s
diff --git a/docs/guides/druid/autoscaler/storage/yamls/druid-storage-autoscaler.yaml b/docs/guides/druid/autoscaler/storage/yamls/druid-storage-autoscaler.yaml
new file mode 100644
index 0000000000..d1a2f5c438
--- /dev/null
+++ b/docs/guides/druid/autoscaler/storage/yamls/druid-storage-autoscaler.yaml
@@ -0,0 +1,19 @@
+apiVersion: autoscaling.kubedb.com/v1alpha1
+kind: DruidAutoscaler
+metadata:
+ name: druid-storage-autoscaler
+ namespace: demo
+spec:
+ databaseRef:
+ name: druid-cluster
+ storage:
+ historicals:
+ expansionMode: "Offline"
+ trigger: "On"
+ usageThreshold: 60
+ scalingThreshold: 100
+ middleManagers:
+ expansionMode: "Offline"
+ trigger: "On"
+ usageThreshold: 60
+ scalingThreshold: 100
diff --git a/docs/guides/druid/backup/_index.md b/docs/guides/druid/backup/_index.md
index bb47dcc106..31146d6c14 100644
--- a/docs/guides/druid/backup/_index.md
+++ b/docs/guides/druid/backup/_index.md
@@ -5,6 +5,6 @@ menu:
identifier: guides-druid-backup
name: Backup & Restore
parent: guides-druid
- weight: 40
+ weight: 50
menu_name: docs_{{ .version }}
---
\ No newline at end of file
diff --git a/docs/guides/druid/backup/application-level/index.md b/docs/guides/druid/backup/application-level/index.md
index 4c6394ca81..627c865406 100644
--- a/docs/guides/druid/backup/application-level/index.md
+++ b/docs/guides/druid/backup/application-level/index.md
@@ -20,7 +20,7 @@ This guide will give you how you can take application-level backup and restore y
## Before You Begin
- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using `Minikube` or `Kind`.
-- Install `KubeDB` in your cluster following the steps [here](/docs/setup/README.md).
+- Now, install the KubeDB cli on your workstation and the KubeDB operator in your cluster following the steps [here](/docs/setup/README.md). Make sure to include the flags `--set global.featureGates.Druid=true` and `--set global.featureGates.ZooKeeper=true` in your helm command to enable the **Druid CRD** and the **ZooKeeper CRD**, since Druid depends on ZooKeeper as an external dependency.
- Install `KubeStash` in your cluster following the steps [here](https://kubestash.com/docs/latest/setup/install/kubestash).
- Install KubeStash `kubectl` plugin following the steps [here](https://kubestash.com/docs/latest/setup/install/kubectl-plugin/).
- If you are not familiar with how KubeStash backup and restore Druid databases, please check the following guide [here](/docs/guides/druid/backup/overview/index.md).
@@ -52,7 +52,6 @@ This section will demonstrate how to take application-level backup of a `Druid`
## Deploy Sample Druid Database
-
**Create External Dependency (Deep Storage):**
One of the external dependency of Druid is deep storage where the segments are stored. It is a storage mechanism that Apache Druid does not provide. **Amazon S3**, **Google Cloud Storage**, or **Azure Blob Storage**, **S3-compatible storage** (like **Minio**), or **HDFS** are generally convenient options for deep storage.
diff --git a/docs/guides/druid/backup/auto-backup/index.md b/docs/guides/druid/backup/auto-backup/index.md
index 3c16e8bf33..d2ba2b5bbc 100644
--- a/docs/guides/druid/backup/auto-backup/index.md
+++ b/docs/guides/druid/backup/auto-backup/index.md
@@ -20,7 +20,7 @@ In this tutorial, we are going to show how you can configure a backup blueprint
## Before You Begin
- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using `Minikube` or `Kind`.
-- Install `KubeDB` in your cluster following the steps [here](/docs/setup/README.md).
+- Now, install the KubeDB cli on your workstation and the KubeDB operator in your cluster following the steps [here](/docs/setup/README.md). Make sure to include the flags `--set global.featureGates.Druid=true` and `--set global.featureGates.ZooKeeper=true` in your helm command to enable the **Druid CRD** and the **ZooKeeper CRD**, since Druid depends on ZooKeeper as an external dependency.
- Install `KubeStash` in your cluster following the steps [here](https://kubestash.com/docs/latest/setup/install/kubestash).
- Install KubeStash `kubectl` plugin following the steps [here](https://kubestash.com/docs/latest/setup/install/kubectl-plugin/).
- If you are not familiar with how KubeStash backup and restore Druid databases, please check the following guide [here](/docs/guides/druid/backup/overview/index.md).
diff --git a/docs/guides/druid/backup/cross-ns-dependencies/index.md b/docs/guides/druid/backup/cross-ns-dependencies/index.md
index a3f200ec6f..9cbb5a066e 100644
--- a/docs/guides/druid/backup/cross-ns-dependencies/index.md
+++ b/docs/guides/druid/backup/cross-ns-dependencies/index.md
@@ -22,7 +22,7 @@ This guide will give you how you can take [Application Level Backup](https://git
## Before You Begin
- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using `Minikube` or `Kind`.
-- Install `KubeDB` in your cluster following the steps [here](/docs/setup/README.md).
+- Now, install the KubeDB cli on your workstation and the KubeDB operator in your cluster following the steps [here](/docs/setup/README.md). Make sure to include the flags `--set global.featureGates.Druid=true` and `--set global.featureGates.ZooKeeper=true` in your helm command to enable the **Druid CRD** and the **ZooKeeper CRD**, since Druid depends on ZooKeeper as an external dependency.
- Install `KubeStash` in your cluster following the steps [here](https://kubestash.com/docs/latest/setup/install/kubestash).
- Install KubeStash `kubectl` plugin following the steps [here](https://kubestash.com/docs/latest/setup/install/kubectl-plugin/).
- If you are not familiar with how KubeStash backup and restore Druid databases, please check the following guide [here](/docs/guides/druid/backup/overview/index.md).
diff --git a/docs/guides/druid/backup/logical/index.md b/docs/guides/druid/backup/logical/index.md
index dbce094ab8..e6e1d06447 100644
--- a/docs/guides/druid/backup/logical/index.md
+++ b/docs/guides/druid/backup/logical/index.md
@@ -20,7 +20,7 @@ This guide will give you how you can take backup and restore your `Druid` databa
## Before You Begin
- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using `Minikube` or `Kind`.
-- Install `KubeDB` in your cluster following the steps [here](/docs/setup/README.md).
+- Now, install the KubeDB cli on your workstation and the KubeDB operator in your cluster following the steps [here](/docs/setup/README.md). Make sure to include the flags `--set global.featureGates.Druid=true` and `--set global.featureGates.ZooKeeper=true` in your helm command to enable the **Druid CRD** and the **ZooKeeper CRD**, since Druid depends on ZooKeeper as an external dependency.
- Install `KubeStash` in your cluster following the steps [here](https://kubestash.com/docs/latest/setup/install/kubestash).
- Install KubeStash `kubectl` plugin following the steps [here](https://kubestash.com/docs/latest/setup/install/kubectl-plugin/).
- If you are not familiar with how KubeStash backup and restore Druid databases, please check the following guide [here](/docs/guides/druid/backup/overview/index.md).
diff --git a/docs/guides/druid/backup/overview/index.md b/docs/guides/druid/backup/overview/index.md
index c8adef63e6..723754834f 100644
--- a/docs/guides/druid/backup/overview/index.md
+++ b/docs/guides/druid/backup/overview/index.md
@@ -2,7 +2,7 @@
title: Backup & Restore Druid Overview
menu:
docs_{{ .version }}:
- identifier: guides-druid-backup-overview
+ identifier: guides-druid-backup-guide
name: Overview
parent: guides-druid-backup
weight: 10
diff --git a/docs/guides/druid/clustering/_index.md b/docs/guides/druid/clustering/_index.md
new file mode 100644
index 0000000000..20b929a8a2
--- /dev/null
+++ b/docs/guides/druid/clustering/_index.md
@@ -0,0 +1,10 @@
+---
+title: Druid Clustering
+menu:
+ docs_{{ .version }}:
+ identifier: guides-druid-clustering
+ name: Clustering
+ parent: guides-druid
+ weight: 30
+menu_name: docs_{{ .version }}
+---
diff --git a/docs/guides/druid/clustering/guide/index.md b/docs/guides/druid/clustering/guide/index.md
new file mode 100644
index 0000000000..b062348a72
--- /dev/null
+++ b/docs/guides/druid/clustering/guide/index.md
@@ -0,0 +1,925 @@
+---
+title: Druid Topology Cluster Guide
+menu:
+ docs_{{ .version }}:
+ identifier: guides-druid-clustering-guide
+ name: Deploy Druid Cluster
+ parent: guides-druid-clustering
+ weight: 20
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# KubeDB - Druid Cluster
+
+This tutorial will show you how to use KubeDB to provision a Druid Cluster.
+
+## Before You Begin
+
+Before proceeding:
+
+- Read [druid topology cluster overview](/docs/guides/druid/clustering/overview/index.md) to get a basic idea about the design and architecture of Druid.
+
+- You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Now, install the KubeDB cli on your workstation and the KubeDB operator in your cluster following the steps [here](/docs/setup/README.md). Make sure to include the flags `--set global.featureGates.Druid=true` and `--set global.featureGates.ZooKeeper=true` in your helm command to enable the **Druid CRD** and the **ZooKeeper CRD**, since Druid depends on ZooKeeper as an external dependency.
+
+- To keep things isolated, this tutorial uses a separate namespace called `demo` throughout this tutorial. Run the following command to prepare your cluster for this tutorial:
+
+ ```bash
+ $ kubectl create ns demo
+ namespace/demo created
+ ```
+
+> Note: The yaml files used in this tutorial are stored in [docs/guides/druid/clustering/guide/yamls](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/druid/clustering/guide/yamls) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Create External Dependency (Deep Storage)
+
+Before proceeding further, we need to prepare deep storage, which is one of the external dependencies of Druid and is used for storing the segments. It is a storage mechanism that Apache Druid does not provide. **Amazon S3**, **Google Cloud Storage**, **Azure Blob Storage**, **S3-compatible storage** (like **Minio**), or **HDFS** are generally convenient options for deep storage.
+
+In this tutorial, we will run a `minio-server` as deep storage in our local `kind` cluster using `minio-operator` and create a bucket named `druid` in it, which the deployed druid database will use.
+
+```bash
+
+$ helm repo add minio https://operator.min.io/
+$ helm repo update minio
+$ helm upgrade --install --namespace "minio-operator" --create-namespace "minio-operator" minio/operator --set operator.replicaCount=1
+
+$ helm upgrade --install --namespace "demo" --create-namespace druid-minio minio/tenant \
+--set tenant.pools[0].servers=1 \
+--set tenant.pools[0].volumesPerServer=1 \
+--set tenant.pools[0].size=1Gi \
+--set tenant.certificate.requestAutoCert=false \
+--set tenant.buckets[0].name="druid" \
+--set tenant.pools[0].name="default"
+
+```
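+
+Before moving on, you may want to confirm that the MinIO tenant is up. A quick, non-destructive check (exact pod and service names can vary with the chart version you install) is to list what is running in the `demo` namespace:
+
+```bash
+$ kubectl get pods,svc -n demo
+```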
+
+Now we need to create a `Secret` named `deep-storage-config`. It contains the connection information that the Druid database will use to connect to the deep storage.
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+ name: deep-storage-config
+ namespace: demo
+stringData:
+ druid.storage.type: "s3"
+ druid.storage.bucket: "druid"
+ druid.storage.baseKey: "druid/segments"
+ druid.s3.accessKey: "minio"
+ druid.s3.secretKey: "minio123"
+ druid.s3.protocol: "http"
+ druid.s3.enablePathStyleAccess: "true"
+ druid.s3.endpoint.signingRegion: "us-east-1"
+ druid.s3.endpoint.url: "http://myminio-hl.demo.svc.cluster.local:9000/"
+```
+
+Let’s create the `deep-storage-config` Secret shown above:
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/druid/clustering/guide/yamls/deep-storage-config.yaml
+secret/deep-storage-config created
+```
+
+## Deploy Druid Cluster
+
+The following is an example `Druid` object which creates a Druid cluster with six types of nodes (coordinators, overlords, brokers, routers, historicals, and middleManagers), each with one replica.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Druid
+metadata:
+ name: druid-cluster
+ namespace: demo
+spec:
+ version: 28.0.1
+ deepStorage:
+ type: s3
+ configSecret:
+ name: deep-storage-config
+ topology:
+ routers:
+ replicas: 1
+ deletionPolicy: Delete
+```
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/druid/clustering/guide/yamls/druid-cluster.yaml
+druid.kubedb.com/druid-cluster created
+```
+
+KubeDB operator watches for `Druid` objects using the Kubernetes API. When a `Druid` object is created, the KubeDB operator will create new PetSets and Services with the matching Druid object name. The KubeDB operator will also create a governing service for the PetSets with the name `{Druid object name}-pods` (here, `druid-cluster-pods`).
+
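+While the cluster is being provisioned, you can watch the `Druid` object until its phase becomes `Ready`, for example:
+
+```bash
+$ kubectl get druid -n demo druid-cluster -w
+```
+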
+```bash
+$ kubectl describe druid -n demo druid-cluster
+Name: druid-cluster
+Namespace: demo
+Labels:
+Annotations:
+API Version: kubedb.com/v1alpha2
+Kind: Druid
+Metadata:
+ Creation Timestamp: 2024-10-21T06:01:32Z
+ Finalizers:
+ kubedb.com/druid
+ Generation: 1
+ Managed Fields:
+ API Version: kubedb.com/v1alpha2
+ Fields Type: FieldsV1
+ fieldsV1:
+ f:metadata:
+ f:finalizers:
+ .:
+ v:"kubedb.com/druid":
+ Manager: druid-operator
+ Operation: Update
+ Time: 2024-10-21T06:01:32Z
+ API Version: kubedb.com/v1alpha2
+ Fields Type: FieldsV1
+ fieldsV1:
+ f:metadata:
+ f:annotations:
+ .:
+ f:kubectl.kubernetes.io/last-applied-configuration:
+ f:spec:
+ .:
+ f:deepStorage:
+ .:
+ f:configSecret:
+ f:type:
+ f:deletionPolicy:
+ f:healthChecker:
+ .:
+ f:failureThreshold:
+ f:periodSeconds:
+ f:timeoutSeconds:
+ f:topology:
+ .:
+ f:routers:
+ .:
+ f:replicas:
+ f:version:
+ Manager: kubectl-client-side-apply
+ Operation: Update
+ Time: 2024-10-21T06:01:32Z
+ API Version: kubedb.com/v1alpha2
+ Fields Type: FieldsV1
+ fieldsV1:
+ f:status:
+ .:
+ f:conditions:
+ f:phase:
+ Manager: druid-operator
+ Operation: Update
+ Subresource: status
+ Time: 2024-10-21T06:04:29Z
+ Resource Version: 52093
+ UID: a2e12db2-6694-419f-ad07-2c906df5b611
+Spec:
+ Auth Secret:
+ Name: druid-cluster-admin-cred
+ Deep Storage:
+ Config Secret:
+ Name: deep-storage-config
+ Type: s3
+ Deletion Policy: Delete
+ Health Checker:
+ Failure Threshold: 3
+ Period Seconds: 30
+ Timeout Seconds: 10
+ Metadata Storage:
+ Create Tables: true
+ Linked DB: druid
+ Name: druid-cluster-mysql-metadata
+ Namespace: demo
+ Type: MySQL
+ Version: 8.0.35
+ Topology:
+ Brokers:
+ Pod Template:
+ Controller:
+ Metadata:
+ Spec:
+ Containers:
+ Name: druid
+ Resources:
+ Limits:
+ Memory: 1Gi
+ Requests:
+ Cpu: 500m
+ Memory: 1Gi
+ Security Context:
+ Allow Privilege Escalation: false
+ Capabilities:
+ Drop:
+ ALL
+ Run As Non Root: true
+ Run As User: 1000
+ Seccomp Profile:
+ Type: RuntimeDefault
+ Init Containers:
+ Name: init-druid
+ Resources:
+ Limits:
+ Memory: 512Mi
+ Requests:
+ Cpu: 200m
+ Memory: 512Mi
+ Security Context:
+ Allow Privilege Escalation: false
+ Capabilities:
+ Drop:
+ ALL
+ Run As Non Root: true
+ Run As User: 1000
+ Seccomp Profile:
+ Type: RuntimeDefault
+ Pod Placement Policy:
+ Name: default
+ Security Context:
+ Fs Group: 1000
+ Replicas: 1
+ Coordinators:
+ Pod Template:
+ Controller:
+ Metadata:
+ Spec:
+ Containers:
+ Name: druid
+ Resources:
+ Limits:
+ Memory: 1Gi
+ Requests:
+ Cpu: 500m
+ Memory: 1Gi
+ Security Context:
+ Allow Privilege Escalation: false
+ Capabilities:
+ Drop:
+ ALL
+ Run As Non Root: true
+ Run As User: 1000
+ Seccomp Profile:
+ Type: RuntimeDefault
+ Init Containers:
+ Name: init-druid
+ Resources:
+ Limits:
+ Memory: 512Mi
+ Requests:
+ Cpu: 200m
+ Memory: 512Mi
+ Security Context:
+ Allow Privilege Escalation: false
+ Capabilities:
+ Drop:
+ ALL
+ Run As Non Root: true
+ Run As User: 1000
+ Seccomp Profile:
+ Type: RuntimeDefault
+ Pod Placement Policy:
+ Name: default
+ Security Context:
+ Fs Group: 1000
+ Replicas: 1
+ Historicals:
+ Pod Template:
+ Controller:
+ Metadata:
+ Spec:
+ Containers:
+ Name: druid
+ Resources:
+ Limits:
+ Memory: 1Gi
+ Requests:
+ Cpu: 500m
+ Memory: 1Gi
+ Security Context:
+ Allow Privilege Escalation: false
+ Capabilities:
+ Drop:
+ ALL
+ Run As Non Root: true
+ Run As User: 1000
+ Seccomp Profile:
+ Type: RuntimeDefault
+ Init Containers:
+ Name: init-druid
+ Resources:
+ Limits:
+ Memory: 512Mi
+ Requests:
+ Cpu: 200m
+ Memory: 512Mi
+ Security Context:
+ Allow Privilege Escalation: false
+ Capabilities:
+ Drop:
+ ALL
+ Run As Non Root: true
+ Run As User: 1000
+ Seccomp Profile:
+ Type: RuntimeDefault
+ Pod Placement Policy:
+ Name: default
+ Security Context:
+ Fs Group: 1000
+ Replicas: 1
+ Storage:
+ Access Modes:
+ ReadWriteOnce
+ Resources:
+ Requests:
+ Storage: 1Gi
+ Storage Type: Durable
+ Middle Managers:
+ Pod Template:
+ Controller:
+ Metadata:
+ Spec:
+ Containers:
+ Name: druid
+ Resources:
+ Limits:
+ Memory: 2560Mi
+ Requests:
+ Cpu: 500m
+ Memory: 2560Mi
+ Security Context:
+ Allow Privilege Escalation: false
+ Capabilities:
+ Drop:
+ ALL
+ Run As Non Root: true
+ Run As User: 1000
+ Seccomp Profile:
+ Type: RuntimeDefault
+ Init Containers:
+ Name: init-druid
+ Resources:
+ Limits:
+ Memory: 512Mi
+ Requests:
+ Cpu: 200m
+ Memory: 512Mi
+ Security Context:
+ Allow Privilege Escalation: false
+ Capabilities:
+ Drop:
+ ALL
+ Run As Non Root: true
+ Run As User: 1000
+ Seccomp Profile:
+ Type: RuntimeDefault
+ Pod Placement Policy:
+ Name: default
+ Security Context:
+ Fs Group: 1000
+ Replicas: 1
+ Storage:
+ Access Modes:
+ ReadWriteOnce
+ Resources:
+ Requests:
+ Storage: 1Gi
+ Storage Type: Durable
+ Routers:
+ Pod Template:
+ Controller:
+ Metadata:
+ Spec:
+ Containers:
+ Name: druid
+ Resources:
+ Limits:
+ Memory: 1Gi
+ Requests:
+ Cpu: 500m
+ Memory: 1Gi
+ Security Context:
+ Allow Privilege Escalation: false
+ Capabilities:
+ Drop:
+ ALL
+ Run As Non Root: true
+ Run As User: 1000
+ Seccomp Profile:
+ Type: RuntimeDefault
+ Init Containers:
+ Name: init-druid
+ Resources:
+ Limits:
+ Memory: 512Mi
+ Requests:
+ Cpu: 200m
+ Memory: 512Mi
+ Security Context:
+ Allow Privilege Escalation: false
+ Capabilities:
+ Drop:
+ ALL
+ Run As Non Root: true
+ Run As User: 1000
+ Seccomp Profile:
+ Type: RuntimeDefault
+ Pod Placement Policy:
+ Name: default
+ Security Context:
+ Fs Group: 1000
+ Replicas: 1
+ Version: 28.0.1
+ Zookeeper Ref:
+ Name: druid-cluster-zk
+ Namespace: demo
+ Version: 3.7.2
+Status:
+ Conditions:
+ Last Transition Time: 2024-10-21T06:01:32Z
+ Message: The KubeDB operator has started the provisioning of Druid: demo/druid-cluster
+ Observed Generation: 1
+ Reason: DatabaseProvisioningStartedSuccessfully
+ Status: True
+ Type: ProvisioningStarted
+ Phase: Provisioning
+Events:
+
+$ kubectl get petset -n demo
+NAME AGE
+druid-cluster-brokers 13m
+druid-cluster-coordinators 13m
+druid-cluster-historicals 13m
+druid-cluster-middlemanagers 13m
+druid-cluster-mysql-metadata 14m
+druid-cluster-routers 13m
+druid-cluster-zk 14m
+
+$ kubectl get pvc -n demo -l app.kubernetes.io/name=druids.kubedb.com
+NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
+druid-cluster-base-task-dir-druid-cluster-middlemanagers-0 Bound pvc-d288b621-d281-4004-995d-7a25bb4149de 1Gi RWO standard 14m
+druid-cluster-segment-cache-druid-cluster-historicals-0 Bound pvc-ccca6be2-658a-46af-a270-de1c6a041af7 1Gi RWO standard 14m
+
+
+$ kubectl get pv -n demo
+NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
+pvc-4f8538f6-a6ce-4233-b533-8566852f5b98 1Gi RWO Delete Bound demo/druid-cluster-base-task-dir-druid-cluster-middlemanagers-0 standard 4m39s
+pvc-8823d3ad-d614-4172-89ac-c2284a17f502 1Gi RWO Delete Bound demo/druid-cluster-segment-cache-druid-cluster-historicals-0 standard 4m35s
+
+$ kubectl get service -n demo
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+druid-cluster-brokers ClusterIP 10.96.186.168 8082/TCP 17m
+druid-cluster-coordinators ClusterIP 10.96.122.235 8081/TCP 17m
+druid-cluster-mysql-metadata ClusterIP 10.96.109.2 3306/TCP 18m
+druid-cluster-mysql-metadata-pods ClusterIP None 3306/TCP 18m
+druid-cluster-mysql-metadata-standby ClusterIP 10.96.97.152 3306/TCP 18m
+druid-cluster-pods ClusterIP None 8081/TCP,8090/TCP,8083/TCP,8091/TCP,8082/TCP,8888/TCP 17m
+druid-cluster-routers ClusterIP 10.96.138.237 8888/TCP 17m
+druid-cluster-zk ClusterIP 10.96.148.251 2181/TCP 18m
+druid-cluster-zk-admin-server ClusterIP 10.96.2.106 8080/TCP 18m
+druid-cluster-zk-pods ClusterIP None 2181/TCP,2888/TCP,3888/TCP 18m
+```
+
+KubeDB operator sets the `status.phase` to `Ready` once the database is successfully created. Run the following command to see the modified `Druid` object:
+
+```bash
+$ kubectl describe druid -n demo druid-cluster
+Name: druid-cluster
+Namespace: demo
+Labels:
+Annotations:
+API Version: kubedb.com/v1alpha2
+Kind: Druid
+Metadata:
+ Creation Timestamp: 2024-10-21T06:01:32Z
+ Finalizers:
+ kubedb.com/druid
+ Generation: 1
+ Managed Fields:
+ API Version: kubedb.com/v1alpha2
+ Fields Type: FieldsV1
+ fieldsV1:
+ f:metadata:
+ f:finalizers:
+ .:
+ v:"kubedb.com/druid":
+ Manager: druid-operator
+ Operation: Update
+ Time: 2024-10-21T06:01:32Z
+ API Version: kubedb.com/v1alpha2
+ Fields Type: FieldsV1
+ fieldsV1:
+ f:metadata:
+ f:annotations:
+ .:
+ f:kubectl.kubernetes.io/last-applied-configuration:
+ f:spec:
+ .:
+ f:deepStorage:
+ .:
+ f:configSecret:
+ f:type:
+ f:deletionPolicy:
+ f:healthChecker:
+ .:
+ f:failureThreshold:
+ f:periodSeconds:
+ f:timeoutSeconds:
+ f:topology:
+ .:
+ f:routers:
+ .:
+ f:replicas:
+ f:version:
+ Manager: kubectl-client-side-apply
+ Operation: Update
+ Time: 2024-10-21T06:01:32Z
+ API Version: kubedb.com/v1alpha2
+ Fields Type: FieldsV1
+ fieldsV1:
+ f:status:
+ .:
+ f:conditions:
+ f:phase:
+ Manager: druid-operator
+ Operation: Update
+ Subresource: status
+ Time: 2024-10-21T06:04:29Z
+ Resource Version: 52093
+ UID: a2e12db2-6694-419f-ad07-2c906df5b611
+Spec:
+ Auth Secret:
+ Name: druid-cluster-admin-cred
+ Deep Storage:
+ Config Secret:
+ Name: deep-storage-config
+ Type: s3
+ Deletion Policy: Delete
+ Health Checker:
+ Failure Threshold: 3
+ Period Seconds: 30
+ Timeout Seconds: 10
+ Metadata Storage:
+ Create Tables: true
+ Linked DB: druid
+ Name: druid-cluster-mysql-metadata
+ Namespace: demo
+ Type: MySQL
+ Version: 8.0.35
+ Topology:
+ Brokers:
+ Pod Template:
+ Controller:
+ Metadata:
+ Spec:
+ Containers:
+ Name: druid
+ Resources:
+ Limits:
+ Memory: 1Gi
+ Requests:
+ Cpu: 500m
+ Memory: 1Gi
+ Security Context:
+ Allow Privilege Escalation: false
+ Capabilities:
+ Drop:
+ ALL
+ Run As Non Root: true
+ Run As User: 1000
+ Seccomp Profile:
+ Type: RuntimeDefault
+ Init Containers:
+ Name: init-druid
+ Resources:
+ Limits:
+ Memory: 512Mi
+ Requests:
+ Cpu: 200m
+ Memory: 512Mi
+ Security Context:
+ Allow Privilege Escalation: false
+ Capabilities:
+ Drop:
+ ALL
+ Run As Non Root: true
+ Run As User: 1000
+ Seccomp Profile:
+ Type: RuntimeDefault
+ Pod Placement Policy:
+ Name: default
+ Security Context:
+ Fs Group: 1000
+ Replicas: 1
+ Coordinators:
+ Pod Template:
+ Controller:
+ Metadata:
+ Spec:
+ Containers:
+ Name: druid
+ Resources:
+ Limits:
+ Memory: 1Gi
+ Requests:
+ Cpu: 500m
+ Memory: 1Gi
+ Security Context:
+ Allow Privilege Escalation: false
+ Capabilities:
+ Drop:
+ ALL
+ Run As Non Root: true
+ Run As User: 1000
+ Seccomp Profile:
+ Type: RuntimeDefault
+ Init Containers:
+ Name: init-druid
+ Resources:
+ Limits:
+ Memory: 512Mi
+ Requests:
+ Cpu: 200m
+ Memory: 512Mi
+ Security Context:
+ Allow Privilege Escalation: false
+ Capabilities:
+ Drop:
+ ALL
+ Run As Non Root: true
+ Run As User: 1000
+ Seccomp Profile:
+ Type: RuntimeDefault
+ Pod Placement Policy:
+ Name: default
+ Security Context:
+ Fs Group: 1000
+ Replicas: 1
+ Historicals:
+ Pod Template:
+ Controller:
+ Metadata:
+ Spec:
+ Containers:
+ Name: druid
+ Resources:
+ Limits:
+ Memory: 1Gi
+ Requests:
+ Cpu: 500m
+ Memory: 1Gi
+ Security Context:
+ Allow Privilege Escalation: false
+ Capabilities:
+ Drop:
+ ALL
+ Run As Non Root: true
+ Run As User: 1000
+ Seccomp Profile:
+ Type: RuntimeDefault
+ Init Containers:
+ Name: init-druid
+ Resources:
+ Limits:
+ Memory: 512Mi
+ Requests:
+ Cpu: 200m
+ Memory: 512Mi
+ Security Context:
+ Allow Privilege Escalation: false
+ Capabilities:
+ Drop:
+ ALL
+ Run As Non Root: true
+ Run As User: 1000
+ Seccomp Profile:
+ Type: RuntimeDefault
+ Pod Placement Policy:
+ Name: default
+ Security Context:
+ Fs Group: 1000
+ Replicas: 1
+ Storage:
+ Access Modes:
+ ReadWriteOnce
+ Resources:
+ Requests:
+ Storage: 1Gi
+ Storage Type: Durable
+ Middle Managers:
+ Pod Template:
+ Controller:
+ Metadata:
+ Spec:
+ Containers:
+ Name: druid
+ Resources:
+ Limits:
+ Memory: 2560Mi
+ Requests:
+ Cpu: 500m
+ Memory: 2560Mi
+ Security Context:
+ Allow Privilege Escalation: false
+ Capabilities:
+ Drop:
+ ALL
+ Run As Non Root: true
+ Run As User: 1000
+ Seccomp Profile:
+ Type: RuntimeDefault
+ Init Containers:
+ Name: init-druid
+ Resources:
+ Limits:
+ Memory: 512Mi
+ Requests:
+ Cpu: 200m
+ Memory: 512Mi
+ Security Context:
+ Allow Privilege Escalation: false
+ Capabilities:
+ Drop:
+ ALL
+ Run As Non Root: true
+ Run As User: 1000
+ Seccomp Profile:
+ Type: RuntimeDefault
+ Pod Placement Policy:
+ Name: default
+ Security Context:
+ Fs Group: 1000
+ Replicas: 1
+ Storage:
+ Access Modes:
+ ReadWriteOnce
+ Resources:
+ Requests:
+ Storage: 1Gi
+ Storage Type: Durable
+ Routers:
+ Pod Template:
+ Controller:
+ Metadata:
+ Spec:
+ Containers:
+ Name: druid
+ Resources:
+ Limits:
+ Memory: 1Gi
+ Requests:
+ Cpu: 500m
+ Memory: 1Gi
+ Security Context:
+ Allow Privilege Escalation: false
+ Capabilities:
+ Drop:
+ ALL
+ Run As Non Root: true
+ Run As User: 1000
+ Seccomp Profile:
+ Type: RuntimeDefault
+ Init Containers:
+ Name: init-druid
+ Resources:
+ Limits:
+ Memory: 512Mi
+ Requests:
+ Cpu: 200m
+ Memory: 512Mi
+ Security Context:
+ Allow Privilege Escalation: false
+ Capabilities:
+ Drop:
+ ALL
+ Run As Non Root: true
+ Run As User: 1000
+ Seccomp Profile:
+ Type: RuntimeDefault
+ Pod Placement Policy:
+ Name: default
+ Security Context:
+ Fs Group: 1000
+ Replicas: 1
+ Version: 28.0.1
+ Zookeeper Ref:
+ Name: druid-cluster-zk
+ Namespace: demo
+ Version: 3.7.2
+Status:
+ Conditions:
+ Last Transition Time: 2024-10-21T06:01:32Z
+ Message: The KubeDB operator has started the provisioning of Druid: demo/druid-cluster
+ Observed Generation: 1
+ Reason: DatabaseProvisioningStartedSuccessfully
+ Status: True
+ Type: ProvisioningStarted
+ Last Transition Time: 2024-10-21T06:03:03Z
+ Message: Database dependency is ready
+ Observed Generation: 1
+ Reason: DatabaseDependencyReady
+ Status: True
+ Type: DatabaseDependencyReady
+ Last Transition Time: 2024-10-21T06:03:34Z
+ Message: All desired replicas are ready.
+ Observed Generation: 1
+ Reason: AllReplicasReady
+ Status: True
+ Type: ReplicaReady
+ Last Transition Time: 2024-10-21T06:04:04Z
+ Message: The Druid: demo/druid-cluster is accepting client requests and nodes formed a cluster
+ Observed Generation: 1
+ Reason: DatabaseAcceptingConnectionRequest
+ Status: True
+ Type: AcceptingConnection
+ Last Transition Time: 2024-10-21T06:04:29Z
+ Message: The Druid: demo/druid-cluster is ready.
+ Observed Generation: 1
+ Reason: ReadinessCheckSucceeded
+ Status: True
+ Type: Ready
+ Last Transition Time: 2024-10-21T06:04:29Z
+ Message: The Druid: demo/druid-cluster is successfully provisioned.
+ Observed Generation: 1
+ Reason: DatabaseSuccessfullyProvisioned
+ Status: True
+ Type: Provisioned
+ Phase: Ready
+Events:
+```
+
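+Before connecting, you can also list the pods to confirm that every Druid node, along with the MySQL metadata storage and ZooKeeper pods provisioned by KubeDB, is running:
+
+```bash
+$ kubectl get pods -n demo
+```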
+
+## Connect with Druid Database
+We will use [port forwarding](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) to connect with the routers of the Druid database. Then we will use `curl` to send `HTTP` requests to check the cluster health and verify that our Druid database is working well. It is also possible to use the `External-IP` to access Druid nodes if you set the `service` type of that node to `LoadBalancer`.
+
+### Check the Service Health
+
+Let's port-forward the port `8888` to local machine:
+
+```bash
+$ kubectl port-forward -n demo svc/druid-cluster-routers 8888
+Forwarding from 127.0.0.1:8888 -> 8888
+Forwarding from [::1]:8888 -> 8888
+```
+
+Now, the Druid cluster is accessible at `localhost:8888`. Let's check the [Service Health](https://druid.apache.org/docs/latest/api-reference/service-status-api/#get-service-health) of Routers of the Druid database.
+
+```bash
+$ curl "http://localhost:8888/status/health"
+true
+```
+From the retrieved health information above, we can see that our Druid cluster’s status is `true`, indicating that the service can receive API calls and is healthy. In the same way, it is possible to check the health of other Druid nodes by port-forwarding the appropriate services.
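+
+For example, to check the Coordinators (served on port `8081` by the `druid-cluster-coordinators` service listed earlier), you could run:
+
+```bash
+$ kubectl port-forward -n demo svc/druid-cluster-coordinators 8081
+# from another terminal
+$ curl "http://localhost:8081/status/health"
+```
+
+This should also return `true` if the node is healthy.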
+
+### Access the web console
+
+We can also access the [web console](https://druid.apache.org/docs/latest/operations/web-console) of the Druid database from any browser by port-forwarding the routers in the same way as shown in the previous step, or directly using the `External-IP` if the router service type is `LoadBalancer`.
+
+Now hit `http://localhost:8888` from any browser, and you will be prompted to provide the credentials of the Druid database. By following the steps discussed below, you can get the credentials generated by the KubeDB operator for your Druid database.
+
+**Connection information:**
+
+- Username:
+
+ ```bash
+ $ kubectl get secret -n demo druid-cluster-admin-cred -o jsonpath='{.data.username}' | base64 -d
+ admin
+ ```
+
+- Password:
+
+ ```bash
+ $ kubectl get secret -n demo druid-cluster-admin-cred -o jsonpath='{.data.password}' | base64 -d
+ LzJtVRX5E8MorFaf
+ ```
+
+After providing the credentials correctly, you should be able to access the web console as shown below.
+
+
+
+
+
+You can use this web console for loading data, managing datasources and tasks, and viewing server status and segment information. You can also run SQL and native Druid queries in the console.
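+
+Beyond the console, you can also sanity-check SQL access from the command line by posting a query to Druid's SQL endpoint through the port-forwarded Router, using the credentials retrieved above. This is just a sketch; it assumes the Router is still port-forwarded on `8888`:
+
+```bash
+$ PASSWORD=$(kubectl get secret -n demo druid-cluster-admin-cred -o jsonpath='{.data.password}' | base64 -d)
+$ curl -u "admin:$PASSWORD" -X POST "http://localhost:8888/druid/v2/sql" \
+    -H 'Content-Type: application/json' \
+    -d '{"query": "SELECT 1"}'
+```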
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+$ kubectl patch -n demo druid druid-cluster -p '{"spec":{"deletionPolicy":"WipeOut"}}' --type="merge"
+druid.kubedb.com/druid-cluster patched
+
+$ kubectl delete dr druid-cluster -n demo
+druid.kubedb.com "druid-cluster" deleted
+
+$ kubectl delete namespace demo
+namespace "demo" deleted
+```
+
+## Next Steps
+
+- Detail concepts of [Druid object](/docs/guides/druid/concepts/druid.md).
+- Detail concepts of [DruidVersion object](/docs/guides/druid/concepts/druidversion.md).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).
diff --git a/docs/guides/druid/clustering/guide/yamls/deep-storage-config.yaml b/docs/guides/druid/clustering/guide/yamls/deep-storage-config.yaml
new file mode 100644
index 0000000000..3612595828
--- /dev/null
+++ b/docs/guides/druid/clustering/guide/yamls/deep-storage-config.yaml
@@ -0,0 +1,16 @@
+apiVersion: v1
+kind: Secret
+metadata:
+ name: deep-storage-config
+ namespace: demo
+stringData:
+ druid.storage.type: "s3"
+ druid.storage.bucket: "druid"
+ druid.storage.baseKey: "druid/segments"
+ druid.s3.accessKey: "minio"
+ druid.s3.secretKey: "minio123"
+ druid.s3.protocol: "http"
+ druid.s3.enablePathStyleAccess: "true"
+ druid.s3.endpoint.signingRegion: "us-east-1"
+ druid.s3.endpoint.url: "http://myminio-hl.demo.svc.cluster.local:9000/"
+
diff --git a/docs/guides/druid/clustering/guide/yamls/druid-cluster.yaml b/docs/guides/druid/clustering/guide/yamls/druid-cluster.yaml
new file mode 100644
index 0000000000..7a89d0dc91
--- /dev/null
+++ b/docs/guides/druid/clustering/guide/yamls/druid-cluster.yaml
@@ -0,0 +1,15 @@
+apiVersion: kubedb.com/v1alpha2
+kind: Druid
+metadata:
+ name: druid-cluster
+ namespace: demo
+spec:
+ version: 28.0.1
+ deepStorage:
+ type: s3
+ configSecret:
+ name: deep-storage-config
+ topology:
+ routers:
+ replicas: 1
+ deletionPolicy: Delete
diff --git a/docs/guides/druid/clustering/overview/images/druid-architecture.svg b/docs/guides/druid/clustering/overview/images/druid-architecture.svg
new file mode 100644
index 0000000000..3f86a412cf
--- /dev/null
+++ b/docs/guides/druid/clustering/overview/images/druid-architecture.svg
@@ -0,0 +1,19 @@
+
+
diff --git a/docs/guides/druid/clustering/overview/index.md b/docs/guides/druid/clustering/overview/index.md
new file mode 100644
index 0000000000..24893bcce0
--- /dev/null
+++ b/docs/guides/druid/clustering/overview/index.md
@@ -0,0 +1,115 @@
+---
+title: Druid Topology Cluster Overview
+menu:
+ docs_{{ .version }}:
+ identifier: guides-druid-clustering-overview
+ name: Druid Clustering Overview
+ parent: guides-druid-clustering
+ weight: 15
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Druid Architecture
+
+Druid has a distributed architecture that is designed to be cloud-friendly and easy to operate. You can configure and scale services independently for maximum flexibility over cluster operations. This design includes enhanced fault tolerance: an outage of one component does not immediately affect other components.
+
+The following diagram shows the services that make up the Druid architecture, their typical arrangement across servers, and how queries and data flow through this architecture.
+
+![Druid Architecture](/docs/guides/druid/clustering/overview/images/druid-architecture.svg)
+
+Image ref: [Druid Official Documentation](https://druid.apache.org/assets/images/druid-architecture-7db1cd79d2d70b2e5ccc73b6bebfcaa4.svg)
+
+
+## Druid services
+
+Druid has several types of services:
+
+- **Coordinator** manages data availability on the cluster.
+- **Overlord** controls the assignment of data ingestion workloads.
+- **Broker** handles queries from external clients.
+- **Router** routes requests to Brokers, Coordinators, and Overlords.
+- **Historical** stores queryable data.
+- **MiddleManager** and Peon ingest data.
+- **Indexer** serves an alternative to the MiddleManager + Peon task execution system.
+
+## Druid servers
+
+You can deploy Druid services according to your preferences. For ease of deployment, we recommend organizing them into three server types: **Master**, **Query**, and **Data**.
+
+### Master server
+A Master server manages data ingestion and availability. It is responsible for starting new ingestion jobs and coordinating availability of data on the Data server.
+
+Master servers divide operations between **Coordinator** and **Overlord** services.
+
+#### Coordinator service
+[Coordinator](https://druid.apache.org/docs/latest/design/coordinator/) services watch over the Historical services on the Data servers. They are responsible for assigning segments to specific servers, and for ensuring segments are well-balanced across Historicals.
+
+#### Overlord service
+[Overlord](https://druid.apache.org/docs/latest/design/overlord/) services watch over the MiddleManager services on the Data servers and are the controllers of data ingestion into Druid. They are responsible for assigning ingestion tasks to MiddleManagers and for coordinating segment publishing.
+
+### Query server
+A Query server provides the endpoints that users and client applications interact with, routing queries to Data servers or other Query servers (and optionally proxied Master server requests).
+
+Query servers divide operations between Broker and Router services.
+
+#### Broker service
+[Broker](https://druid.apache.org/docs/latest/design/broker/) services receive queries from external clients and forward those queries to Data servers. When Brokers receive results from those subqueries, they merge those results and return them to the caller. Typically, you query Brokers rather than querying Historical or MiddleManager services on Data servers directly.
+
+#### Router service
+[Router](https://druid.apache.org/docs/latest/design/router/) services provide a unified API gateway in front of Brokers, Overlords, and Coordinators.
+
+The Router service also runs the web console, a UI for loading data, managing datasources and tasks, and viewing server status and segment information.
+
+### Data server
+A Data server executes ingestion jobs and stores queryable data.
+
+Data servers divide operations between Historical and MiddleManager services.
+
+#### Historical service
+[Historical](https://druid.apache.org/docs/latest/design/historical/) services handle storage and querying on historical data, including any streaming data that has been in the system long enough to be committed. Historical services download segments from deep storage and respond to queries about these segments. They don't accept writes.
+
+#### MiddleManager service
+[MiddleManager](https://druid.apache.org/docs/latest/design/middlemanager) services handle ingestion of new data into the cluster. They are responsible for reading from external data sources and publishing new Druid segments.
+
+## External dependencies
+In addition to its built-in service types, Druid also has three external dependencies. These are intended to be able to leverage existing infrastructure, where present.
+
+### Deep storage
+Druid uses deep storage to store any data that has been ingested into the system. Deep storage is shared file storage accessible by every Druid server. In a clustered deployment, this is typically a distributed object store like S3 or HDFS, or a network mounted filesystem. In a single-server deployment, this is typically local disk.
+
+Druid uses deep storage for the following purposes:
+
+- To store all the data you ingest. Segments that get loaded onto Historical services for low latency queries are also kept in deep storage for backup purposes. Additionally, segments that are only in deep storage can be used for queries from deep storage.
+- As a way to transfer data in the background between Druid services. Druid stores data in files called segments.
+
+Historical services cache data segments on local disk and serve queries from that cache as well as from an in-memory cache. Segments on disk for Historical services provide the low latency querying performance Druid is known for.
+
+You can also query directly from deep storage. When you query segments that exist only in deep storage, you trade some performance for the ability to query more of your data without necessarily having to scale your Historical services.
+
+When determining sizing for your storage, keep the following in mind:
+
+- Deep storage needs to be able to hold all the data that you ingest into Druid.
+- On disk storage for Historical services needs to be able to accommodate the data you want to load onto them to run queries. The data on Historical services should be data you access frequently and need to run low latency queries for.
+
+Deep storage is an important part of Druid's elastic, fault-tolerant design. Druid bootstraps from deep storage even if every single data server is lost and re-provisioned.
+
+For more details, please see the [Deep storage](https://druid.apache.org/docs/latest/design/deep-storage/) page.
+
+### Metadata storage
+The metadata storage holds various shared system metadata such as segment usage information and task information. In a clustered deployment, this is typically a traditional RDBMS like PostgreSQL or MySQL. In a single-server deployment, it is typically a locally-stored Apache Derby database.
+
+For more details, please see the [Metadata storage](https://druid.apache.org/docs/latest/design/metadata-storage/) page.
+
+### ZooKeeper
+Used for internal service discovery, coordination, and leader election.
+
+For more details, please see the [ZooKeeper](https://druid.apache.org/docs/latest/design/zookeeper/) page.
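+
+When deploying Druid with KubeDB, these three dependencies are wired in through the `Druid` spec. The snippet below is only a sketch of the relevant fields: the referenced secret and objects (`deep-storage-config`, `pg-demo`, `zk-demo`) are illustrative names, and `externallyManaged: true` indicates that the dependency is provisioned outside KubeDB.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Druid
+metadata:
+  name: druid-cluster
+  namespace: demo
+spec:
+  version: 28.0.1
+  deepStorage:
+    type: s3
+    configSecret:
+      name: deep-storage-config
+  metadataStorage:
+    type: PostgreSQL
+    name: pg-demo
+    namespace: demo
+    externallyManaged: true
+  zookeeperRef:
+    name: zk-demo
+    namespace: demo
+    externallyManaged: true
+  topology:
+    routers:
+      replicas: 1
+  deletionPolicy: Delete
+```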
+
+
+## Next Steps
+
+- [Deploy Druid Cluster](/docs/guides/druid/clustering/guide/index.md) using KubeDB.
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md)
diff --git a/docs/guides/druid/concepts/_index.md b/docs/guides/druid/concepts/_index.md
index 67c3be7748..a5c61e7e6b 100755
--- a/docs/guides/druid/concepts/_index.md
+++ b/docs/guides/druid/concepts/_index.md
@@ -7,4 +7,4 @@ menu:
parent: guides-druid
weight: 20
menu_name: docs_{{ .version }}
----
+---
\ No newline at end of file
diff --git a/docs/guides/druid/concepts/appbinding.md b/docs/guides/druid/concepts/appbinding.md
index 60f9b5bb4d..02c4750799 100644
--- a/docs/guides/druid/concepts/appbinding.md
+++ b/docs/guides/druid/concepts/appbinding.md
@@ -22,218 +22,127 @@ If you deploy a database using [KubeDB](https://kubedb.com/docs/latest/welcome/)
KubeDB uses [Stash](https://appscode.com/products/stash/) to perform backup/recovery of databases. Stash needs to know how to connect with a target database and the credentials necessary to access it. This is done via an `AppBinding`.
-[//]: # (## AppBinding CRD Specification)
+## AppBinding CRD Specification
+
+Like any official Kubernetes resource, an `AppBinding` has `TypeMeta`, `ObjectMeta` and `Spec` sections. However, unlike other Kubernetes resources, it does not have a `Status` section.
+
+An `AppBinding` object created by `KubeDB` for Druid database is shown below,
-[//]: # ()
-[//]: # (Like any official Kubernetes resource, an `AppBinding` has `TypeMeta`, `ObjectMeta` and `Spec` sections. However, unlike other Kubernetes resources, it does not have a `Status` section.)
+```yaml
+apiVersion: appcatalog.appscode.com/v1alpha1
+kind: AppBinding
+metadata:
+ annotations:
+ kubectl.kubernetes.io/last-applied-configuration: |
+ {"apiVersion":"kubedb.com/v1alpha2","kind":"Druid","metadata":{"annotations":{},"name":"druid-quickstart","namespace":"demo"},"spec":{"deepStorage":{"configSecret":{"name":"deep-storage-config"},"type":"s3"},"topology":{"routers":{"replicas":1}},"version":"28.0.1"}}
+ creationTimestamp: "2024-10-16T13:28:40Z"
+ generation: 1
+ labels:
+ app.kubernetes.io/component: database
+ app.kubernetes.io/instance: druid-quickstart
+ app.kubernetes.io/managed-by: kubedb.com
+ app.kubernetes.io/name: druids.kubedb.com
+ name: druid-quickstart
+ namespace: demo
+ ownerReferences:
+ - apiVersion: kubedb.com/v1alpha2
+ blockOwnerDeletion: true
+ controller: true
+ kind: Druid
+ name: druid-quickstart
+ uid: 06dc7c5f-65ad-4310-a203-b18c0d33d662
+ resourceVersion: "45154"
+ uid: 58861709-99f9-4c78-8cf9-b5dc6534102e
+spec:
+ appRef:
+ apiGroup: kubedb.com
+ kind: Druid
+ name: druid-quickstart
+ namespace: demo
+ clientConfig:
+ caBundle: dGhpcyBpcyBub3QgYSBjZXJ0
+ service:
+ name: druid-quickstart-pods
+ port: 8888
+ scheme: http
+ url: http://druid-quickstart-coordinators-0.druid-quickstart-pods.demo.svc.cluster.local:8081,http://druid-quickstart-overlords-0.druid-quickstart-pods.demo.svc.cluster.local:8090,http://druid-quickstart-middlemanagers-0.druid-quickstart-pods.demo.svc.cluster.local:8091,http://druid-quickstart-historicals-0.druid-quickstart-pods.demo.svc.cluster.local:8083,http://druid-quickstart-brokers-0.druid-quickstart-pods.demo.svc.cluster.local:8082,http://druid-quickstart-routers-0.druid-quickstart-pods.demo.svc.cluster.local:8888
+ secret:
+ name: druid-quickstart-admin-cred
+ tlsSecret:
+ name: druid-client-cert
+ type: kubedb.com/druid
+ version: 28.0.1
+```
+Here, we are going to describe the sections of an `AppBinding` crd.
-[//]: # ()
-[//]: # (An `AppBinding` object created by `KubeDB` for PostgreSQL database is shown below,)
+### AppBinding `Spec`
-[//]: # ()
-[//]: # (```yaml)
+An `AppBinding` object has the following fields in the `spec` section:
-[//]: # (apiVersion: appcatalog.appscode.com/v1alpha1)
+#### spec.type
-[//]: # (kind: AppBinding)
+`spec.type` is an optional field that indicates the type of the app that this `AppBinding` is pointing to.
-[//]: # (metadata:)
+
+
-[//]: # ( app.kubernetes.io/instance: quick-postgres)
+#### spec.secret
-[//]: # ( app.kubernetes.io/managed-by: kubedb.com)
+`spec.secret` specifies the name of the secret which contains the credentials that are required to access the database. This secret must be in the same namespace as the `AppBinding`.
-[//]: # ( app.kubernetes.io/name: postgres)
+This secret must contain the following keys for Druid:
-[//]: # ( app.kubernetes.io/version: "10.2"-v2)
+| Key | Usage |
+| ---------- |------------------------------------------------|
+| `username` | Username of the target Druid instance. |
+| `password` | Password for the user specified by `username`. |
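+
+A minimal sketch of such a secret is shown below; the values are placeholders, and the KubeDB-generated secret (e.g. `druid-quickstart-admin-cred`) already follows this layout.
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  name: druid-quickstart-admin-cred
+  namespace: demo
+type: Opaque
+stringData:
+  username: admin
+  password: "<password>"
+```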
-[//]: # ( app.kubernetes.io/name: postgreses.kubedb.com)
-[//]: # ( app.kubernetes.io/instance: quick-postgres)
+#### spec.appRef
+`spec.appRef` refers to the underlying application. It has 4 fields named `apiGroup`, `kind`, `name` & `namespace`.
-[//]: # (spec:)
+#### spec.clientConfig
-[//]: # ( type: kubedb.com/postgres)
+`spec.clientConfig` defines how to communicate with the target database. You can use either a URL or a Kubernetes service to connect with the database. You don't have to specify both of them.
-[//]: # ( secret:)
+You can configure following fields in `spec.clientConfig` section:
-[//]: # ( name: quick-postgres-auth)
+- **spec.clientConfig.url**
-[//]: # ( clientConfig:)
+ `spec.clientConfig.url` gives the location of the database, in standard URL form (i.e. `[scheme://]host:port/[path]`). This is particularly useful when the target database is running outside the Kubernetes cluster. If your database is running inside the cluster, use `spec.clientConfig.service` section instead.
-[//]: # ( service:)
+> Note that, attempting to use a user or basic auth (e.g. `user:password@host:port`) is not allowed. Stash will insert them automatically from the respective secret. Fragments ("#...") and query parameters ("?...") are not allowed either.
-[//]: # ( name: quick-postgres)
+- **spec.clientConfig.service**
-[//]: # ( path: /)
+ If you are running the database inside the Kubernetes cluster, you can use Kubernetes service to connect with the database. You have to specify the following fields in `spec.clientConfig.service` section if you manually create an `AppBinding` object.
-[//]: # ( port: 5432)
+ - **name :** `name` indicates the name of the service that connects with the target database.
+ - **scheme :** `scheme` specifies the scheme (i.e. http, https) to use to connect with the database.
+ - **port :** `port` specifies the port where the target database is running.
-[//]: # ( query: sslmode=disable)
+- **spec.clientConfig.insecureSkipTLSVerify**
-[//]: # ( scheme: postgresql)
+ `spec.clientConfig.insecureSkipTLSVerify` is used to disable TLS certificate verification while connecting with the database. We strongly discourage disabling TLS verification during backup. You should provide the respective CA bundle through `spec.clientConfig.caBundle` field instead.
-[//]: # ( secretTransforms:)
+- **spec.clientConfig.caBundle**
-[//]: # ( - renameKey:)
+ `spec.clientConfig.caBundle` is a PEM encoded CA bundle which will be used to validate the serving certificate of the database.
-[//]: # ( from: POSTGRES_USER)
+## Next Steps
-[//]: # ( to: username)
-
-[//]: # ( - renameKey:)
-
-[//]: # ( from: POSTGRES_PASSWORD)
-
-[//]: # ( to: password)
-
-[//]: # ( version: "10.2")
-
-[//]: # (```)
-
-[//]: # ()
-[//]: # (Here, we are going to describe the sections of an `AppBinding` crd.)
-
-[//]: # ()
-[//]: # (### AppBinding `Spec`)
-
-[//]: # ()
-[//]: # (An `AppBinding` object has the following fields in the `spec` section:)
-
-[//]: # ()
-[//]: # (#### spec.type)
-
-[//]: # ()
-[//]: # (`spec.type` is an optional field that indicates the type of the app that this `AppBinding` is pointing to. Stash uses this field to resolve the values of `TARGET_APP_TYPE`, `TARGET_APP_GROUP` and `TARGET_APP_RESOURCE` variables of [BackupBlueprint](https://appscode.com/products/stash/latest/concepts/crds/backupblueprint/) object.)
-
-[//]: # ()
-[//]: # (This field follows the following format: `/`. The above AppBinding is pointing to a `postgres` resource under `kubedb.com` group.)
-
-[//]: # ()
-[//]: # (Here, the variables are parsed as follows:)
-
-[//]: # ()
-[//]: # (| Variable | Usage |)
-
-[//]: # (| --------------------- | --------------------------------------------------------------------------------------------------------------------------------- |)
-
-[//]: # (| `TARGET_APP_GROUP` | Represents the application group where the respective app belongs (i.e: `kubedb.com`). |)
-
-[//]: # (| `TARGET_APP_RESOURCE` | Represents the resource under that application group that this appbinding represents (i.e: `postgres`). |)
-
-[//]: # (| `TARGET_APP_TYPE` | Represents the complete type of the application. It's simply `TARGET_APP_GROUP/TARGET_APP_RESOURCE` (i.e: `kubedb.com/postgres`). |)
-
-[//]: # ()
-[//]: # (#### spec.secret)
-
-[//]: # ()
-[//]: # (`spec.secret` specifies the name of the secret which contains the credentials that are required to access the database. This secret must be in the same namespace as the `AppBinding`.)
-
-[//]: # ()
-[//]: # (This secret must contain the following keys:)
-
-[//]: # ()
-[//]: # (PostgreSQL :)
-
-[//]: # ()
-[//]: # (| Key | Usage |)
-
-[//]: # (| ------------------- | --------------------------------------------------- |)
-
-[//]: # (| `POSTGRES_USER` | Username of the target database. |)
-
-[//]: # (| `POSTGRES_PASSWORD` | Password for the user specified by `POSTGRES_USER`. |)
-
-[//]: # ()
-[//]: # (MySQL :)
-
-[//]: # ()
-[//]: # (| Key | Usage |)
-
-[//]: # (| ---------- | ---------------------------------------------- |)
-
-[//]: # (| `username` | Username of the target database. |)
-
-[//]: # (| `password` | Password for the user specified by `username`. |)
-
-[//]: # ()
-[//]: # (MongoDB :)
-
-[//]: # ()
-[//]: # (| Key | Usage |)
-
-[//]: # (| ---------- | ---------------------------------------------- |)
-
-[//]: # (| `username` | Username of the target database. |)
-
-[//]: # (| `password` | Password for the user specified by `username`. |)
-
-[//]: # ()
-[//]: # (Elasticsearch:)
-
-[//]: # ()
-[//]: # (| Key | Usage |)
-
-[//]: # (| ---------------- | ----------------------- |)
-
-[//]: # (| `ADMIN_USERNAME` | Admin username |)
-
-[//]: # (| `ADMIN_PASSWORD` | Password for admin user |)
-
-[//]: # ()
-[//]: # (#### spec.clientConfig)
-
-[//]: # ()
-[//]: # (`spec.clientConfig` defines how to communicate with the target database. You can use either an URL or a Kubernetes service to connect with the database. You don't have to specify both of them.)
-
-[//]: # ()
-[//]: # (You can configure following fields in `spec.clientConfig` section:)
-
-[//]: # ()
-[//]: # (- **spec.clientConfig.url**)
-
-[//]: # ()
-[//]: # ( `spec.clientConfig.url` gives the location of the database, in standard URL form (i.e. `[scheme://]host:port/[path]`). This is particularly useful when the target database is running outside of the Kubernetes cluster. If your database is running inside the cluster, use `spec.clientConfig.service` section instead.)
-
-[//]: # ()
-[//]: # ( > Note that, attempting to use a user or basic auth (e.g. `user:password@host:port`) is not allowed. Stash will insert them automatically from the respective secret. Fragments ("#...") and query parameters ("?...") are not allowed either.)
-
-[//]: # ()
-[//]: # (- **spec.clientConfig.service**)
-
-[//]: # ()
-[//]: # ( If you are running the database inside the Kubernetes cluster, you can use Kubernetes service to connect with the database. You have to specify the following fields in `spec.clientConfig.service` section if you manually create an `AppBinding` object.)
-
-[//]: # ()
-[//]: # ( - **name :** `name` indicates the name of the service that connects with the target database.)
-
-[//]: # ( - **scheme :** `scheme` specifies the scheme (i.e. http, https) to use to connect with the database.)
-
-[//]: # ( - **port :** `port` specifies the port where the target database is running.)
-
-[//]: # ()
-[//]: # (- **spec.clientConfig.insecureSkipTLSVerify**)
-
-[//]: # ()
-[//]: # ( `spec.clientConfig.insecureSkipTLSVerify` is used to disable TLS certificate verification while connecting with the database. We strongly discourage to disable TLS verification during backup. You should provide the respective CA bundle through `spec.clientConfig.caBundle` field instead.)
-
-[//]: # ()
-[//]: # (- **spec.clientConfig.caBundle**)
-
-[//]: # ()
-[//]: # ( `spec.clientConfig.caBundle` is a PEM encoded CA bundle which will be used to validate the serving certificate of the database.)
-
-[//]: # (## Next Steps)
-
-[//]: # ()
-[//]: # (- Learn how to use KubeDB to manage various databases [here](/docs/guides/README.md).)
-
-[//]: # (- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).)
+- Learn how to use KubeDB to manage various databases [here](/docs/guides/README.md).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).
diff --git a/docs/guides/druid/concepts/catalog.md b/docs/guides/druid/concepts/catalog.md
deleted file mode 100644
index 57ef475dc8..0000000000
--- a/docs/guides/druid/concepts/catalog.md
+++ /dev/null
@@ -1,111 +0,0 @@
----
-title: DruidVersion CRD
-menu:
- docs_{{ .version }}:
- identifier: guides-druid-concepts-catalog
- name: DruidVersion
- parent: guides-druid-concepts
- weight: 15
-menu_name: docs_{{ .version }}
-section_menu_id: guides
----
-
-> New to KubeDB? Please start [here](/docs/README.md).
-
-# DruidVersion
-
-[//]: # (## What is DruidVersion)
-
-[//]: # ()
-[//]: # (`DruidVersion` is a Kubernetes `Custom Resource Definitions` (CRD). It provides a declarative configuration to specify the docker images to be used for [PgBouncer](https://pgbouncer.github.io/) server deployed with KubeDB in a Kubernetes native way.)
-
-[//]: # ()
-[//]: # (When you install KubeDB, a `DruidVersion` custom resource will be created automatically for every supported PgBouncer release versions. You have to specify the name of `DruidVersion` crd in `spec.version` field of [PgBouncer](/docs/guides/pgbouncer/concepts/pgbouncer.md) crd. Then, KubeDB will use the docker images specified in the `DruidVersion` crd to create your expected PgBouncer instance.)
-
-[//]: # ()
-[//]: # (Using a separate crd for specifying respective docker image names allow us to modify the images independent of KubeDB operator. This will also allow the users to use a custom PgBouncer image for their server. For more details about how to use custom image with PgBouncer in KubeDB, please visit [here](/docs/guides/pgbouncer/custom-versions/setup.md).)
-
-[//]: # (## DruidVersion Specification)
-
-[//]: # ()
-[//]: # (As with all other Kubernetes objects, a DruidVersion needs `apiVersion`, `kind`, and `metadata` fields. It also needs a `.spec` section.)
-
-[//]: # ()
-[//]: # (```yaml)
-
-[//]: # (apiVersion: catalog.kubedb.com/v1alpha1)
-
-[//]: # (kind: DruidVersion)
-
-[//]: # (metadata:)
-
-[//]: # ( name: "1.17.0")
-
-[//]: # ( labels:)
-
-[//]: # ( app: kubedb)
-
-[//]: # (spec:)
-
-[//]: # ( deprecated: false)
-
-[//]: # ( version: "1.17.0")
-
-[//]: # ( pgBouncer:)
-
-[//]: # ( image: "${KUBEDB_CATALOG_REGISTRY}/pgbouncer:1.17.0")
-
-[//]: # ( exporter:)
-
-[//]: # ( image: "${KUBEDB_CATALOG_REGISTRY}/pgbouncer_exporter:v0.1.1")
-
-[//]: # (```)
-
-[//]: # ()
-[//]: # (### metadata.name)
-
-[//]: # ()
-[//]: # (`metadata.name` is a required field that specifies the name of the `DruidVersion` crd. You have to specify this name in `spec.version` field of [PgBouncer](/docs/guides/pgbouncer/concepts/pgbouncer.md) crd.)
-
-[//]: # ()
-[//]: # (We follow this convention for naming DruidVersion crd:)
-
-[//]: # ()
-[//]: # (- Name format: `{Original pgbouncer image version}-{modification tag}`)
-
-[//]: # ()
-[//]: # (We plan to modify original PgBouncer docker images to support additional features. Re-tagging the image with v1, v2 etc. modification tag helps separating newer iterations from the older ones. An image with higher modification tag will have more features than the images with lower modification tag. Hence, it is recommended to use DruidVersion crd with highest modification tag to take advantage of the latest features.)
-
-[//]: # ()
-[//]: # (### spec.version)
-
-[//]: # ()
-[//]: # (`spec.version` is a required field that specifies the original version of PgBouncer that has been used to build the docker image specified in `spec.server.image` field.)
-
-[//]: # ()
-[//]: # (### spec.deprecated)
-
-[//]: # ()
-[//]: # (`spec.deprecated` is an optional field that specifies whether the docker images specified here is supported by the current KubeDB operator.)
-
-[//]: # ()
-[//]: # (The default value of this field is `false`. If `spec.deprecated` is set `true`, KubeDB operator will not create the server and other respective resources for this version.)
-
-[//]: # ()
-[//]: # (### spec.pgBouncer.image)
-
-[//]: # ()
-[//]: # (`spec.pgBouncer.image` is a required field that specifies the docker image which will be used to create Petset by KubeDB operator to create expected PgBouncer server.)
-
-[//]: # ()
-[//]: # (### spec.exporter.image)
-
-[//]: # ()
-[//]: # (`spec.exporter.image` is a required field that specifies the image which will be used to export Prometheus metrics.)
-
-[//]: # (## Next Steps)
-
-[//]: # ()
-[//]: # (- Learn about PgBouncer crd [here](/docs/guides/pgbouncer/concepts/catalog.md).)
-
-[//]: # (- Deploy your first PgBouncer server with KubeDB by following the guide [here](/docs/guides/pgbouncer/quickstart/quickstart.md).)
\ No newline at end of file
diff --git a/docs/guides/druid/concepts/druid.md b/docs/guides/druid/concepts/druid.md
index 41824e7cfe..5d3ad66d29 100644
--- a/docs/guides/druid/concepts/druid.md
+++ b/docs/guides/druid/concepts/druid.md
@@ -12,357 +12,518 @@ section_menu_id: guides
> New to KubeDB? Please start [here](/docs/README.md).
-
# Druid
-[//]: # ()
-[//]: # (## What is PgBouncer)
-
-[//]: # ()
-[//]: # (`PgBouncer` is a Kubernetes `Custom Resource Definitions` (CRD). It provides declarative configuration for [PgBouncer](https://www.pgbouncer.github.io/) in a Kubernetes native way. You only need to describe the desired configurations in a `PgBouncer` object, and the KubeDB operator will create Kubernetes resources in the desired state for you.)
-
-[//]: # ()
-[//]: # (## PgBouncer Spec)
-
-[//]: # ()
-[//]: # (Like any official Kubernetes resource, a `PgBouncer` object has `TypeMeta`, `ObjectMeta`, `Spec` and `Status` sections.)
-
-[//]: # ()
-[//]: # (Below is an example PgBouncer object.)
-
-[//]: # ()
-[//]: # (```yaml)
-
-[//]: # (apiVersion: kubedb.com/v1alpha2)
-
-[//]: # (kind: PgBouncer)
-
-[//]: # (metadata:)
-
-[//]: # ( name: pgbouncer-server)
-
-[//]: # ( namespace: demo)
-
-[//]: # (spec:)
-
-[//]: # ( version: "1.18.0")
-
-[//]: # ( replicas: 2)
-
-[//]: # ( databases:)
-
-[//]: # ( - alias: "postgres")
-
-[//]: # ( databaseName: "postgres")
-
-[//]: # ( databaseRef:)
-
-[//]: # ( name: "quick-postgres")
-
-[//]: # ( namespace: demo)
-
-[//]: # ( connectionPool:)
-
-[//]: # ( maxClientConnections: 20)
-
-[//]: # ( reservePoolSize: 5)
-
-[//]: # ( monitor:)
-
-[//]: # ( agent: prometheus.io/operator)
-
-[//]: # ( prometheus:)
-
-[//]: # ( serviceMonitor:)
-
-[//]: # ( labels:)
-
-[//]: # ( release: prometheus)
-
-[//]: # ( interval: 10s)
-
-[//]: # (```)
-
-[//]: # ()
-[//]: # (### spec.version)
-
-[//]: # ()
-[//]: # (`spec.version` is a required field that specifies the name of the [PgBouncerVersion](/docs/guides/pgbouncer/concepts/catalog.md) crd where the docker images are specified. Currently, when you install KubeDB, it creates the following `PgBouncerVersion` resources,)
-
-[//]: # ()
-[//]: # (- `1.18.0`)
-
-[//]: # ()
-[//]: # (### spec.replicas)
-
-[//]: # ()
-[//]: # (`spec.replicas` specifies the total number of available pgbouncer server nodes for each crd. KubeDB uses `PodDisruptionBudget` to ensure that majority of the replicas are available during [voluntary disruptions](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/#voluntary-and-involuntary-disruptions).)
-
-[//]: # ()
-[//]: # (### spec.databases)
-
-[//]: # ()
-[//]: # (`spec.databases` specifies an array of postgres databases that pgbouncer should add to its connection pool. It contains three `required` fields and two `optional` fields for each database connection.)
-
-[//]: # ()
-[//]: # (- `spec.databases.alias`: specifies an alias for the target database located in a postgres server specified by an appbinding.)
-
-[//]: # (- `spec.databases.databaseName`: specifies the name of the target database.)
-
-[//]: # (- `spec.databases.databaseRef`: specifies the name and namespace of the AppBinding that contains the path to a PostgreSQL server where the target database can be found.)
-
-[//]: # ()
-[//]: # (ConnectionPool is used to configure pgbouncer connection-pool. All the fields here are accompanied by default values and can be left unspecified if no customisation is required by the user.)
-
-[//]: # ()
-[//]: # (- `spec.connectionPool.port`: specifies the port on which pgbouncer should listen to connect with clients. The default is 5432.)
-
-[//]: # ()
-[//]: # (- `spec.connectionPool.poolMode`: specifies the value of pool_mode. Specifies when a server connection can be reused by other clients.)
-
-[//]: # ()
-[//]: # ( - session)
-
-[//]: # ()
-[//]: # ( Server is released back to pool after client disconnects. Default.)
-
-[//]: # ()
-[//]: # ( - transaction)
-
-[//]: # ()
-[//]: # ( Server is released back to pool after transaction finishes.)
+## What is Druid
+
+`Druid` is a Kubernetes `Custom Resource Definitions` (CRD). It provides declarative configuration for [Druid](https://druid.apache.org/) in a Kubernetes native way. You only need to describe the desired database configuration in a `Druid` object, and the KubeDB operator will create Kubernetes objects in the desired state for you.
+
+## Druid Spec
+
+As with all other Kubernetes objects, a Druid needs `apiVersion`, `kind`, and `metadata` fields. It also needs a `.spec` section. Below is an example Druid object.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Druid
+metadata:
+ name: druid
+ namespace: demo
+spec:
+ deepStorage:
+ type: s3
+ configSecret:
+ name: deep-storage-config
+ metadataStorage:
+ type: PostgreSQL
+ name: pg-demo
+ namespace: demo
+ externallyManaged: true
+ zookeeperRef:
+ name: zk-demo
+ namespace: demo
+ externallyManaged: true
+ authSecret:
+ name: druid-admin-cred
+ configSecret:
+ name: druid-custom-config
+ enableSSL: true
+ healthChecker:
+ failureThreshold: 3
+ periodSeconds: 20
+ timeoutSeconds: 10
+ keystoreCredSecret:
+ name: druid-keystore-cred
+ deletionPolicy: DoNotTerminate
+ tls:
+ certificates:
+ - alias: server
+ secretName: druid-server-cert
+ - alias: client
+ secretName: druid-client-cert
+ issuerRef:
+ apiGroup: cert-manager.io
+ kind: Issuer
+ name: druid-ca-issuer
+ topology:
+ coordinators:
+ podTemplate:
+ spec:
+ containers:
+ - name: druid
+ resources:
+ requests:
+ cpu: 500m
+ memory: 1024Mi
+ limits:
+ cpu: 700m
+ memory: 2Gi
+ overlords:
+ podTemplate:
+ spec:
+ containers:
+ - name: druid
+ resources:
+ requests:
+ cpu: 500m
+ memory: 1024Mi
+ limits:
+ cpu: 700m
+ memory: 2Gi
+ brokers:
+ podTemplate:
+ spec:
+ containers:
+ - name: druid
+ resources:
+ requests:
+ cpu: 500m
+ memory: 1024Mi
+ limits:
+ cpu: 700m
+ memory: 2Gi
+ routers:
+ podTemplate:
+ spec:
+ containers:
+ - name: druid
+ resources:
+ requests:
+ cpu: 500m
+ memory: 1024Mi
+ limits:
+ cpu: 700m
+ memory: 2Gi
+ middleManagers:
+ podTemplate:
+ spec:
+ containers:
+ - name: druid
+ resources:
+ requests:
+ cpu: 500m
+ memory: 1024Mi
+ limits:
+ cpu: 700m
+ memory: 2Gi
+ storageType: Durable
+ storage:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 10Gi
+ storageClassName: standard
+ historicals:
+ podTemplate:
+ spec:
+ containers:
+ - name: druid
+ resources:
+ requests:
+ cpu: 500m
+ memory: 1024Mi
+ limits:
+ cpu: 700m
+ memory: 2Gi
+ storageType: Durable
+ storage:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 10Gi
+ storageClassName: standard
+ monitor:
+ agent: prometheus.io/operator
+ prometheus:
+ exporter:
+ port: 56790
+ serviceMonitor:
+ labels:
+ release: prometheus
+ interval: 10s
+ version: 30.0.0
+```
+
+### spec.version
+
+`spec.version` is a required field specifying the name of the [DruidVersion](/docs/guides/druid/concepts/druidversion.md) crd where the docker images are specified. Currently, when you install KubeDB, it creates the following `DruidVersion` resources,
+
+- `28.0.1`
+- `30.0.0`
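+
+You can list the `DruidVersion` objects available in your cluster with `kubectl`; the exact set depends on the KubeDB catalog version you installed.
+
+```bash
+# List the DruidVersion objects installed by the KubeDB catalog
+$ kubectl get druidversion
+```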
+
+### spec.replicas
+
+`spec.replicas` specifies the number of members in the Druid replicaset.
+
+If `spec.topology` is set, then `spec.replicas` needs to be empty. Instead, use the per-node `replicas` fields under `spec.topology` (for example, `spec.topology.coordinators.replicas` and `spec.topology.brokers.replicas`).
+
+KubeDB uses `PodDisruptionBudget` to ensure that majority of these replicas are available during [voluntary disruptions](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/#voluntary-and-involuntary-disruptions) so that quorum is maintained.
+
+### spec.authSecret
+
+`spec.authSecret` is an optional field that points to a Secret used to hold credentials for the Druid `admin` user. If not set, the KubeDB operator creates a new Secret `{druid-object-name}-admin-cred` for storing the password of the `admin` user for each Druid object.
+
+We can use this field in 3 modes.
+1. Using an external secret. In this case, you need to create an auth secret first with the required fields, then specify the secret name when creating the Druid object using `spec.authSecret.name` & set `spec.authSecret.externallyManaged` to true.
+```yaml
+authSecret:
+ name:
+ externallyManaged: true
+```
+
+2. Specifying the secret name only. In this case, you need to specify the secret name when creating the Druid object using `spec.authSecret.name`. `externallyManaged` is `false` by default.
+```yaml
+authSecret:
+ name:
+```
+
+3. Let KubeDB do everything for you. In this case, no work for you.
+
+The auth secret contains a `username` key and a `password` key which hold the username and password respectively for the Druid `admin` user.
+
+Example:
+
+```bash
+$ kubectl create secret generic druid-auth -n demo \
+--from-literal=username=jhon-doe \
+--from-literal=password=6q8u_2jMOW-OOZXk
+secret "druid-auth" created
+```
-[//]: # ()
-[//]: # ( - statement)
+```yaml
+apiVersion: v1
+data:
+ password: NnE4dV8yak1PVy1PT1pYaw==
+ username: amhvbi1kb2U=
+kind: Secret
+metadata:
+ name: druid-auth
+ namespace: demo
+type: Opaque
+```
-[//]: # ()
-[//]: # ( Server is released back to pool after query finishes. Long transactions spanning multiple statements are disallowed in this mode.)
+Secrets provided by users are not managed by KubeDB, and therefore, won't be modified or garbage collected by the KubeDB operator (version 0.13.0 and higher).
-[//]: # ()
-[//]: # (- `spec.connectionPool.maxClientConnections`: specifies the value of max_client_conn. When increased then the file descriptor limits should also be increased. Note that actual number of file descriptors used is more than max_client_conn. Theoretical maximum used is:)
+### spec.configSecret
-[//]: # ()
-[//]: # ( ```bash)
+`spec.configSecret` is an optional field that points to a Secret used to hold custom Druid configuration. If not set, the KubeDB operator will use the default configuration for Druid.
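+
+A sketch of such a secret is shown below. The key name and property used here (`middleManagers.properties`, `druid.worker.capacity`) are illustrative assumptions; see the Druid custom configuration guide for the exact keys the operator supports.
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  name: druid-custom-config
+  namespace: demo
+stringData:
+  # Assumed key layout: per-node property files as secret keys
+  middleManagers.properties: |
+    druid.worker.capacity=5
+```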
-[//]: # ( max_client_conn + (max pool_size * total databases * total users))
+### spec.topology
-[//]: # ( ```)
+`spec.topology` represents the topology configuration for the Druid cluster.
-[//]: # ()
-[//]: # ( if each user connects under its own username to server. If a database user is specified in connect string (all users connect under same username), the theoretical maximum is:)
+When `spec.topology` is set, the following fields need to be empty; otherwise, the validating webhook will throw an error.
-[//]: # ()
-[//]: # ( ```bash)
+- `spec.replicas`
+- `spec.podTemplate`
+- `spec.storage`
-[//]: # ( max_client_conn + (max pool_size * total databases))
+#### spec.topology.coordinators
-[//]: # ( ```)
+`coordinators` represents configuration for coordinators node of Druid. It is a mandatory node. So, if not mentioned in the `YAML`, this node will be initialized by `KubeDB` operator.
-[//]: # ()
-[//]: # ( The theoretical maximum should be never reached, unless somebody deliberately crafts special load for it. Still, it means you should set the number of file descriptors to a safely high number.)
+Available configurable fields:
-[//]: # ()
-[//]: # ( Search for `ulimit` in your favorite shell man page. Note: `ulimit` does not apply in a Windows environment.)
+- `topology.coordinators`:
+ - `replicas` (`: "1"`) - is an `optional` field to specify the number of nodes (ie. pods ) that act as the dedicated Druid `coordinators` pods. Defaults to `1`.
+ - `suffix` (`: "coordinators"`) - is an `optional` field that is added as the suffix of the coordinators PetSet name. Defaults to `coordinators`.
+ - `resources` (`: "cpu: 500m, memory: 1Gi" `) - is an `optional` field that specifies how much computational resources to request or to limit for each of the `coordinators` pods.
-[//]: # ()
-[//]: # ( Default: 100)
+#### spec.topology.overlords
-[//]: # ()
-[//]: # (- `spec.connectionPool.defaultPoolSize`: specifies the value of default_pool_size. Used to determine how many server connections to allow per user/database pair. Can be overridden in the per-database configuration.)
+`overlords` represents configuration for overlords node of Druid. It is an optional node. So, it is only going to be deployed by the `KubeDB` operator if explicitly mentioned in the `YAML`. Otherwise, `coordinators` node will act as `overlords`.
-[//]: # ()
-[//]: # ( Default: 20)
+Available configurable fields:
-[//]: # ()
-[//]: # (- `spec.connectionPool.minPoolSize`: specifies the value of min_pool_size. PgBouncer adds more server connections to pool if below this number. Improves behavior when usual load comes suddenly back after period of total inactivity.)
+- `topology.overlords`:
+ - `replicas` (`: "1"`) - is an `optional` field to specify the number of nodes (ie. pods ) that act as the dedicated Druid `overlords` pods. Defaults to `1`.
+ - `suffix` (`: "overlords"`) - is an `optional` field that is added as the suffix of the overlords PetSet name. Defaults to `overlords`.
+ - `resources` (`: "cpu: 500m, memory: 1Gi" `) - is an `optional` field that specifies how much computational resources to request or to limit for each of the `overlords` pods.
-[//]: # ()
-[//]: # ( Default: 0 (disabled))
+#### spec.topology.brokers
-[//]: # ()
-[//]: # (- `spec.connectionPool.reservePoolSize`: specifies the value of reserve_pool_size. Used to determine how many additional connections to allow to a pool. 0 disables.)
+`brokers` represents configuration for brokers node of Druid. It is a mandatory node. So, if not mentioned in the `YAML`, this node will be initialized by `KubeDB` operator.
-[//]: # ()
-[//]: # ( Default: 0 (disabled))
+Available configurable fields:
-[//]: # ()
-[//]: # (- `spec.connectionPool.reservePoolTimeout`: specifies the value of reserve_pool_timeout. If a client has not been serviced in this many seconds, pgbouncer enables use of additional connections from reserve pool. 0 disables.)
+- `topology.brokers`:
+ - `replicas` (`: "1"`) - is an `optional` field to specify the number of nodes (ie. pods ) that act as the dedicated Druid `brokers` pods. Defaults to `1`.
+ - `suffix` (`: "brokers"`) - is an `optional` field that is added as the suffix of the brokers PetSet name. Defaults to `brokers`.
+ - `resources` (`: "cpu: 500m, memory: 1Gi" `) - is an `optional` field that specifies how much computational resources to request or to limit for each of the `brokers` pods.
-[//]: # ()
-[//]: # ( Default: 5.0)
+#### spec.topology.routers
-[//]: # ()
-[//]: # (- `spec.connectionPool.maxDbConnections`: specifies the value of max_db_connections. PgBouncer does not allow more than this many connections per-database (regardless of pool - i.e. user). It should be noted that when you hit the limit, closing a client connection to one pool will not immediately allow a server connection to be established for another pool, because the server connection for the first pool is still open. Once the server connection closes (due to idle timeout), a new server connection will immediately be opened for the waiting pool.)
+`routers` represents configuration for routers node of Druid. It is an optional node. So, it is only going to be deployed by the `KubeDB` operator if explicitly mentioned in the `YAML`. Otherwise, `coordinators` node will act as `routers`.
-[//]: # ()
-[//]: # ( Default: unlimited)
+Available configurable fields:
-[//]: # ()
-[//]: # (- `spec.connectionPool.maxUserConnections`: specifies the value of max_user_connections. PgBouncer does not allow more than this many connections per-user (regardless of pool - i.e. user). It should be noted that when you hit the limit, closing a client connection to one pool will not immediately allow a server connection to be established for another pool, because the server connection for the first pool is still open. Once the server connection closes (due to idle timeout), a new server connection will immediately be opened for the waiting pool.)
+- `topology.routers`:
+ - `replicas` (`: "1"`) - is an `optional` field to specify the number of nodes (ie. pods ) that act as the dedicated Druid `routers` pods. Defaults to `1`.
+ - `suffix` (`: "routers"`) - is an `optional` field that is added as the suffix of the routers PetSet name. Defaults to `routers`.
+ - `resources` (`: "cpu: 500m, memory: 1Gi" `) - is an `optional` field that specifies how much computational resources to request or to limit for each of the `routers` pods.
-[//]: # ( Default: unlimited)
+#### spec.topology.historicals
-[//]: # ()
-[//]: # (- `spec.connectionPool.statsPeriod`: sets how often the averages shown in various `SHOW` commands are updated and how often aggregated statistics are written to the log.)
+`historicals` represents configuration for historicals node of Druid. It is a mandatory node. So, if not mentioned in the `YAML`, this node will be initialized by `KubeDB` operator.
-[//]: # ( Default: 60)
+Available configurable fields:
-[//]: # ()
-[//]: # (- `spec.connectionPool.authType`: specifies how to authenticate users. PgBouncer supports several authentication methods including pam, md5, scram-sha-256, trust , or any. However hba, and cert are not supported.)
+- `topology.historicals`:
+ - `replicas` (`: "1"`) - is an `optional` field to specify the number of nodes (ie. pods ) that act as the dedicated Druid `historicals` pods. Defaults to `1`.
+  - `suffix` (`: "historicals"`) - is an `optional` field that is added as the suffix of the historicals PetSet name. Defaults to `historicals`.
+ - `storage` is a `required` field that specifies how much storage to claim for each of the `historicals` pods.
+ - `resources` (`: "cpu: 500m, memory: 1Gi" `) - is an `optional` field that specifies how much computational resources to request or to limit for each of the `historicals` pods.
-[//]: # ()
-[//]: # (- `spec.connectionPool.IgnoreStartupParameters`: specifies comma-separated startup parameters that pgbouncer knows are handled by admin and it can ignore them.)
+#### spec.topology.middleManagers
-[//]: # ()
-[//]: # (### spec.monitor)
+`middleManagers` represents configuration for middleManagers node of Druid. It is a mandatory node. So, if not mentioned in the `YAML`, this node will be initialized by `KubeDB` operator.
-[//]: # ()
-[//]: # (PgBouncer managed by KubeDB can be monitored with builtin-Prometheus and Prometheus operator out-of-the-box. To learn more,)
+Available configurable fields:
-[//]: # ()
-[//]: # (- [Monitor PgBouncer with builtin Prometheus](/docs/guides/pgbouncer/monitoring/using-builtin-prometheus.md))
+- `topology.middleManagers`:
+ - `replicas` (`: "1"`) - is an `optional` field to specify the number of nodes (ie. pods ) that act as the dedicated Druid `middleManagers` pods. Defaults to `1`.
+  - `suffix` (`: "middleManagers"`) - is an `optional` field that is added as the suffix of the middleManagers PetSet name. Defaults to `middleManagers`.
+ - `storage` is a `required` field that specifies how much storage to claim for each of the `middleManagers` pods.
+ - `resources` (`: "cpu: 500m, memory: 1Gi" `) - is an `optional` field that specifies how much computational resources to request or to limit for each of the `middleManagers` pods.
-[//]: # (- [Monitor PgBouncer with Prometheus operator](/docs/guides/pgbouncer/monitoring/using-prometheus-operator.md))
-[//]: # ()
-[//]: # (### spec.podTemplate)
+### spec.enableSSL
-[//]: # ()
-[//]: # (KubeDB allows providing a template for pgbouncer pods through `spec.podTemplate`. KubeDB operator will pass the information provided in `spec.podTemplate` to the PetSet created for PgBouncer server)
+`spec.enableSSL` is an `optional` field that specifies whether to enable TLS for the HTTP layer. The default value of this field is `false`.
-[//]: # ()
-[//]: # (KubeDB accept following fields to set in `spec.podTemplate:`)
+```yaml
+spec:
+ enableSSL: true
+```
-[//]: # ()
-[//]: # (- metadata)
+### spec.tls
-[//]: # ( - annotations (pod's annotation))
+`spec.tls` specifies the TLS/SSL configurations. The KubeDB operator supports TLS management by using the [cert-manager](https://cert-manager.io/). Currently, the operator only supports the `PKCS#8` encoded certificates.
-[//]: # (- controller)
+```yaml
+spec:
+ tls:
+ issuerRef:
+ apiGroup: "cert-manager.io"
+ kind: Issuer
+ name: druid-issuer
+ certificates:
+ - alias: server
+ privateKey:
+ encoding: PKCS8
+ secretName: druid-client-cert
+ subject:
+ organizations:
+ - kubedb
+ - alias: http
+ privateKey:
+ encoding: PKCS8
+ secretName: druid-server-cert
+ subject:
+ organizations:
+ - kubedb
+```
-[//]: # ( - annotations (petset's annotation))
+The `spec.tls` contains the following fields:
-[//]: # (- spec:)
+- `tls.issuerRef` - is an `optional` field that references to the `Issuer` or `ClusterIssuer` custom resource object of [cert-manager](https://cert-manager.io/docs/concepts/issuer/). It is used to generate the necessary certificate secrets for Druid. If the `issuerRef` is not specified, the operator creates a self-signed CA and also creates necessary certificate (valid: 365 days) secrets using that CA.
+ - `apiGroup` - is the group name of the resource that is being referenced. Currently, the only supported value is `cert-manager.io`.
+ - `kind` - is the type of resource that is being referenced. The supported values are `Issuer` and `ClusterIssuer`.
+ - `name` - is the name of the resource ( `Issuer` or `ClusterIssuer` ) that is being referenced.
-[//]: # ( - env)
+- `tls.certificates` - is an `optional` field that specifies a list of certificate configurations used to configure the certificates. It has the following fields:
+ - `alias` - represents the identifier of the certificate. It has the following possible value:
+ - `server` - is used for the server certificate configuration.
+ - `client` - is used for the client certificate configuration.
-[//]: # ( - resources)
+ - `secretName` - ( `string` | `"-alias-cert"` ) - specifies the k8s secret name that holds the certificates.
-[//]: # ( - initContainers)
+ - `subject` - specifies an `X.509` distinguished name (DN). It has the following configurable fields:
+ - `organizations` ( `[]string` | `nil` ) - is a list of organization names.
+ - `organizationalUnits` ( `[]string` | `nil` ) - is a list of organization unit names.
+ - `countries` ( `[]string` | `nil` ) - is a list of country names (ie. Country Codes).
+ - `localities` ( `[]string` | `nil` ) - is a list of locality names.
+ - `provinces` ( `[]string` | `nil` ) - is a list of province names.
+ - `streetAddresses` ( `[]string` | `nil` ) - is a list of street addresses.
+ - `postalCodes` ( `[]string` | `nil` ) - is a list of postal codes.
+ - `serialNumber` ( `string` | `""` ) is a serial number.
-[//]: # ( - imagePullSecrets)
+ For more details, visit [here](https://golang.org/pkg/crypto/x509/pkix/#Name).
-[//]: # ( - affinity)
+ - `duration` ( `string` | `""` ) - is the period during which the certificate is valid. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as `"300m"`, `"1.5h"` or `"20h45m"`. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
+ - `renewBefore` ( `string` | `""` ) - is a specifiable time before expiration duration.
+ - `dnsNames` ( `[]string` | `nil` ) - is a list of subject alt names.
+ - `ipAddresses` ( `[]string` | `nil` ) - is a list of IP addresses.
+ - `uris` ( `[]string` | `nil` ) - is a list of URI Subject Alternative Names.
+ - `emailAddresses` ( `[]string` | `nil` ) - is a list of email Subject Alternative Names.
-[//]: # ( - tolerations)
-[//]: # ( - priorityClassName)
+### spec.storageType
-[//]: # ( - priority)
+`spec.storageType` is an optional field that specifies the type of storage to use for database. It can be either `Durable` or `Ephemeral`. The default value of this field is `Durable`. If `Ephemeral` is used then KubeDB will create Druid cluster using [emptyDir](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir) volume.
-[//]: # ( - lifecycle)
+### spec.storage
-[//]: # ()
-[//]: # (Usage of some fields in `spec.podTemplate` is described below,)
+If you set `spec.storageType` to `Durable`, then `spec.storage` is a required field that specifies the StorageClass of PVCs dynamically allocated to store data for the database. This storage spec will be passed to the PetSet created by KubeDB operator to run database pods. You can specify any StorageClass available in your cluster with appropriate resource requests.
-[//]: # ()
-[//]: # (#### spec.podTemplate.spec.env)
+- `spec.storage.storageClassName` is the name of the StorageClass used to provision PVCs. PVCs don’t necessarily have to request a class. A PVC with its storageClassName set equal to "" is always interpreted to be requesting a PV with no class, so it can only be bound to PVs with no class (no annotation or one set equal to ""). A PVC with no storageClassName is not quite the same and is treated differently by the cluster depending on whether the DefaultStorageClass admission plugin is turned on.
+- `spec.storage.accessModes` uses the same conventions as Kubernetes PVCs when requesting storage with specific access modes.
+- `spec.storage.resources` can be used to request specific quantities of storage. This follows the same resource model used by PVCs.
-[//]: # ()
-[//]: # (`spec.podTemplate.spec.env` is an optional field that specifies the environment variables to pass to the PgBouncer docker image. To know about supported environment variables, please visit [here](https://hub.docker.com/kubedb/pgbouncer/).)
+To learn how to configure `spec.storage`, please visit the links below:
-[//]: # ()
-[//]: # (Also, note that KubeDB does not allow updates to the environment variables as updating them does not have any effect once the server is created. If you try to update environment variables, KubeDB operator will reject the request with following error,)
+- https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims
-[//]: # ()
-[//]: # (```ini)
+### spec.monitor
-[//]: # (Error from server (BadRequest): error when applying patch:)
+Druid managed by KubeDB can be monitored with Prometheus operator out-of-the-box. To learn more,
+- [Monitor Apache Druid with Prometheus operator](/docs/guides/druid/monitoring/using-prometheus-operator.md)
+- [Monitor Apache Druid with Built-in Prometheus](/docs/guides/druid/monitoring/using-builtin-prometheus.md)
-[//]: # (...)
+### spec.podTemplate
-[//]: # (for: "./pgbouncer.yaml": admission webhook "pgbouncer.validators.kubedb.com" denied the request: precondition failed for:)
+KubeDB allows providing a template for the database pods through `spec.podTemplate`. The KubeDB operator will pass the information provided in `spec.podTemplate` to the PetSet created for the Druid cluster.
-[//]: # (...)
+KubeDB accepts the following fields to set in `spec.podTemplate:`
-[//]: # (At least one of the following was changed:)
+- metadata:
+ - annotations (pod's annotation)
+ - labels (pod's labels)
+- controller:
+ - annotations (petset's annotation)
+ - labels (petset's labels)
+- spec:
+ - containers
+ - volumes
+ - podPlacementPolicy
+ - initContainers
+ - containers
+ - imagePullSecrets
+ - nodeSelector
+ - affinity
+ - serviceAccountName
+ - schedulerName
+ - tolerations
+ - priorityClassName
+ - priority
+ - securityContext
+ - livenessProbe
+ - readinessProbe
+ - lifecycle
-[//]: # ( apiVersion)
+You can check out the full list [here](https://github.com/kmodules/offshoot-api/blob/master/api/v2/types.go#L26C1-L279C1).
+Usage of some fields of `spec.podTemplate` is described below,
-[//]: # ( kind)
+#### spec.podTemplate.spec.tolerations
-[//]: # ( name)
+The `spec.podTemplate.spec.tolerations` is an optional field. This can be used to specify the pod's tolerations.
-[//]: # ( namespace)
+#### spec.podTemplate.spec.volumes
-[//]: # ( spec.podTemplate.spec.nodeSelector)
+The `spec.podTemplate.spec.volumes` is an optional field. This can be used to provide the list of volumes that can be mounted by containers belonging to the pod.
-[//]: # (```)
+#### spec.podTemplate.spec.podPlacementPolicy
-[//]: # ()
-[//]: # (#### spec.podTemplate.spec.imagePullSecrets)
+`spec.podTemplate.spec.podPlacementPolicy` is an optional field. This can be used to provide the reference of the podPlacementPolicy. This will be used by our PetSet controller to place the db pods throughout the region, zone & nodes according to the policy. It utilizes the Kubernetes affinity & podTopologySpreadConstraints features to do so.
-[//]: # ()
-[//]: # (`spec.podTemplate.spec.imagePullSecrets` is an optional field that points to secrets to be used for pulling docker image if you are using a private docker registry. For more details on how to use private docker registry, please visit [here](/docs/guides/pgbouncer/private-registry/using-private-registry.md).)
+#### spec.podTemplate.spec.nodeSelector
-[//]: # ()
-[//]: # (#### spec.podTemplate.spec.nodeSelector)
+`spec.podTemplate.spec.nodeSelector` is an optional field that specifies a map of key-value pairs. For the pod to be eligible to run on a node, the node must have each of the indicated key-value pairs as labels (it can have additional labels as well). To learn more, see [here](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector).
-[//]: # ()
-[//]: # (`spec.podTemplate.spec.nodeSelector` is an optional field that specifies a map of key-value pairs. For the pod to be eligible to run on a node, the node must have each of the indicated key-value pairs as labels (it can have additional labels as well). To learn more, see [here](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector) .)
+### spec.serviceTemplates
-[//]: # ()
-[//]: # (#### spec.podTemplate.spec.resources)
+You can also provide template for the services created by KubeDB operator for Druid cluster through `spec.serviceTemplates`. This will allow you to set the type and other properties of the services.
-[//]: # ()
-[//]: # (`spec.podTemplate.spec.resources` is an optional field. This can be used to request compute resources required by the database pods. To learn more, visit [here](http://kubernetes.io/docs/user-guide/compute-resources/).)
+KubeDB allows following fields to set in `spec.serviceTemplates`:
+- `alias` represents the identifier of the service. It has the following possible value:
+  - `stats` is used for the `exporter` service identification.
-[//]: # ()
-[//]: # (### spec.serviceTemplate)
+Druid comes with four services for `coordinators`, `overlords`, `routers` and `brokers`. There are two options for providing serviceTemplates:
+ - To provide `serviceTemplates` for a specific service, the `serviceTemplates.ports.port` should be equal to the port of that service and `serviceTemplate` will be used for that particular service only.
+ - However, to provide a common `serviceTemplates`, `serviceTemplates.ports.port` should be empty.
-[//]: # ()
-[//]: # (KubeDB creates a service for each PgBouncer instance. The service has the same name as the `pgbouncer.name` and points to pgbouncer pods.)
+- metadata:
+ - labels
+ - annotations
+- spec:
+ - type
+ - ports
+ - clusterIP
+ - externalIPs
+ - loadBalancerIP
+ - loadBalancerSourceRanges
+ - externalTrafficPolicy
+ - healthCheckNodePort
+ - sessionAffinityConfig
-[//]: # ()
-[//]: # (You can provide template for this service using `spec.serviceTemplate`. This will allow you to set the type and other properties of the service. If `spec.serviceTemplate` is not provided, KubeDB will create a service of type `ClusterIP` with minimal settings.)
+See [here](https://github.com/kmodules/offshoot-api/blob/kubernetes-1.21.1/api/v1/types.go#L237) to understand these fields in detail.
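+
+A minimal sketch of a `serviceTemplates` entry is given below. The `primary` alias and the annotation are assumptions for illustration, and the port `8888` is assumed to be the default Druid router port, so this template would apply to the routers service only:
+
+```yaml
+spec:
+  serviceTemplates:
+    - alias: primary
+      metadata:
+        annotations:
+          passMe: ToService        # illustrative annotation
+      spec:
+        type: NodePort
+        ports:
+          - name: routers
+            port: 8888             # matches the routers service port, so only that service is affected
+```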
-[//]: # ()
-[//]: # (KubeDB allows the following fields to set in `spec.serviceTemplate`:)
-[//]: # ()
-[//]: # (- metadata:)
+#### spec.<node>.podTemplate.spec.containers
-[//]: # ( - annotations)
+The `spec.<node>.podTemplate.spec.containers` can be used to provide the list of containers and their configurations for the database pod. Some of the fields are described below,
-[//]: # (- spec:)
+##### spec.<node>.podTemplate.spec.containers[].name
+The `spec.<node>.podTemplate.spec.containers[].name` field is used to specify the name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated.
-[//]: # ( - type)
+##### spec.<node>.podTemplate.spec.containers[].args
+`spec.<node>.podTemplate.spec.containers[].args` is an optional field. This can be used to provide additional arguments to the database installation.
-[//]: # ( - ports)
+##### spec.<node>.podTemplate.spec.containers[].env
-[//]: # ( - clusterIP)
+`spec.<node>.podTemplate.spec.containers[].env` is an optional field that specifies the environment variables to pass to the Druid containers.
-[//]: # ( - externalIPs)
+##### spec.<node>.podTemplate.spec.containers[].resources
-[//]: # ( - loadBalancerIP)
+`spec.<node>.podTemplate.spec.containers[].resources` is an optional field. This can be used to request compute resources required by containers of the database pods. To learn more, visit [here](http://kubernetes.io/docs/user-guide/compute-resources/).
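+
+A minimal sketch of customizing the main `druid` container of the `brokers` node through these fields is shown below. The environment variable and the resource values are assumptions for illustration only:
+
+```yaml
+spec:
+  topology:
+    brokers:
+      podTemplate:
+        spec:
+          containers:
+            - name: druid                 # the main database container
+              env:
+                - name: EXTRA_JAVA_OPTS   # hypothetical variable, shown only as an example
+                  value: "-Duser.timezone=UTC"
+              resources:
+                requests:
+                  cpu: "500m"
+                  memory: "1Gi"
+                limits:
+                  cpu: "1"
+                  memory: "2Gi"
+```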
-[//]: # ( - loadBalancerSourceRanges)
+### spec.deletionPolicy
-[//]: # ( - externalTrafficPolicy)
+`deletionPolicy` gives flexibility whether to `nullify` (reject) the delete operation of the `Druid` CR, or to decide which resources KubeDB should keep or delete when you delete the `Druid` CR. KubeDB provides the following four deletion policies:
-[//]: # ( - healthCheckNodePort)
+- DoNotTerminate
+- WipeOut
+- Halt
+- Delete
-[//]: # ( - sessionAffinityConfig)
+When `deletionPolicy` is `DoNotTerminate`, KubeDB takes advantage of the `ValidationWebhook` feature in Kubernetes 1.9.0 or later clusters to implement the `DoNotTerminate` feature. If the admission webhook is enabled, `DoNotTerminate` prevents users from deleting the database as long as `spec.deletionPolicy` is set to `DoNotTerminate`.
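+
+For example, setting the policy in the `Druid` spec is as simple as:
+
+```yaml
+spec:
+  deletionPolicy: DoNotTerminate
+```
+
+With this set, a `kubectl delete dr <druid-name> -n <namespace>` request will be rejected by the validating webhook until the policy is changed to something else.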
-[//]: # ()
-[//]: # (See [here](https://github.com/kmodules/offshoot-api/blob/kubernetes-1.16.3/api/v1/types.go#L163) to understand these fields in detail.)
+### spec.healthChecker
+It defines the attributes for the health checker.
+- `spec.healthChecker.periodSeconds` specifies how often to perform the health check.
+- `spec.healthChecker.timeoutSeconds` specifies the number of seconds after which the probe times out.
+- `spec.healthChecker.failureThreshold` specifies minimum consecutive failures for the healthChecker to be considered failed.
+- `spec.healthChecker.disableWriteCheck` specifies whether to disable the writeCheck or not.
-[//]: # ()
-[//]: # (## Next Steps)
+Know details about KubeDB Health checking from this [blog post](https://appscode.com/blog/post/kubedb-health-checker/).
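+
+A hedged sketch of tuning the health checker inside a `Druid` spec is shown below; the values are illustrative, not the operator defaults:
+
+```yaml
+spec:
+  healthChecker:
+    periodSeconds: 15
+    timeoutSeconds: 10
+    failureThreshold: 2
+    disableWriteCheck: false
+```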
-[//]: # ()
-[//]: # (- Learn how to use KubeDB to run a PostgreSQL database [here](/docs/guides/postgres/README.md).)
+## Next Steps
-[//]: # (- Learn how to how to get started with PgBouncer [here](/docs/guides/pgbouncer/quickstart/quickstart.md).)
+- Learn how to use KubeDB to run Apache Druid cluster [here](/docs/guides/druid/README.md).
+- Deploy a [dedicated topology cluster](/docs/guides/druid/clustering/guide/index.md) for Apache Druid.
+- Monitor your Druid cluster with KubeDB using [`out-of-the-box` Prometheus operator](/docs/guides/druid/monitoring/using-prometheus-operator.md).
+- Detail concepts of [DruidVersion object](/docs/guides/druid/concepts/druidversion.md).
-[//]: # (- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).)
+[//]: # (- Learn to use KubeDB managed Druid objects using [CLIs](/docs/guides/druid/cli/cli.md).)
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).
diff --git a/docs/guides/druid/concepts/druidautoscaler.md b/docs/guides/druid/concepts/druidautoscaler.md
new file mode 100644
index 0000000000..b22504ce2f
--- /dev/null
+++ b/docs/guides/druid/concepts/druidautoscaler.md
@@ -0,0 +1,132 @@
+---
+title: DruidAutoscaler CRD
+menu:
+ docs_{{ .version }}:
+ identifier: guides-druid-concepts-druidautoscaler
+ name: DruidAutoscaler
+ parent: guides-druid-concepts
+ weight: 50
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# DruidAutoscaler
+
+## What is DruidAutoscaler
+
+`DruidAutoscaler` is a Kubernetes `Custom Resource Definitions` (CRD). It provides a declarative configuration for autoscaling [Druid](https://druid.apache.org/) compute resources and storage of database components in a Kubernetes native way.
+
+## DruidAutoscaler CRD Specifications
+
+Like any official Kubernetes resource, a `DruidAutoscaler` has `TypeMeta`, `ObjectMeta`, `Spec` and `Status` sections.
+
+Here, some sample `DruidAutoscaler` CRs for autoscaling different components of the database are given below:
+
+**Sample `DruidAutoscaler` for `druid` cluster:**
+
+```yaml
+apiVersion: autoscaling.kubedb.com/v1alpha1
+kind: DruidAutoscaler
+metadata:
+ name: dr-autoscaler
+ namespace: demo
+spec:
+ databaseRef:
+ name: druid-prod
+ opsRequestOptions:
+ timeout: 3m
+ apply: IfReady
+ compute:
+ coordinators:
+ trigger: "On"
+ podLifeTimeThreshold: 24h
+ minAllowed:
+ cpu: 200m
+ memory: 300Mi
+ maxAllowed:
+ cpu: 1
+ memory: 1Gi
+ controlledResources: ["cpu", "memory"]
+ containerControlledValues: "RequestsAndLimits"
+ resourceDiffPercentage: 10
+ brokers:
+ trigger: "On"
+ podLifeTimeThreshold: 24h
+ minAllowed:
+ cpu: 200m
+ memory: 300Mi
+ maxAllowed:
+ cpu: 1
+ memory: 1Gi
+ controlledResources: ["cpu", "memory"]
+ containerControlledValues: "RequestsAndLimits"
+ resourceDiffPercentage: 10
+ storage:
+ historicals:
+ expansionMode: "Online"
+ trigger: "On"
+ usageThreshold: 60
+ scalingThreshold: 50
+ middleManagers:
+ expansionMode: "Online"
+ trigger: "On"
+ usageThreshold: 60
+ scalingThreshold: 50
+```
+
+Here, we are going to describe the various sections of a `DruidAutoscaler` crd.
+
+A `DruidAutoscaler` object has the following fields in the `spec` section.
+
+### spec.databaseRef
+
+`spec.databaseRef` is a required field that points to the [Druid](/docs/guides/druid/concepts/druid.md) object for which the autoscaling will be performed. This field consists of the following sub-field:
+
+- **spec.databaseRef.name :** specifies the name of the [Druid](/docs/guides/druid/concepts/druid.md) object.
+
+### spec.opsRequestOptions
+These are the options to pass to the internally created opsRequest CR. `opsRequestOptions` has two fields: `timeout` and `apply`, as shown in the sample above.
+
+### spec.compute
+
+`spec.compute` specifies the autoscaling configuration for the compute resources i.e. cpu and memory of the database components. This field consists of the following sub-field:
+
+- `spec.compute.coordinators` indicates the desired compute autoscaling configuration for coordinators of a topology Druid database.
+- `spec.compute.overlords` indicates the desired compute autoscaling configuration for overlords of a topology Druid database.
+- `spec.compute.brokers` indicates the desired compute autoscaling configuration for brokers of a topology Druid database.
+- `spec.compute.routers` indicates the desired compute autoscaling configuration for routers of a topology Druid database.
+- `spec.compute.historicals` indicates the desired compute autoscaling configuration for historicals of a topology Druid database.
+- `spec.compute.middleManagers` indicates the desired compute autoscaling configuration for middleManagers of a topology Druid database.
+
+
+All of them have the following sub-fields:
+
+- `trigger` indicates if compute autoscaling is enabled for this component of the database. If "On" then compute autoscaling is enabled. If "Off" then compute autoscaling is disabled.
+- `minAllowed` specifies the minimal amount of resources that will be recommended, default is no minimum.
+- `maxAllowed` specifies the maximum amount of resources that will be recommended, default is no maximum.
+- `controlledResources` specifies which type of compute resources (cpu and memory) are allowed for autoscaling. Allowed values are "cpu" and "memory".
+- `containerControlledValues` specifies which resource values should be controlled. Allowed values are "RequestsAndLimits" and "RequestsOnly".
+- `resourceDiffPercentage` specifies the minimum resource difference between the recommended value and the current value in percentage. If the difference percentage is greater than this value, then autoscaling will be triggered.
+- `podLifeTimeThreshold` specifies the minimum pod lifetime of at least one of the pods before triggering autoscaling.
+
+There are two more fields, which are only specifiable for the percona variant inMemory databases.
+- `inMemoryStorage.UsageThresholdPercentage` If db uses more than usageThresholdPercentage of the total memory, memoryStorage should be increased.
+- `inMemoryStorage.ScalingFactorPercentage` If db uses more than usageThresholdPercentage of the total memory, memoryStorage should be increased by this given scaling percentage.
+
+### spec.storage
+
+`spec.storage` specifies the autoscaling configuration for the storage resources of the database components. This field consists of the following sub-field:
+
+- `spec.storage.historicals` indicates the desired storage autoscaling configuration for historicals of a topology Druid cluster.
+- `spec.storage.middleManagers` indicates the desired storage autoscaling configuration for middleManagers of a topology Druid cluster.
+
+> `spec.storage` is only supported for druid data nodes i.e. `historicals` and `middleManagers` as they are the only nodes containing volumes.
+
+All of them have the following sub-fields:
+
+- `trigger` indicates if storage autoscaling is enabled for this component of the database. If "On" then storage autoscaling is enabled. If "Off" then storage autoscaling is disabled.
+- `usageThreshold` indicates usage percentage threshold, if the current storage usage exceeds then storage autoscaling will be triggered.
+- `scalingThreshold` indicates the percentage of the current storage that will be scaled.
+- `expansionMode` indicates the volume expansion mode.
diff --git a/docs/guides/druid/concepts/druidopsrequest.md b/docs/guides/druid/concepts/druidopsrequest.md
new file mode 100644
index 0000000000..6a846814f2
--- /dev/null
+++ b/docs/guides/druid/concepts/druidopsrequest.md
@@ -0,0 +1,473 @@
+---
+title: DruidOpsRequest CRD
+menu:
+ docs_{{ .version }}:
+ identifier: guides-druid-concepts-druidopsrequest
+ name: DruidOpsRequest
+ parent: guides-druid-concepts
+ weight: 40
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# DruidOpsRequest
+
+## What is DruidOpsRequest
+
+`DruidOpsRequest` is a Kubernetes `Custom Resource Definitions` (CRD). It provides a declarative configuration for [Druid](https://druid.apache.org/) administrative operations like database version updating, horizontal scaling, vertical scaling etc. in a Kubernetes native way.
+
+## DruidOpsRequest CRD Specifications
+
+Like any official Kubernetes resource, a `DruidOpsRequest` has `TypeMeta`, `ObjectMeta`, `Spec` and `Status` sections.
+
+Here, some sample `DruidOpsRequest` CRs for different administrative operations are given below:
+
+**Sample `DruidOpsRequest` for updating database:**
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: DruidOpsRequest
+metadata:
+ name: update-version
+ namespace: demo
+spec:
+ type: UpdateVersion
+ databaseRef:
+ name: druid-prod
+ updateVersion:
+ targetVersion: 30.0.1
+status:
+ conditions:
+ - lastTransitionTime: "2024-07-25T18:22:38Z"
+ message: Successfully completed the modification process
+ observedGeneration: 1
+ reason: Successful
+ status: "True"
+ type: Successful
+ observedGeneration: 1
+ phase: Successful
+```
+
+**Sample `DruidOpsRequest` Objects for Horizontal Scaling of different components of the database:**
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: DruidOpsRequest
+metadata:
+ name: drops-hscale-down
+ namespace: demo
+spec:
+ type: HorizontalScaling
+ databaseRef:
+ name: druid-prod
+ horizontalScaling:
+ topology:
+ coordinators: 2
+ historicals: 2
+status:
+ conditions:
+ - lastTransitionTime: "2024-07-25T18:22:38Z"
+ message: Successfully completed the modification process
+ observedGeneration: 1
+ reason: Successful
+ status: "True"
+ type: Successful
+ observedGeneration: 1
+ phase: Successful
+```
+
+**Sample `DruidOpsRequest` Objects for Vertical Scaling of different components of the database:**
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: DruidOpsRequest
+metadata:
+ name: drops-vscale
+ namespace: demo
+spec:
+ type: VerticalScaling
+ databaseRef:
+ name: druid-prod
+ verticalScaling:
+ coordinators:
+ resources:
+ requests:
+ memory: "1.5Gi"
+ cpu: "0.7"
+ limits:
+ memory: "2Gi"
+ cpu: "1"
+ historicals:
+ resources:
+ requests:
+ memory: "1.5Gi"
+ cpu: "0.7"
+ limits:
+ memory: "2Gi"
+ cpu: "1"
+status:
+ conditions:
+ - lastTransitionTime: "2024-07-25T18:22:38Z"
+ message: Successfully completed the modification process
+ observedGeneration: 1
+ reason: Successful
+ status: "True"
+ type: Successful
+ observedGeneration: 1
+ phase: Successful
+```
+
+**Sample `DruidOpsRequest` Objects for Reconfiguring the Druid cluster:**
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: DruidOpsRequest
+metadata:
+  name: drops-reconfigure
+ namespace: demo
+spec:
+ type: Reconfigure
+ databaseRef:
+ name: druid-prod
+ configuration:
+ applyConfig:
+      middleManagers.properties: |
+ druid.worker.capacity=5
+status:
+ conditions:
+ - lastTransitionTime: "2024-07-25T18:22:38Z"
+ message: Successfully completed the modification process
+ observedGeneration: 1
+ reason: Successful
+ status: "True"
+ type: Successful
+ observedGeneration: 1
+ phase: Successful
+```
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: DruidOpsRequest
+metadata:
+  name: drops-reconfigure
+ namespace: demo
+spec:
+ type: Reconfigure
+ databaseRef:
+ name: druid-prod
+ configuration:
+ configSecret:
+ name: new-configsecret
+status:
+ conditions:
+ - lastTransitionTime: "2024-07-25T18:22:38Z"
+ message: Successfully completed the modification process
+ observedGeneration: 1
+ reason: Successful
+ status: "True"
+ type: Successful
+ observedGeneration: 1
+ phase: Successful
+```
+
+**Sample `DruidOpsRequest` Objects for Volume Expansion of different database components:**
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: DruidOpsRequest
+metadata:
+ name: drops-volume-exp
+ namespace: demo
+spec:
+ type: VolumeExpansion
+ databaseRef:
+ name: druid-prod
+ volumeExpansion:
+ mode: "Online"
+ historicals: 2Gi
+ middleManagers: 2Gi
+status:
+ conditions:
+ - lastTransitionTime: "2024-07-25T18:22:38Z"
+ message: Successfully completed the modification process
+ observedGeneration: 1
+ reason: Successful
+ status: "True"
+ type: Successful
+ observedGeneration: 1
+ phase: Successful
+```
+
+**Sample `DruidOpsRequest` Objects for Reconfiguring TLS of the database:**
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: DruidOpsRequest
+metadata:
+ name: drops-add-tls
+ namespace: demo
+spec:
+ type: ReconfigureTLS
+ databaseRef:
+ name: druid-prod
+ tls:
+ issuerRef:
+ name: dr-issuer
+ kind: Issuer
+ apiGroup: "cert-manager.io"
+ certificates:
+ - alias: client
+ emailAddresses:
+ - abc@appscode.com
+```
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: DruidOpsRequest
+metadata:
+ name: drops-rotate
+ namespace: demo
+spec:
+ type: ReconfigureTLS
+ databaseRef:
+ name: druid-dev
+ tls:
+ rotateCertificates: true
+```
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: DruidOpsRequest
+metadata:
+ name: drops-change-issuer
+ namespace: demo
+spec:
+ type: ReconfigureTLS
+ databaseRef:
+ name: druid-prod
+ tls:
+ issuerRef:
+ name: dr-new-issuer
+ kind: Issuer
+ apiGroup: "cert-manager.io"
+```
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: DruidOpsRequest
+metadata:
+ name: drops-remove
+ namespace: demo
+spec:
+ type: ReconfigureTLS
+ databaseRef:
+ name: druid-prod
+ tls:
+ remove: true
+```
+
+Here, we are going to describe the various sections of a `DruidOpsRequest` crd.
+
+A `DruidOpsRequest` object has the following fields in the `spec` section.
+
+### spec.databaseRef
+
+`spec.databaseRef` is a required field that points to the [Druid](/docs/guides/druid/concepts/druid.md) object for which the administrative operations will be performed. This field consists of the following sub-field:
+
+- **spec.databaseRef.name :** specifies the name of the [Druid](/docs/guides/druid/concepts/druid.md) object.
+
+### spec.type
+
+`spec.type` specifies the kind of operation that will be applied to the database. Currently, the following types of operations are allowed in `DruidOpsRequest`.
+
+- `UpdateVersion`
+- `HorizontalScaling`
+- `VerticalScaling`
+- `VolumeExpansion`
+- `Reconfigure`
+- `ReconfigureTLS`
+- `Restart`
+
+> You can perform only one type of operation on a single `DruidOpsRequest` CR. For example, if you want to update your database and scale up its replica then you have to create two separate `DruidOpsRequest`. At first, you have to create a `DruidOpsRequest` for updating. Once it is completed, then you can create another `DruidOpsRequest` for scaling.
+
+### spec.updateVersion
+
+If you want to update your Druid version, you have to specify the `spec.updateVersion` section that specifies the desired version information. This field consists of the following sub-field:
+
+- `spec.updateVersion.targetVersion` refers to a [DruidVersion](/docs/guides/druid/concepts/druidversion.md) CR that contains the Druid version information where you want to update.
+
+> You can only update between Druid versions. KubeDB does not support downgrade for Druid.
+
+### spec.horizontalScaling
+
+If you want to scale-up or scale-down your Druid cluster or different components of it, you have to specify `spec.horizontalScaling` section. This field consists of the following sub-field:
+
+- `spec.horizontalScaling.topology` indicates the configuration of topology nodes for Druid topology cluster after scaling. This field consists of the following sub-field:
+  - `spec.horizontalScaling.topology.coordinators` indicates the desired number of coordinators nodes for Druid topology cluster after scaling.
+ - `spec.horizontalScaling.topology.overlords` indicates the desired number of overlords nodes for Druid topology cluster after scaling.
+ - `spec.horizontalScaling.topology.brokers` indicates the desired number of brokers nodes for Druid topology cluster after scaling.
+ - `spec.horizontalScaling.topology.routers` indicates the desired number of routers nodes for Druid topology cluster after scaling.
+ - `spec.horizontalScaling.topology.historicals` indicates the desired number of historicals nodes for Druid topology cluster after scaling.
+ - `spec.horizontalScaling.topology.middleManagers` indicates the desired number of middleManagers nodes for Druid topology cluster after scaling.
+
+### spec.verticalScaling
+
+`spec.verticalScaling` is a required field specifying the information of `Druid` resources like `cpu`, `memory` etc that will be scaled. This field consists of the following sub-fields:
+- `spec.verticalScaling.coordinators` indicates the desired resources for coordinators of Druid topology cluster after scaling.
+- `spec.verticalScaling.overlords` indicates the desired resources for overlords of Druid topology cluster after scaling.
+- `spec.verticalScaling.brokers` indicates the desired resources for brokers of Druid topology cluster after scaling.
+- `spec.verticalScaling.routers` indicates the desired resources for routers of Druid topology cluster after scaling.
+- `spec.verticalScaling.historicals` indicates the desired resources for historicals of Druid topology cluster after scaling.
+- `spec.verticalScaling.middleManagers` indicates the desired resources for middleManagers of Druid topology cluster after scaling.
+
+All of them have the below structure:
+
+```yaml
+requests:
+ memory: "200Mi"
+ cpu: "0.1"
+limits:
+ memory: "300Mi"
+ cpu: "0.2"
+```
+
+Here, when you specify the resource request, the scheduler uses this information to decide which node to place the container of the Pod on and when you specify a resource limit for the container, the `kubelet` enforces those limits so that the running container is not allowed to use more of that resource than the limit you set. You can find more details [here](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/).
+
+### spec.volumeExpansion
+
+> To use the volume expansion feature the storage class must support volume expansion
+
+If you want to expand the volume of your Druid cluster or different components of it, you have to specify `spec.volumeExpansion` section. This field consists of the following sub-field:
+
+- `spec.volumeExpansion.mode` specifies the volume expansion mode. Supported values are `Online` & `Offline`. The default is `Online`.
+- `spec.volumeExpansion.historicals` indicates the desired size for the persistent volume for historicals of a Druid topology cluster.
+- `spec.volumeExpansion.middleManagers` indicates the desired size for the persistent volume for middleManagers of a Druid topology cluster.
+
+> It is only possible to expand the data servers i.e. `historicals` and `middleManagers`, as only they come with persistent volumes.
+
+All of them refer to [Quantity](https://v1-22.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.22/#quantity-resource-core) types of Kubernetes.
+
+Example usage of this field is given below:
+
+```yaml
+spec:
+  volumeExpansion:
+    mode: "Online"
+    historicals: 2Gi
+    middleManagers: 2Gi
+```
+
+This will expand the volume size of the `historicals` and `middleManagers` nodes to 2 GiB.
+
+### spec.configuration
+
+If you want to reconfigure your Running Druid cluster or different components of it with new custom configuration, you have to specify `spec.configuration` section. This field consists of the following sub-field:
+
+- `spec.configuration.configSecret` points to a secret in the same namespace of a Druid resource, which contains the new custom configurations. If there is any configSecret set before in the database, this secret will replace it. The value of the `stringData` field of the secret is like below:
+```yaml
+common.runtime.properties: |
+ druid.storage.archiveBucket="my-druid-archive-bucket"
+middleManagers.properties: |
+ druid.worker.capacity=5
+```
+> Similarly, it is possible to provide configs for `coordinators`, `overlords`, `brokers`, `routers` and `historicals` through `coordinators.properties`, `overlords.properties`, `brokers.properties`, `routers.properties` and `historicals.properties` respectively.
+
+- `applyConfig` contains the new custom config as a string which will be merged with the previous configuration.
+
+- `applyConfig` is a map where the key is the name of the node-specific config file, namely `common.runtime.properties`, `coordinators.properties`, `overlords.properties`, `brokers.properties`, `routers.properties`, `historicals.properties` or `middleManagers.properties`, and the value represents the corresponding configuration.
+
+```yaml
+ applyConfig:
+ common.runtime.properties: |
+ druid.storage.archiveBucket="my-druid-archive-bucket"
+ middleManagers.properties: |
+ druid.worker.capacity=5
+```
+
+- `removeCustomConfig` is a boolean field. Specify this field to true if you want to remove all the custom configuration from the deployed druid cluster.
+
+### spec.tls
+
+If you want to reconfigure the TLS configuration of your Druid i.e. add TLS, remove TLS, update issuer/cluster issuer or Certificates and rotate the certificates, you have to specify `spec.tls` section. This field consists of the following sub-field:
+
+- `spec.tls.issuerRef` specifies the issuer name, kind and api group.
+- `spec.tls.certificates` specifies the certificates. You can learn more about this field from [here](/docs/guides/druid/concepts/druid.md#spectls).
+- `spec.tls.rotateCertificates` specifies that we want to rotate the certificate of this druid.
+- `spec.tls.remove` specifies that we want to remove tls from this druid.
+
+### spec.timeout
+As we internally retry the ops request steps multiple times, this `timeout` field helps the users to specify the timeout for those steps of the ops request (in seconds).
+If a step doesn't finish within the specified timeout, the ops request will result in failure.
+
+### spec.apply
+This field controls the execution of opsRequest depending on the database state. It has two supported values: `Always` & `IfReady`.
+Use `IfReady` if you want to process the opsRequest only when the database is `Ready`. And use `Always` if you want to process the execution of the opsRequest irrespective of the database state.
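+
+A minimal sketch combining these two fields on a `Restart` request is given below; the object name is illustrative, and the duration format follows the `opsRequestOptions.timeout` value shown in the autoscaler examples:
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: DruidOpsRequest
+metadata:
+  name: drops-restart        # illustrative name
+  namespace: demo
+spec:
+  type: Restart
+  databaseRef:
+    name: druid-prod
+  timeout: 5m
+  apply: IfReady
+```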
+
+### DruidOpsRequest `Status`
+
+`.status` describes the current state and progress of a `DruidOpsRequest` operation. It has the following fields:
+
+### status.phase
+
+`status.phase` indicates the overall phase of the operation for this `DruidOpsRequest`. It can have the following values:
+
+| Phase | Meaning |
+|-------------|----------------------------------------------------------------------------------|
+| Successful | KubeDB has successfully performed the operation requested in the DruidOpsRequest |
+| Progressing | KubeDB has started the execution of the applied DruidOpsRequest |
+| Failed | KubeDB has failed the operation requested in the DruidOpsRequest |
+| Denied | KubeDB has denied the operation requested in the DruidOpsRequest |
+| Skipped | KubeDB has skipped the operation requested in the DruidOpsRequest |
+
+Important: Ops-manager operator can skip an opsRequest only if its execution has not been started yet and there is a newer opsRequest applied in the cluster. In this case, the `spec.type` of the newer request has to be the same as the skipped one.
+
+### status.observedGeneration
+
+`status.observedGeneration` shows the most recent generation observed by the `DruidOpsRequest` controller.
+
+### status.conditions
+
+`status.conditions` is an array that specifies the conditions of different steps of `DruidOpsRequest` processing. Each condition entry has the following fields:
+
+- `type` specifies the type of the condition. DruidOpsRequest has the following types of conditions:
+
+| Type | Meaning |
+|-------------------------------|---------------------------------------------------------------------------|
+| `Progressing` | Specifies that the operation is now in the progressing state |
+| `Successful` | Specifies such a state that the operation on the database was successful. |
+| `HaltDatabase` | Specifies such a state that the database is halted by the operator |
+| `ResumeDatabase` | Specifies such a state that the database is resumed by the operator |
+| `Failed` | Specifies such a state that the operation on the database failed. |
+| `StartingBalancer` | Specifies such a state that the balancer has successfully started |
+| `StoppingBalancer` | Specifies such a state that the balancer has successfully stopped |
+| `UpdateShardImage` | Specifies such a state that the Shard Images has been updated |
+| `UpdateReplicaSetImage` | Specifies such a state that the Replicaset Image has been updated |
+| `UpdateConfigServerImage` | Specifies such a state that the ConfigServer Image has been updated |
+| `UpdateMongosImage` | Specifies such a state that the Mongos Image has been updated |
+| `UpdatePetSetResources` | Specifies such a state that the Petset resources has been updated |
+| `UpdateShardResources` | Specifies such a state that the Shard resources has been updated |
+| `UpdateReplicaSetResources` | Specifies such a state that the Replicaset resources has been updated |
+| `UpdateConfigServerResources` | Specifies such a state that the ConfigServer resources has been updated |
+| `UpdateMongosResources` | Specifies such a state that the Mongos resources has been updated |
+| `ScaleDownReplicaSet` | Specifies such a state that the scale down operation of replicaset |
+| `ScaleUpReplicaSet` | Specifies such a state that the scale up operation of replicaset |
+| `ScaleUpShardReplicas` | Specifies such a state that the scale up operation of shard replicas |
+| `ScaleDownShardReplicas` | Specifies such a state that the scale down operation of shard replicas |
+| `ScaleDownConfigServer` | Specifies such a state that the scale down operation of config server |
+| `ScaleUpConfigServer` | Specifies such a state that the scale up operation of config server |
+| `ScaleMongos` | Specifies such a state that the scale down operation of replicaset |
+| `VolumeExpansion` | Specifies such a state that the volume expansion operaton of the database |
+| `ReconfigureReplicaset` | Specifies such a state that the reconfiguration of replicaset nodes |
+| `ReconfigureMongos` | Specifies such a state that the reconfiguration of mongos nodes |
+| `ReconfigureShard` | Specifies such a state that the reconfiguration of shard nodes |
+| `ReconfigureConfigServer` | Specifies such a state that the reconfiguration of config server nodes |
+
+- The `status` field is a string, with possible values `True`, `False`, and `Unknown`.
+ - `status` will be `True` if the current transition succeeded.
+ - `status` will be `False` if the current transition failed.
+ - `status` will be `Unknown` if the current transition was denied.
+- The `message` field is a human-readable message indicating details about the condition.
+- The `reason` field is a unique, one-word, CamelCase reason for the condition's last transition.
+- The `lastTransitionTime` field provides a timestamp for when the operation last transitioned from one state to another.
+- The `observedGeneration` shows the most recent condition transition generation observed by the controller.
diff --git a/docs/guides/druid/concepts/druidversion.md b/docs/guides/druid/concepts/druidversion.md
new file mode 100644
index 0000000000..74bd7e76f2
--- /dev/null
+++ b/docs/guides/druid/concepts/druidversion.md
@@ -0,0 +1,102 @@
+---
+title: DruidVersion CRD
+menu:
+ docs_{{ .version }}:
+ identifier: guides-druid-concepts-druidversion
+ name: DruidVersion
+ parent: guides-druid-concepts
+ weight: 30
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# DruidVersion
+
+## What is DruidVersion
+
+`DruidVersion` is a Kubernetes `Custom Resource Definitions` (CRD). It provides a declarative configuration to specify the docker images to be used for [Druid](https://druid.apache.org) database deployed with KubeDB in a Kubernetes native way.
+
+When you install KubeDB, a `DruidVersion` custom resource will be created automatically for every supported Druid version. You have to specify the name of `DruidVersion` CR in `spec.version` field of [Druid](/docs/guides/druid/concepts/druid.md) crd. Then, KubeDB will use the docker images specified in the `DruidVersion` CR to create your expected database.
+
+Using a separate CRD for specifying respective docker images and pod security policy names allows us to modify the images and policies independent of the KubeDB operator. This also allows the users to use a custom image for the database.
+
+## DruidVersion Spec
+
+As with all other Kubernetes objects, a DruidVersion needs `apiVersion`, `kind`, and `metadata` fields. It also needs a `.spec` section.
+
+```yaml
+apiVersion: catalog.kubedb.com/v1alpha1
+kind: DruidVersion
+metadata:
+ annotations:
+ meta.helm.sh/release-name: kubedb
+ meta.helm.sh/release-namespace: kubedb
+ creationTimestamp: "2024-10-16T13:10:10Z"
+ generation: 1
+ labels:
+ app.kubernetes.io/instance: kubedb
+ app.kubernetes.io/managed-by: Helm
+ app.kubernetes.io/name: kubedb-catalog
+ app.kubernetes.io/version: v2024.9.30
+ helm.sh/chart: kubedb-catalog-v2024.9.30
+ name: 28.0.1
+ resourceVersion: "42125"
+ uid: e30e23aa-febc-4029-8be7-993afaff1fc6
+spec:
+ db:
+ image: ghcr.io/appscode-images/druid:28.0.1
+ initContainer:
+ image: ghcr.io/kubedb/druid-init:28.0.1
+ securityContext:
+ runAsUser: 1000
+ version: 28.0.1
+```
+
+### metadata.name
+
+`metadata.name` is a required field that specifies the name of the `DruidVersion` CR. You have to specify this name in `spec.version` field of [Druid](/docs/guides/druid/concepts/druid.md) CR.
+
+We follow this convention for naming DruidVersion CR:
+
+- Name format: `{Original Druid image version}-{modification tag}`
+
+We use the official Apache Druid release tar files to build docker images for the supported Druid versions and re-tag the images with a v1, v2 etc. modification tag when there is any. An image with a higher modification tag will have more features than an image with a lower modification tag. Hence, it is recommended to use the DruidVersion CR with the highest modification tag to enjoy the latest features.
+
+### spec.version
+
+`spec.version` is a required field that specifies the original version of Druid database that has been used to build the docker image specified in `spec.db.image` field.
+
+### spec.deprecated
+
+`spec.deprecated` is an optional field that specifies whether the docker images specified here are supported by the current KubeDB operator.
+
+The default value of this field is `false`. If `spec.deprecated` is set to `true`, KubeDB operator will skip processing this CRD object and will add an event to the CRD object specifying that the DB version is deprecated.
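+
+A hedged sketch of how a deprecated catalog entry would look (only the relevant field is shown):
+
+```yaml
+spec:
+  deprecated: true
+```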
+
+### spec.db.image
+
+`spec.db.image` is a required field that specifies the docker image which will be used to create the PetSet by the KubeDB operator to create the expected Druid database.
+
+### spec.initContainer.image
+
+`spec.initContainer.image` is a required field that specifies the image which will be used to remove `lost+found` directory and mount an `EmptyDir` data volume.
+
+### spec.podSecurityPolicies.databasePolicyName
+
+`spec.podSecurityPolicies.databasePolicyName` is a required field that specifies the name of the pod security policy required to get the database server pod(s) running. If you use custom pod security policies, you can pass them to the KubeDB installation through helm values, as shown in the command below:
+
+```bash
+helm upgrade -i kubedb oci://ghcr.io/appscode-charts/kubedb \
+ --namespace kubedb --create-namespace \
+ --set global.featureGates.Druid=true --set global.featureGates.ZooKeeper=true \
+ --set additionalPodSecurityPolicies[0]=custom-db-policy \
+ --set additionalPodSecurityPolicies[1]=custom-snapshotter-policy \
+ --set-file global.license=/path/to/the/license.txt \
+ --wait --burst-limit=10000 --debug
+```
+
+## Next Steps
+
+- Learn about Druid CRD [here](/docs/guides/druid/concepts/druid.md).
+- Deploy your first Druid database with KubeDB by following the guide [here](/docs/guides/druid/quickstart/guide/index.md).
diff --git a/docs/guides/druid/configuration/_index.md b/docs/guides/druid/configuration/_index.md
new file mode 100644
index 0000000000..4aaebde0ed
--- /dev/null
+++ b/docs/guides/druid/configuration/_index.md
@@ -0,0 +1,10 @@
+---
+title: Run Druid with Custom Configuration
+menu:
+ docs_{{ .version }}:
+ identifier: guides-druid-configuration
+ name: Custom Configuration
+ parent: guides-druid
+ weight: 40
+menu_name: docs_{{ .version }}
+---
diff --git a/docs/guides/druid/configuration/config-file/images/druid-updated-ui.png b/docs/guides/druid/configuration/config-file/images/druid-updated-ui.png
new file mode 100644
index 0000000000..4e53f6bb00
Binary files /dev/null and b/docs/guides/druid/configuration/config-file/images/druid-updated-ui.png differ
diff --git a/docs/guides/druid/configuration/config-file/index.md b/docs/guides/druid/configuration/config-file/index.md
new file mode 100644
index 0000000000..dca8653758
--- /dev/null
+++ b/docs/guides/druid/configuration/config-file/index.md
@@ -0,0 +1,284 @@
+---
+title: Configuring Druid Cluster
+menu:
+ docs_{{ .version }}:
+ identifier: guides-druid-configuration-config-file
+ name: Configuration File
+ parent: guides-druid-configuration
+ weight: 10
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Configure Druid Cluster
+
+A Druid cluster consists of six types of nodes: coordinators, overlords, brokers, routers, historicals, and middleManagers. In this tutorial, we will see how to configure each node of a Druid cluster.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/setup/README.md). Make sure to include the flags `--set global.featureGates.Druid=true` to enable the **Druid CRD** and `--set global.featureGates.ZooKeeper=true` to enable the **ZooKeeper CRD** (Druid depends on ZooKeeper as an external dependency) in the helm command.
+
+- To keep things isolated, this tutorial uses a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create namespace demo
+namespace/demo created
+
+$ kubectl get namespace
+NAME STATUS AGE
+demo Active 9s
+```
+
+> Note: YAML files used in this tutorial are stored in [here](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/druid/configuration/yamls) in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Find Available StorageClass
+
+We will have to provide `StorageClass` in Druid CR specification. Check available `StorageClass` in your cluster using the following command,
+
+```bash
+$ kubectl get storageclass
+NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
+standard (default) rancher.io/local-path Delete WaitForFirstConsumer false 1h
+```
+
+Here, we have `standard` StorageClass in our cluster from [Local Path Provisioner](https://github.com/rancher/local-path-provisioner).
+
+Before deploying `Druid` cluster, we need to prepare the external dependencies.
+
+## Create External Dependency (Deep Storage)
+
+Before proceeding further, we need to prepare deep storage, which is one of the external dependencies of Druid and is used for storing the segments. It is a storage mechanism that Apache Druid does not provide. **Amazon S3**, **Google Cloud Storage**, **Azure Blob Storage**, **S3-compatible storage** (like **MinIO**), or **HDFS** are generally convenient options for deep storage.
+
+In this tutorial, we will run a `minio-server` as deep storage in our local `kind` cluster using `minio-operator` and create a bucket named `druid` in it, which the deployed druid database will use.
+
+```bash
+
+$ helm repo add minio https://operator.min.io/
+$ helm repo update minio
+$ helm upgrade --install --namespace "minio-operator" --create-namespace "minio-operator" minio/operator --set operator.replicaCount=1
+
+$ helm upgrade --install --namespace "demo" --create-namespace druid-minio minio/tenant \
+--set tenant.pools[0].servers=1 \
+--set tenant.pools[0].volumesPerServer=1 \
+--set tenant.pools[0].size=1Gi \
+--set tenant.certificate.requestAutoCert=false \
+--set tenant.buckets[0].name="druid" \
+--set tenant.pools[0].name="default"
+
+```
+
+Now we need to create a `Secret` named `deep-storage-config`. It contains the necessary connection information using which the druid database will connect to the deep storage.
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+ name: deep-storage-config
+ namespace: demo
+stringData:
+ druid.storage.type: "s3"
+ druid.storage.bucket: "druid"
+ druid.storage.baseKey: "druid/segments"
+ druid.s3.accessKey: "minio"
+ druid.s3.secretKey: "minio123"
+ druid.s3.protocol: "http"
+ druid.s3.enablePathStyleAccess: "true"
+ druid.s3.endpoint.signingRegion: "us-east-1"
+ druid.s3.endpoint.url: "http://myminio-hl.demo.svc.cluster.local:9000/"
+```
+
+Let’s create the `deep-storage-config` Secret shown above:
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/druid/backup/application-level/examples/deep-storage-config.yaml
+secret/deep-storage-config created
+```
+
+## Use Custom Configuration
+
+Say we want to change the default maximum number of tasks the MiddleManager can accept. Let's create the `middleManagers.properties` file with our desired configuration.
+
+**middleManagers.properties:**
+
+```properties
+druid.worker.capacity=5
+```
+
+We also want to change the number of processing threads available for parallel processing of segments on the historicals nodes. Let's create the `historicals.properties` file with our desired configuration.
+
+**historicals.properties:**
+
+```properties
+druid.processing.numThreads=3
+```
+
+Let's create a k8s secret containing the above configuration where the file name is the key and the file content is the value:
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  name: config-secret
+ namespace: demo
+stringData:
+ middleManagers.properties: |-
+ druid.worker.capacity=5
+ historicals.properties: |-
+ druid.processing.numThreads=3
+```
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/druid/configuration/config-file/yamls/config-secret.yaml
+secret/config-secret created
+```
+
+> To provide custom configuration for other nodes add values for the following `key` under `stringData`:
+> - Use `common.runtime.properties` for common configurations
+> - Use `coordinators.properties` for configurations of coordinators
+> - Use `overlords.properties` for configurations of overlords
+> - Use `brokers.properties` for configurations of brokers
+> - Use `routers.properties` for configurations of routers
+
+Now that the config secret is created, it needs to be mentioned in the [Druid](/docs/guides/druid/concepts/druid.md) object's yaml:
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Druid
+metadata:
+ name: druid-with-config
+ namespace: demo
+spec:
+ version: 28.0.1
+ configSecret:
+ name: config-secret
+ deepStorage:
+ type: s3
+ configSecret:
+ name: deep-storage-config
+ topology:
+ routers:
+ replicas: 1
+ deletionPolicy: WipeOut
+```
+
+Now, create the Druid object by the following command:
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/druid/configuration/config-file/yamls/druid-with-config.yaml
+druid.kubedb.com/druid-with-config created
+```
+
+Now, wait for the Druid to become ready:
+
+```bash
+$ kubectl get dr -n demo -w
+NAME TYPE VERSION STATUS AGE
+druid-with-config   kubedb.com/v1alpha2   28.0.1    Provisioning   5s
+druid-with-config   kubedb.com/v1alpha2   28.0.1    Provisioning   7s
+.
+.
+druid-with-config   kubedb.com/v1alpha2   28.0.1    Ready          2m
+
+## Verify Configuration
+
+Lets exec into one of the druid middleManagers pod that we have created and check the configurations are applied or not:
+
+Exec into the Druid middleManagers:
+
+```bash
+$ kubectl exec -it -n demo druid-with-config-middleManagers-0 -- bash
+Defaulted container "druid" out of: druid, init-druid (init)
+bash-5.1$
+```
+
+Now, execute the following commands to see the configurations:
+```bash
+bash-5.1$ cat conf/druid/cluster/data/middleManager/runtime.properties | grep druid.worker.capacity
+druid.worker.capacity=5
+```
+Here, we can see that our given configuration is applied to the Druid cluster for all middleManagers.
+
+Now, lets exec into one of the druid historicals pod that we have created and check the configurations are applied or not:
+
+Exec into the Druid historicals:
+
+```bash
+$ kubectl exec -it -n demo druid-with-config-historicals-0 -- bash
+Defaulted container "druid" out of: druid, init-druid (init)
+bash-5.1$
+```
+
+Now, execute the following commands to see the configurations:
+```bash
+bash-5.1$ cat conf/druid/cluster/data/historical/runtime.properties | grep druid.processing.numThreads
+druid.processing.numThreads=3
+```
+
+Here, we can see that our given configuration is applied to the historicals.
+
+### Verify Configuration Change from Druid UI
+You can also see the configuration changes from the druid ui. For that, follow the following steps:
+
+First port-forward the port `8888` to local machine:
+
+```bash
+$ kubectl port-forward -n demo svc/druid-with-config-routers 8888
+Forwarding from 127.0.0.1:8888 -> 8888
+Forwarding from [::1]:8888 -> 8888
+```
+
+
+Now hit `http://localhost:8888` from any browser, and you will be prompted to provide the credentials of the druid database. By following the steps discussed below, you can get the credentials generated by the KubeDB operator for your Druid database.
+
+**Connection information:**
+
+- Username:
+
+ ```bash
+ $ kubectl get secret -n demo druid-with-config-admin-cred -o jsonpath='{.data.username}' | base64 -d
+ admin
+ ```
+
+- Password:
+
+ ```bash
+ $ kubectl get secret -n demo druid-with-config-admin-cred -o jsonpath='{.data.password}' | base64 -d
+ LzJtVRX5E8MorFaf
+ ```
+
+After providing the credentials correctly, you should be able to access the web console like shown below.
+
+![druid-updated-ui](/docs/guides/druid/configuration/config-file/images/druid-updated-ui.png)
+
+You can see that there are 5 task slots, reflecting our provided custom configuration of `druid.worker.capacity=5`.
+
+## Cleanup
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+$ kubectl delete dr -n demo druid-with-config
+
+$ kubectl delete secret -n demo config-secret
+
+$ kubectl delete namespace demo
+```
+
+## Next Steps
+
+- Detail concepts of [Druid object](/docs/guides/druid/concepts/druid.md).
+- Different Druid topology clustering modes [here](/docs/guides/druid/clustering/_index.md).
+
+[//]: # (- Monitor your Druid database with KubeDB using [out-of-the-box Prometheus operator](/docs/guides/druid/monitoring/using-prometheus-operator.md).)
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).
+
diff --git a/docs/guides/druid/configuration/config-file/yamls/config-secret.yaml b/docs/guides/druid/configuration/config-file/yamls/config-secret.yaml
new file mode 100644
index 0000000000..6067ee7dd2
--- /dev/null
+++ b/docs/guides/druid/configuration/config-file/yamls/config-secret.yaml
@@ -0,0 +1,10 @@
+apiVersion: v1
+kind: Secret
+metadata:
+  name: config-secret
+ namespace: demo
+stringData:
+ middleManagers.properties: |-
+ druid.worker.capacity=5
+ historicals.properties: |-
+ druid.processing.numThreads=3
diff --git a/docs/guides/druid/configuration/config-file/yamls/deep-storage-config.yaml b/docs/guides/druid/configuration/config-file/yamls/deep-storage-config.yaml
new file mode 100644
index 0000000000..3612595828
--- /dev/null
+++ b/docs/guides/druid/configuration/config-file/yamls/deep-storage-config.yaml
@@ -0,0 +1,16 @@
+apiVersion: v1
+kind: Secret
+metadata:
+ name: deep-storage-config
+ namespace: demo
+stringData:
+ druid.storage.type: "s3"
+ druid.storage.bucket: "druid"
+ druid.storage.baseKey: "druid/segments"
+ druid.s3.accessKey: "minio"
+ druid.s3.secretKey: "minio123"
+ druid.s3.protocol: "http"
+ druid.s3.enablePathStyleAccess: "true"
+ druid.s3.endpoint.signingRegion: "us-east-1"
+ druid.s3.endpoint.url: "http://myminio-hl.demo.svc.cluster.local:9000/"
+
diff --git a/docs/guides/druid/configuration/config-file/yamls/druid-with-config.yaml b/docs/guides/druid/configuration/config-file/yamls/druid-with-config.yaml
new file mode 100644
index 0000000000..b2225f22b1
--- /dev/null
+++ b/docs/guides/druid/configuration/config-file/yamls/druid-with-config.yaml
@@ -0,0 +1,17 @@
+apiVersion: kubedb.com/v1alpha2
+kind: Druid
+metadata:
+ name: druid-with-config
+ namespace: demo
+spec:
+ version: 28.0.1
+ configSecret:
+ name: config-secret
+ deepStorage:
+ type: s3
+ configSecret:
+ name: deep-storage-config
+ topology:
+ routers:
+ replicas: 1
+ deletionPolicy: WipeOut
diff --git a/docs/guides/druid/configuration/podtemplating/index.md b/docs/guides/druid/configuration/podtemplating/index.md
new file mode 100644
index 0000000000..89c4ae2c68
--- /dev/null
+++ b/docs/guides/druid/configuration/podtemplating/index.md
@@ -0,0 +1,618 @@
+---
+title: Run Druid with Custom PodTemplate
+menu:
+ docs_{{ .version }}:
+ identifier: guides-druid-configuration-podtemplating
+ name: Customize PodTemplate
+ parent: guides-druid-configuration
+ weight: 15
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Run Druid with Custom PodTemplate
+
+KubeDB supports providing custom configuration for Druid via [PodTemplate](/docs/guides/druid/concepts/druid.md#spec.topology). This tutorial will show you how to use KubeDB to run a Druid database with custom configuration using PodTemplate.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/setup/README.md).
+
+- To keep things isolated, this tutorial uses a separate namespace called `demo` throughout this tutorial.
+
+ ```bash
+ $ kubectl create ns demo
+ namespace/demo created
+ ```
+
+> Note: YAML files used in this tutorial are stored in [docs/guides/druid/configuration/podtemplating/yamls](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/druid/configuration/podtemplating/yamls) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Overview
+
+KubeDB allows providing a template for all the Druid pods through `spec.topology.<node>.podTemplate`. KubeDB operator will pass the information provided in `spec.topology.<node>.podTemplate` to the corresponding PetSet created for the Druid database.
+
+KubeDB accepts the following fields to set in `spec.topology.<node>.podTemplate`:
+
+- metadata:
+ - annotations (pod's annotation)
+ - labels (pod's labels)
+- controller:
+ - annotations (statefulset's annotation)
+ - labels (statefulset's labels)
+- spec:
+ - volumes
+ - initContainers
+ - containers
+ - imagePullSecrets
+ - nodeSelector
+ - affinity
+ - serviceAccountName
+ - schedulerName
+ - tolerations
+ - priorityClassName
+ - priority
+ - securityContext
+ - livenessProbe
+ - readinessProbe
+ - lifecycle
+
+Read about the fields in detail in the [PodTemplate concept](/docs/guides/druid/concepts/druid.md#spectopology).
+
+
+## Create External Dependency (Deep Storage)
+
+Before proceeding further, we need to prepare deep storage, which is one of the external dependencies of Druid and is used for storing the segments. It is a storage mechanism that Apache Druid does not provide. **Amazon S3**, **Google Cloud Storage**, **Azure Blob Storage**, **S3-compatible storage** (like **MinIO**), or **HDFS** are generally convenient options for deep storage.
+
+In this tutorial, we will run a `minio-server` as deep storage in our local `kind` cluster using `minio-operator` and create a bucket named `druid` in it, which the deployed druid database will use.
+
+```bash
+
+$ helm repo add minio https://operator.min.io/
+$ helm repo update minio
+$ helm upgrade --install --namespace "minio-operator" --create-namespace "minio-operator" minio/operator --set operator.replicaCount=1
+
+$ helm upgrade --install --namespace "demo" --create-namespace druid-minio minio/tenant \
+--set tenant.pools[0].servers=1 \
+--set tenant.pools[0].volumesPerServer=1 \
+--set tenant.pools[0].size=1Gi \
+--set tenant.certificate.requestAutoCert=false \
+--set tenant.buckets[0].name="druid" \
+--set tenant.pools[0].name="default"
+
+```
+
+Now we need to create a `Secret` named `deep-storage-config`. It contains the necessary connection information using which the druid database will connect to the deep storage.
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+ name: deep-storage-config
+ namespace: demo
+stringData:
+ druid.storage.type: "s3"
+ druid.storage.bucket: "druid"
+ druid.storage.baseKey: "druid/segments"
+ druid.s3.accessKey: "minio"
+ druid.s3.secretKey: "minio123"
+ druid.s3.protocol: "http"
+ druid.s3.enablePathStyleAccess: "true"
+ druid.s3.endpoint.signingRegion: "us-east-1"
+ druid.s3.endpoint.url: "http://myminio-hl.demo.svc.cluster.local:9000/"
+```
+
+Let’s create the `deep-storage-config` Secret shown above:
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/druid/backup/application-level/examples/deep-storage-config.yaml
+secret/deep-storage-config created
+```
+
+## CRD Configuration
+
+Below is the YAML for the Druid created in this example. Here, `spec.topology.<node>.podTemplate.spec.containers[].resources` provides custom compute resources for the `coordinators` and `brokers` containers.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Druid
+metadata:
+ name: druid-cluster
+ namespace: demo
+spec:
+ version: 28.0.1
+ configSecret:
+ name: config-secret
+ deepStorage:
+ type: s3
+ configSecret:
+ name: deep-storage-config
+ topology:
+ coordinators:
+ replicas: 1
+ podTemplate:
+ spec:
+ containers:
+ - name: druid
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "600m"
+ requests:
+ memory: "2Gi"
+ cpu: "600m"
+ brokers:
+ replicas: 1
+ podTemplate:
+ spec:
+ containers:
+ - name: druid
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "600m"
+ requests:
+ memory: "2Gi"
+ cpu: "600m"
+ routers:
+ replicas: 1
+ deletionPolicy: WipeOut
+```
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/druid/configuration/podtemplating/yamls/druid-cluster.yaml
+druid.kubedb.com/druid-cluster created
+```
+
+Now, wait a few minutes. KubeDB operator will create necessary PVC, petset, services, secret etc. If everything goes well, we will see that `druid-cluster` is in `Ready` state.
+```bash
+$ kubectl get druid -n demo
+NAME TYPE VERSION STATUS AGE
+druid-cluster kubedb.com/v1alpha2 28.0.1 Ready 6m5s
+```
+
+Check that the petset's pods are running
+
+```bash
+$ kubectl get pods -n demo -l app.kubernetes.io/instance=druid-cluster
+NAME READY STATUS RESTARTS AGE
+druid-cluster-brokers-0 1/1 Running 0 7m2s
+druid-cluster-coordinators-0 1/1 Running 0 7m9s
+druid-cluster-historicals-0 1/1 Running 0 7m7s
+druid-cluster-middlemanagers-0 1/1 Running 0 7m5s
+druid-cluster-routers-0 1/1 Running 0 7m
+```
+
+Now, we will check if the database has started with the custom configuration we have provided.
+
+```bash
+$ kubectl get pod -n demo druid-cluster-coordinators-0 -o json | jq '.spec.containers[].resources'
+{
+ "limits": {
+ "cpu": "600m",
+ "memory": "2Gi"
+ },
+ "requests": {
+ "cpu": "600m",
+ "memory": "2Gi"
+ }
+}
+
+$ kubectl get pod -n demo druid-cluster-brokers-0 -o json | jq '.spec.containers[].resources'
+{
+ "limits": {
+ "cpu": "600m",
+ "memory": "2Gi"
+ },
+ "requests": {
+ "cpu": "600m",
+ "memory": "2Gi"
+ }
+}
+```
+
+Here we can see that the containers of both the `coordinators` and `brokers` have the resources we have specified in the manifest.
+
+## Using Node Selector
+
+Here in this example we will use a [node selector](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/) to schedule our druid pod to a specific node. Applying nodeSelector to the Pod involves several steps. We first need to assign a label to some node that will be later used by the `nodeSelector`. Let's find what nodes exist in your cluster. To get the name of these nodes, you can run:
+
+```bash
+$ kubectl get nodes --show-labels
+NAME STATUS ROLES AGE VERSION LABELS
+lke212553-307295-339173d10000 Ready 36m v1.30.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=g6-dedicated-4,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=ap-south,kubernetes.io/arch=amd64,kubernetes.io/hostname=lke212553-307295-339173d10000,kubernetes.io/os=linux,lke.linode.com/pool-id=307295,node.k8s.linode.com/host-uuid=618158120a299c6fd37f00d01d355ca18794c467,node.kubernetes.io/instance-type=g6-dedicated-4,topology.kubernetes.io/region=ap-south,topology.linode.com/region=ap-south
+lke212553-307295-5541798e0000 Ready 36m v1.30.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=g6-dedicated-4,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=ap-south,kubernetes.io/arch=amd64,kubernetes.io/hostname=lke212553-307295-5541798e0000,kubernetes.io/os=linux,lke.linode.com/pool-id=307295,node.k8s.linode.com/host-uuid=75cfe3dbbb0380f1727efc53f5192897485e95d5,node.kubernetes.io/instance-type=g6-dedicated-4,topology.kubernetes.io/region=ap-south,topology.linode.com/region=ap-south
+lke212553-307295-5b53c5520000 Ready 36m v1.30.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=g6-dedicated-4,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=ap-south,kubernetes.io/arch=amd64,kubernetes.io/hostname=lke212553-307295-5b53c5520000,kubernetes.io/os=linux,lke.linode.com/pool-id=307295,node.k8s.linode.com/host-uuid=792bac078d7ce0e548163b9423416d7d8c88b08f,node.kubernetes.io/instance-type=g6-dedicated-4,topology.kubernetes.io/region=ap-south,topology.linode.com/region=ap-south
+```
+As you see, we have three nodes in the cluster: lke212553-307295-339173d10000, lke212553-307295-5541798e0000, and lke212553-307295-5b53c5520000.
+
+Next, select a node to which you want to add a label. For example, let's say we want to add a new label with the key `disktype` and value `ssd` to the `lke212553-307295-5541798e0000` node, which is a node with SSD storage. To do so, run:
+```bash
+$ kubectl label nodes lke212553-307295-5541798e0000 disktype=ssd
+node/lke212553-307295-5541798e0000 labeled
+```
+As you noticed, the command above follows the format `kubectl label nodes <node-name> <label-key>=<label-value>`.
+Finally, let’s verify that the new label was added by running:
+```bash
+ $ kubectl get nodes --show-labels
+NAME STATUS ROLES AGE VERSION LABELS
+lke212553-307295-339173d10000 Ready 41m v1.30.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=g6-dedicated-4,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=ap-south,kubernetes.io/arch=amd64,kubernetes.io/hostname=lke212553-307295-339173d10000,kubernetes.io/os=linux,lke.linode.com/pool-id=307295,node.k8s.linode.com/host-uuid=618158120a299c6fd37f00d01d355ca18794c467,node.kubernetes.io/instance-type=g6-dedicated-4,topology.kubernetes.io/region=ap-south,topology.linode.com/region=ap-south
+lke212553-307295-5541798e0000 Ready 41m v1.30.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=g6-dedicated-4,beta.kubernetes.io/os=linux,disktype=ssd,failure-domain.beta.kubernetes.io/region=ap-south,kubernetes.io/arch=amd64,kubernetes.io/hostname=lke212553-307295-5541798e0000,kubernetes.io/os=linux,lke.linode.com/pool-id=307295,node.k8s.linode.com/host-uuid=75cfe3dbbb0380f1727efc53f5192897485e95d5,node.kubernetes.io/instance-type=g6-dedicated-4,topology.kubernetes.io/region=ap-south,topology.linode.com/region=ap-south
+lke212553-307295-5b53c5520000 Ready 41m v1.30.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=g6-dedicated-4,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=ap-south,kubernetes.io/arch=amd64,kubernetes.io/hostname=lke212553-307295-5b53c5520000,kubernetes.io/os=linux,lke.linode.com/pool-id=307295,node.k8s.linode.com/host-uuid=792bac078d7ce0e548163b9423416d7d8c88b08f,node.kubernetes.io/instance-type=g6-dedicated-4,topology.kubernetes.io/region=ap-south,topology.linode.com/region=ap-south
+```
+As you see, the `lke212553-307295-5541798e0000` node now has a new label `disktype=ssd`. To see all labels attached to the node, you can also run:
+```bash
+$ kubectl describe node "lke212553-307295-5541798e0000"
+Name: lke212553-307295-5541798e0000
+Roles:
+Labels: beta.kubernetes.io/arch=amd64
+ beta.kubernetes.io/instance-type=g6-dedicated-4
+ beta.kubernetes.io/os=linux
+ disktype=ssd
+ failure-domain.beta.kubernetes.io/region=ap-south
+ kubernetes.io/arch=amd64
+ kubernetes.io/hostname=lke212553-307295-5541798e0000
+ kubernetes.io/os=linux
+ lke.linode.com/pool-id=307295
+ node.k8s.linode.com/host-uuid=75cfe3dbbb0380f1727efc53f5192897485e95d5
+ node.kubernetes.io/instance-type=g6-dedicated-4
+ topology.kubernetes.io/region=ap-south
+ topology.linode.com/region=ap-south
+```
+Along with the `disktype=ssd` label we’ve just added, you can see other labels such as `beta.kubernetes.io/arch` or `kubernetes.io/hostname`. These are all default labels attached to Kubernetes nodes.
+
+Now let's create a druid with this new label as nodeSelector. Below is the yaml we are going to apply:
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Druid
+metadata:
+ name: druid-node-selector
+ namespace: demo
+spec:
+ version: 28.0.1
+ deepStorage:
+ type: s3
+ configSecret:
+ name: deep-storage-config
+ topology:
+ routers:
+ replicas: 1
+ coordinators:
+ podTemplate:
+ spec:
+ nodeSelector:
+ disktype: ssd
+ deletionPolicy: Delete
+```
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/druid/configuration/podtemplating/yamls/druid-node-selector.yaml
+druid.kubedb.com/druid-node-selector created
+```
+Now, wait a few minutes. KubeDB operator will create necessary petset, services, secret etc. If everything goes well, we will see that the `druid-node-selector` instance is in `Ready` state.
+
+```bash
+$ kubectl get druid -n demo
+NAME TYPE VERSION STATUS AGE
+druid-node-selector kubedb.com/v1alpha2 28.0.1 Ready 54m
+```
+You can verify that by running `kubectl get pods -n demo druid-node-selector-coordinators-0 -o wide` and looking at the “NODE” to which the Pod was assigned.
+```bash
+$ kubectl get pods -n demo druid-node-selector-coordinators-0 -o wide
+NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
+druid-node-selector-coordinators-0   1/1     Running   0          3m19s   10.2.1.7   lke212553-307295-5541798e0000
+```
+We can successfully verify that our pod was scheduled to our desired node.
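+
+As an additional check, you can print the `nodeSelector` that was propagated into the pod spec. This is just a convenience sketch using `jsonpath`; it should echo the `disktype: ssd` selector we set in the Druid manifest.
+
+```bash
+# Print the nodeSelector of the coordinators pod
+$ kubectl get pod -n demo druid-node-selector-coordinators-0 -o jsonpath='{.spec.nodeSelector}'
+```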
+
+## Using Taints and Tolerations
+
+Here in this example we will use [Taints and Tolerations](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) to schedule our druid pods to a specific node and also prevent them from being scheduled on other nodes. Applying taints and tolerations to the Pod involves several steps. Let's find what nodes exist in your cluster. To get the name of these nodes, you can run:
+
+```bash
+$ kubectl get nodes --show-labels
+NAME STATUS ROLES AGE VERSION LABELS
+lke212553-307295-339173d10000 Ready 36m v1.30.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=g6-dedicated-4,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=ap-south,kubernetes.io/arch=amd64,kubernetes.io/hostname=lke212553-307295-339173d10000,kubernetes.io/os=linux,lke.linode.com/pool-id=307295,node.k8s.linode.com/host-uuid=618158120a299c6fd37f00d01d355ca18794c467,node.kubernetes.io/instance-type=g6-dedicated-4,topology.kubernetes.io/region=ap-south,topology.linode.com/region=ap-south
+lke212553-307295-5541798e0000 Ready 36m v1.30.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=g6-dedicated-4,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=ap-south,kubernetes.io/arch=amd64,kubernetes.io/hostname=lke212553-307295-5541798e0000,kubernetes.io/os=linux,lke.linode.com/pool-id=307295,node.k8s.linode.com/host-uuid=75cfe3dbbb0380f1727efc53f5192897485e95d5,node.kubernetes.io/instance-type=g6-dedicated-4,topology.kubernetes.io/region=ap-south,topology.linode.com/region=ap-south
+lke212553-307295-5b53c5520000 Ready 36m v1.30.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=g6-dedicated-4,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=ap-south,kubernetes.io/arch=amd64,kubernetes.io/hostname=lke212553-307295-5b53c5520000,kubernetes.io/os=linux,lke.linode.com/pool-id=307295,node.k8s.linode.com/host-uuid=792bac078d7ce0e548163b9423416d7d8c88b08f,node.kubernetes.io/instance-type=g6-dedicated-4,topology.kubernetes.io/region=ap-south,topology.linode.com/region=ap-south
+```
+As you see, we have three nodes in the cluster: lke212553-307295-339173d10000, lke212553-307295-5541798e0000, and lke212553-307295-5b53c5520000.
+
+Next, we are going to taint these nodes.
+```bash
+$ kubectl taint nodes lke212553-307295-339173d10000 key1=node1:NoSchedule
+node/lke212553-307295-339173d10000 tainted
+
+$ kubectl taint nodes lke212553-307295-5541798e0000 key1=node2:NoSchedule
+node/lke212553-307295-5541798e0000 tainted
+
+$ kubectl taint nodes lke212553-307295-5b53c5520000 key1=node3:NoSchedule
+node/lke212553-307295-5b53c5520000 tainted
+```
+Let's see our tainted nodes here,
+```bash
+$ kubectl get nodes -o json | jq -r '.items[] | select(.spec.taints != null) | .metadata.name, .spec.taints'
+lke212553-307295-339173d10000
+[
+ {
+ "effect": "NoSchedule",
+ "key": "key1",
+ "value": "node1"
+ }
+]
+lke212553-307295-5541798e0000
+[
+ {
+ "effect": "NoSchedule",
+ "key": "key1",
+ "value": "node2"
+ }
+]
+lke212553-307295-5b53c5520000
+[
+ {
+ "effect": "NoSchedule",
+ "key": "key1",
+ "value": "node3"
+ }
+]
+```
+We can see that our taints were successfully assigned. Now let's try to create a druid without proper tolerations. Here is the yaml of druid we are going to create.
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Druid
+metadata:
+ name: druid-without-tolerations
+ namespace: demo
+spec:
+ version: 28.0.1
+ deepStorage:
+ type: s3
+ configSecret:
+ name: deep-storage-config
+ topology:
+ routers:
+ replicas: 1
+ deletionPolicy: Delete
+```
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/druid/configuration/podtemplating/yamls/druid-without-tolerations.yaml
+druid.kubedb.com/druid-without-tolerations created
+```
+Now, wait a few minutes. KubeDB operator will create necessary petset, services, secret etc. Since the nodes are tainted and this Druid object does not specify any matching tolerations, the pods will be created but will remain in `Pending` state.
+
+Check whether the petset's pods are running or not,
+```bash
+$ kubectl get pods -n demo -l app.kubernetes.io/instance=druid-without-tolerations
+NAME READY STATUS RESTARTS AGE
+druid-without-tolerations-brokers-0 0/1 Pending 0 3m35s
+druid-without-tolerations-coordinators-0    0/1     Pending   0          3m35s
+druid-without-tolerations-historicals-0 0/1 Pending 0 3m35s
+druid-without-tolerations-middlemanagers-0   0/1     Pending   0          3m35s
+druid-without-tolerations-routers-0 0/1 Pending 0 3m35s
+```
+Here we can see that the pods are not running. So let's describe one of the pods,
+```bash
+$ kubectl describe pods -n demo druid-without-tolerations-coordinators-0
+Name: druid-without-tolerations-coordinators-0
+Namespace: demo
+Priority: 0
+Service Account: default
+Node: kind-control-plane/172.18.0.2
+Start Time: Wed, 13 Nov 2024 11:59:06 +0600
+Labels: app.kubernetes.io/component=database
+ app.kubernetes.io/instance=druid-without-tolerations
+ app.kubernetes.io/managed-by=kubedb.com
+ app.kubernetes.io/name=druids.kubedb.com
+ apps.kubernetes.io/pod-index=0
+ controller-revision-hash=druid-without-tolerations-coordinators-65c8c99fc7
+ kubedb.com/role=coordinators
+ statefulset.kubernetes.io/pod-name=druid-without-tolerations-coordinators-0
+Annotations:
+Status: Running
+IP: 10.244.0.53
+IPs:
+ IP: 10.244.0.53
+Controlled By: PetSet/druid-without-tolerations-coordinators
+Init Containers:
+ init-druid:
+ Container ID: containerd://62c9a2053d619dded2085e354cd2c0dfa238761033cc0483c824c1ed8ee4c002
+ Image: ghcr.io/kubedb/druid-init:28.0.1@sha256:ed87835bc0f89dea923fa8e3cf1ef209e3e41cb93944a915289322035dcd8a91
+ Image ID: ghcr.io/kubedb/druid-init@sha256:ed87835bc0f89dea923fa8e3cf1ef209e3e41cb93944a915289322035dcd8a91
+ Port:
+ Host Port:
+ State: Terminated
+ Reason: Completed
+ Exit Code: 0
+ Started: Wed, 13 Nov 2024 11:59:07 +0600
+ Finished: Wed, 13 Nov 2024 11:59:07 +0600
+ Ready: True
+ Restart Count: 0
+ Limits:
+ memory: 512Mi
+ Requests:
+ cpu: 200m
+ memory: 512Mi
+ Environment:
+ DRUID_METADATA_TLS_ENABLE: false
+ DRUID_METADATA_STORAGE_TYPE: MySQL
+ Mounts:
+ /opt/druid/conf from main-config-volume (rw)
+ /opt/druid/extensions/mysql-metadata-storage from mysql-metadata-storage (rw)
+ /tmp/config/custom-config from custom-config (rw)
+ /tmp/config/operator-config from operator-config-volume (rw)
+ /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9t5kp (ro)
+Containers:
+ druid:
+ Container ID: containerd://3a52f120ca09f90fcdc062c94bf404964add7a5b6ded4a372400267a9d0fd598
+ Image: ghcr.io/appscode-images/druid:28.0.1@sha256:d86e424233ec5a120c1e072cf506fa169868fd9572bbb9800a85400f0c879dec
+ Image ID: ghcr.io/appscode-images/druid@sha256:d86e424233ec5a120c1e072cf506fa169868fd9572bbb9800a85400f0c879dec
+ Port: 8081/TCP
+ Host Port: 0/TCP
+ Command:
+ /druid.sh
+ coordinator
+ State: Running
+ Started: Wed, 13 Nov 2024 11:59:09 +0600
+ Ready: True
+ Restart Count: 0
+ Limits:
+ cpu: 600m
+ memory: 2Gi
+ Requests:
+ cpu: 600m
+ memory: 2Gi
+ Environment:
+ DRUID_ADMIN_PASSWORD: Optional: false
+ DRUID_METADATA_STORAGE_PASSWORD: VHJ6!hFuT8WDjcyy
+ DRUID_ZK_SERVICE_PASSWORD: VHJ6!hFuT8WDjcyy
+ Mounts:
+ /opt/druid/conf from main-config-volume (rw)
+ /opt/druid/extensions/mysql-metadata-storage from mysql-metadata-storage (rw)
+ /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9t5kp (ro)
+
+Conditions:
+ Type Status
+ PodReadyToStartContainers True
+ Initialized True
+ Ready True
+ ContainersReady True
+ PodScheduled True
+Volumes:
+ data:
+ Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
+ ClaimName: data-druid-without-tolerations-0
+ ReadOnly: false
+ init-scripts:
+ Type: EmptyDir (a temporary directory that shares a pod's lifetime)
+ Medium:
+ SizeLimit:
+ kube-api-access-htm2z:
+ Type: Projected (a volume that contains injected data from multiple sources)
+ TokenExpirationSeconds: 3607
+ ConfigMapName: kube-root-ca.crt
+ ConfigMapOptional:
+ DownwardAPI: true
+QoS Class: Burstable
+Node-Selectors:
+Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
+ node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
+Topology Spread Constraints: kubernetes.io/hostname:ScheduleAnyway when max skew 1 is exceeded for selector app.kubernetes.io/component=database,app.kubernetes.io/instance=druid-without-tolerations,app.kubernetes.io/managed-by=kubedb.com,app.kubernetes.io/name=druids.kubedb.com,kubedb.com/petset=standalone
+ topology.kubernetes.io/zone:ScheduleAnyway when max skew 1 is exceeded for selector app.kubernetes.io/component=database,app.kubernetes.io/instance=druid-without-tolerations,app.kubernetes.io/managed-by=kubedb.com,app.kubernetes.io/name=druids.kubedb.com,kubedb.com/petset=standalone
+Events:
+ Type Reason Age From Message
+ ---- ------ ---- ---- -------
+ Warning FailedScheduling 5m20s default-scheduler 0/3 nodes are available: 1 node(s) had untolerated taint {key1: node1}, 1 node(s) had untolerated taint {key1: node2}, 1 node(s) had untolerated taint {key1: node3}. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.
+ Warning FailedScheduling 11s default-scheduler 0/3 nodes are available: 1 node(s) had untolerated taint {key1: node1}, 1 node(s) had untolerated taint {key1: node2}, 1 node(s) had untolerated taint {key1: node3}. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.
+ Normal NotTriggerScaleUp 13s (x31 over 5m15s) cluster-autoscaler pod didn't trigger scale-up:
+```
+Here we can see that the pod has no tolerations for the tainted nodes, and because of that it cannot be scheduled.
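+
+If you want to see exactly which tolerations the pod does have, you can print them from the pod spec. This is only an illustrative check; expect to see just the default `not-ready`/`unreachable` tolerations that Kubernetes adds automatically, and nothing matching our `key1` taints.
+
+```bash
+# List the tolerations of the pending coordinators pod
+$ kubectl get pod -n demo druid-without-tolerations-coordinators-0 -o jsonpath='{.spec.tolerations}'
+```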
+
+So, let's add proper tolerations and create another druid. Here is the yaml we are going to apply,
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Druid
+metadata:
+  name: druid-with-tolerations
+ namespace: demo
+spec:
+ version: 28.0.1
+ deepStorage:
+ type: s3
+ configSecret:
+ name: deep-storage-config
+ topology:
+ routers:
+ podTemplate:
+ spec:
+ tolerations:
+ - key: "key1"
+ operator: "Equal"
+ value: "node1"
+ effect: "NoSchedule"
+ replicas: 1
+ coordinators:
+ podTemplate:
+ spec:
+ tolerations:
+ - key: "key1"
+ operator: "Equal"
+ value: "node1"
+ effect: "NoSchedule"
+ replicas: 1
+ brokers:
+ podTemplate:
+ spec:
+ tolerations:
+ - key: "key1"
+ operator: "Equal"
+ value: "node1"
+ effect: "NoSchedule"
+ replicas: 1
+ historicals:
+ podTemplate:
+ spec:
+ tolerations:
+ - key: "key1"
+ operator: "Equal"
+ value: "node1"
+ effect: "NoSchedule"
+ replicas: 1
+ middleManagers:
+ podTemplate:
+ spec:
+ tolerations:
+ - key: "key1"
+ operator: "Equal"
+ value: "node1"
+ effect: "NoSchedule"
+ replicas: 1
+ deletionPolicy: Delete
+```
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/druid/configuration/podtemplating/yamls/druid-with-tolerations.yaml
+druid.kubedb.com/druid-with-tolerations created
+```
+Now, wait a few minutes. KubeDB operator will create necessary petset, services, secret etc. If everything goes well, we will see that the pods of `druid-with-tolerations` have been created.
+
+Check that the petset's pod is running
+
+```bash
+$ kubectl get pods -n demo -l app.kubernetes.io/instance=druid-with-tolerations
+
+NAME READY STATUS RESTARTS AGE
+druid-with-tolerations-brokers-0 1/1 Running 0 164m
+druid-with-tolerations-coordinators-0 1/1 Running 0 164m
+druid-with-tolerations-historicals-0 1/1 Running 0 164m
+druid-with-tolerations-middlemanagers-0 1/1 Running 0 164m
+druid-with-tolerations-routers-0 1/1 Running 0 164m
+```
+As we see the pods are running, you can verify that by running `kubectl get pods -n demo druid-with-tolerations-coordinators-0 -o wide` and looking at the “NODE” to which the Pod was assigned.
+```bash
+$ kubectl get pods -n demo druid-with-tolerations-coordinators-0 -o wide
+NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
+druid-with-tolerations-coordinators-0 1/1 Running 0 3m49s 10.2.0.8 lke212553-307295-339173d10000
+```
+We can successfully verify that our pod was scheduled to a node for which it has a matching toleration.
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete druid -n demo druid-cluster druid-node-selector druid-without-tolerations druid-with-tolerations
+
+kubectl delete ns demo
+```
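+
+Since we also labeled and tainted the nodes earlier in this guide, you may want to revert those changes as well. The commands below are a sketch that simply removes the `disktype` label and the `key1` taints we added; adjust the node names to match your cluster.
+
+```bash
+# Remove the label added for the nodeSelector example
+kubectl label nodes lke212553-307295-5541798e0000 disktype-
+
+# Remove the taints added for the tolerations example
+kubectl taint nodes lke212553-307295-339173d10000 key1=node1:NoSchedule-
+kubectl taint nodes lke212553-307295-5541798e0000 key1=node2:NoSchedule-
+kubectl taint nodes lke212553-307295-5b53c5520000 key1=node3:NoSchedule-
+```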
+
+If you would like to uninstall KubeDB operator, please follow the steps [here](/docs/setup/README.md).
+
+## Next Steps
+
+- [Quickstart Druid](/docs/guides/druid/quickstart/guide/index.md) with KubeDB Operator.
+- Detail concepts of [Druid object](/docs/guides/druid/concepts/druid.md).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).
diff --git a/docs/guides/druid/configuration/podtemplating/yamls/deep-storage-config.yaml b/docs/guides/druid/configuration/podtemplating/yamls/deep-storage-config.yaml
new file mode 100644
index 0000000000..3612595828
--- /dev/null
+++ b/docs/guides/druid/configuration/podtemplating/yamls/deep-storage-config.yaml
@@ -0,0 +1,16 @@
+apiVersion: v1
+kind: Secret
+metadata:
+ name: deep-storage-config
+ namespace: demo
+stringData:
+ druid.storage.type: "s3"
+ druid.storage.bucket: "druid"
+ druid.storage.baseKey: "druid/segments"
+ druid.s3.accessKey: "minio"
+ druid.s3.secretKey: "minio123"
+ druid.s3.protocol: "http"
+ druid.s3.enablePathStyleAccess: "true"
+ druid.s3.endpoint.signingRegion: "us-east-1"
+ druid.s3.endpoint.url: "http://myminio-hl.demo.svc.cluster.local:9000/"
+
diff --git a/docs/guides/druid/configuration/podtemplating/yamls/druid-cluster.yaml b/docs/guides/druid/configuration/podtemplating/yamls/druid-cluster.yaml
new file mode 100644
index 0000000000..2004002096
--- /dev/null
+++ b/docs/guides/druid/configuration/podtemplating/yamls/druid-cluster.yaml
@@ -0,0 +1,43 @@
+apiVersion: kubedb.com/v1alpha2
+kind: Druid
+metadata:
+ name: druid-cluster
+ namespace: demo
+spec:
+ version: 28.0.1
+ configSecret:
+ name: config-secret
+ deepStorage:
+ type: s3
+ configSecret:
+ name: deep-storage-config
+ topology:
+ coordinators:
+ replicas: 1
+ podTemplate:
+ spec:
+ containers:
+ - name: druid
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "600m"
+ requests:
+ memory: "2Gi"
+ cpu: "600m"
+ brokers:
+ replicas: 1
+ podTemplate:
+ spec:
+ containers:
+ - name: druid
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "600m"
+ requests:
+ memory: "2Gi"
+ cpu: "600m"
+ routers:
+ replicas: 1
+ deletionPolicy: WipeOut
diff --git a/docs/guides/druid/configuration/podtemplating/yamls/druid-node-selector.yaml b/docs/guides/druid/configuration/podtemplating/yamls/druid-node-selector.yaml
new file mode 100644
index 0000000000..7ad2eae717
--- /dev/null
+++ b/docs/guides/druid/configuration/podtemplating/yamls/druid-node-selector.yaml
@@ -0,0 +1,20 @@
+apiVersion: kubedb.com/v1alpha2
+kind: Druid
+metadata:
+ name: druid-node-selector
+ namespace: demo
+spec:
+ version: 28.0.1
+ deepStorage:
+ type: s3
+ configSecret:
+ name: deep-storage-config
+ topology:
+ routers:
+ replicas: 1
+ coordinators:
+ podTemplate:
+ spec:
+ nodeSelector:
+ disktype: ssd
+ deletionPolicy: Delete
\ No newline at end of file
diff --git a/docs/guides/druid/configuration/podtemplating/yamls/druid-with-tolerations.yaml b/docs/guides/druid/configuration/podtemplating/yamls/druid-with-tolerations.yaml
new file mode 100644
index 0000000000..4ef158f85a
--- /dev/null
+++ b/docs/guides/druid/configuration/podtemplating/yamls/druid-with-tolerations.yaml
@@ -0,0 +1,58 @@
+apiVersion: kubedb.com/v1alpha2
+kind: Druid
+metadata:
+  name: druid-with-tolerations
+ namespace: demo
+spec:
+ version: 28.0.1
+ deepStorage:
+ type: s3
+ configSecret:
+ name: deep-storage-config
+ topology:
+ routers:
+ podTemplate:
+ spec:
+ tolerations:
+ - key: "key1"
+ operator: "Equal"
+ value: "node1"
+ effect: "NoSchedule"
+ replicas: 1
+ coordinators:
+ podTemplate:
+ spec:
+ tolerations:
+ - key: "key1"
+ operator: "Equal"
+ value: "node1"
+ effect: "NoSchedule"
+ replicas: 1
+ brokers:
+ podTemplate:
+ spec:
+ tolerations:
+ - key: "key1"
+ operator: "Equal"
+ value: "node1"
+ effect: "NoSchedule"
+ replicas: 1
+ historicals:
+ podTemplate:
+ spec:
+ tolerations:
+ - key: "key1"
+ operator: "Equal"
+ value: "node1"
+ effect: "NoSchedule"
+ replicas: 1
+ middleManagers:
+ podTemplate:
+ spec:
+ tolerations:
+ - key: "key1"
+ operator: "Equal"
+ value: "node1"
+ effect: "NoSchedule"
+ replicas: 1
+ deletionPolicy: Delete
diff --git a/docs/guides/druid/configuration/podtemplating/yamls/druid-without-tolerations.yaml b/docs/guides/druid/configuration/podtemplating/yamls/druid-without-tolerations.yaml
new file mode 100644
index 0000000000..1098f3d70d
--- /dev/null
+++ b/docs/guides/druid/configuration/podtemplating/yamls/druid-without-tolerations.yaml
@@ -0,0 +1,15 @@
+apiVersion: kubedb.com/v1alpha2
+kind: Druid
+metadata:
+ name: druid-without-tolerations
+ namespace: demo
+spec:
+ version: 28.0.1
+ deepStorage:
+ type: s3
+ configSecret:
+ name: deep-storage-config
+ topology:
+ routers:
+ replicas: 1
+ deletionPolicy: Delete
diff --git a/docs/guides/druid/monitoring/_index.md b/docs/guides/druid/monitoring/_index.md
new file mode 100755
index 0000000000..dd0ffbb60c
--- /dev/null
+++ b/docs/guides/druid/monitoring/_index.md
@@ -0,0 +1,10 @@
+---
+title: Monitor Druid with Prometheus & Grafana
+menu:
+ docs_{{ .version }}:
+ identifier: guides-druid-monitoring
+ name: Monitoring
+ parent: guides-druid
+ weight: 140
+menu_name: docs_{{ .version }}
+---
diff --git a/docs/guides/druid/monitoring/images/druid-monitoring.png b/docs/guides/druid/monitoring/images/druid-monitoring.png
new file mode 100644
index 0000000000..2c609a2b98
Binary files /dev/null and b/docs/guides/druid/monitoring/images/druid-monitoring.png differ
diff --git a/docs/guides/druid/monitoring/images/druid-prometheus.png b/docs/guides/druid/monitoring/images/druid-prometheus.png
new file mode 100644
index 0000000000..5d6d83ed1b
Binary files /dev/null and b/docs/guides/druid/monitoring/images/druid-prometheus.png differ
diff --git a/docs/guides/druid/monitoring/overview.md b/docs/guides/druid/monitoring/overview.md
new file mode 100644
index 0000000000..752599f2e8
--- /dev/null
+++ b/docs/guides/druid/monitoring/overview.md
@@ -0,0 +1,139 @@
+---
+title: Druid Monitoring Overview
+description: Druid Monitoring Overview
+menu:
+ docs_{{ .version }}:
+ identifier: guides-druid-monitoring-guide
+ name: Overview
+ parent: guides-druid-monitoring
+ weight: 10
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Monitoring Apache Druid with KubeDB
+
+KubeDB has native support for monitoring via [Prometheus](https://prometheus.io/). You can use builtin [Prometheus](https://github.com/prometheus/prometheus) scraper or [Prometheus operator](https://github.com/prometheus-operator/prometheus-operator) to monitor KubeDB managed databases. This tutorial will show you how database monitoring works with KubeDB and how to configure Database crd to enable monitoring.
+
+## Overview
+
+KubeDB uses Prometheus [exporter](https://prometheus.io/docs/instrumenting/exporters/#databases) images to export Prometheus metrics for respective databases. Since there is no officially recognized exporter image that exposes metrics for Druid yet, KubeDB managed Druid instances use the [JMX Exporter](https://github.com/prometheus/jmx_exporter) instead. This exporter is intended to be run as a Java Agent inside the Druid containers, exposing an HTTP server and serving metrics of the local JVM. The following diagram shows the logical flow of database monitoring with KubeDB.
+
+
+
+
+
+When a user creates a Druid crd with `spec.monitor` section configured, KubeDB operator provisions the respective Druid cluster while running the exporter as a Java agent inside the druid containers. It also creates a dedicated stats service with name `{database-crd-name}-stats` for monitoring. Prometheus server can scrape metrics using this stats service.
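+
+Once such a database is provisioned, you can list the stats services that the operator has created. This is just a hedged example assuming the database lives in the `demo` namespace; `kubedb.com/role=stats` is the label KubeDB puts on these services.
+
+```bash
+# List stats services created by KubeDB in the demo namespace
+$ kubectl get svc -n demo -l kubedb.com/role=stats
+```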
+
+## Configure Monitoring
+
+In order to enable monitoring for a database, you have to configure the `spec.monitor` section. KubeDB provides the following options to configure the `spec.monitor` section:
+
+| Field | Type | Uses |
+|----------------------------------------------------|------------|-----------------------------------------------------------------------------------------------------------------------------------------|
+| `spec.monitor.agent` | `Required` | Type of the monitoring agent that will be used to monitor this database. It can be `prometheus.io/builtin` or `prometheus.io/operator`. |
+| `spec.monitor.prometheus.exporter.port` | `Optional` | Port number where the exporter side car will serve metrics. |
+| `spec.monitor.prometheus.exporter.args` | `Optional` | Arguments to pass to the exporter sidecar. |
+| `spec.monitor.prometheus.exporter.env` | `Optional` | List of environment variables to set in the exporter sidecar container. |
+| `spec.monitor.prometheus.exporter.resources` | `Optional` | Resources required by exporter sidecar container. |
+| `spec.monitor.prometheus.exporter.securityContext` | `Optional` | Security options the exporter should run with. |
+| `spec.monitor.prometheus.serviceMonitor.labels` | `Optional` | Labels for `ServiceMonitor` crd. |
+| `spec.monitor.prometheus.serviceMonitor.interval` | `Optional` | Interval at which metrics should be scraped. |
+
+## Sample Configuration
+
+A sample YAML for a Druid crd with the `spec.monitor` section configured to enable monitoring with [Prometheus operator](https://github.com/prometheus-operator/prometheus-operator) is shown below.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Druid
+metadata:
+ name: druid-with-monitoring
+ namespace: demo
+spec:
+ version: 28.0.1
+ deepStorage:
+ type: s3
+ configSecret:
+ name: deep-storage-config
+ topology:
+ routers:
+ replicas: 1
+ monitor:
+ agent: prometheus.io/operator
+ prometheus:
+ serviceMonitor:
+ labels:
+ release: prometheus
+ interval: 10s
+ deletionPolicy: WipeOut
+```
+
+### Create External Dependency (Deep Storage)
+
+Before proceeding further, we need to prepare deep storage, which is one of the external dependencies of Druid, used for storing the segments. It is a storage mechanism that Apache Druid itself does not provide. **Amazon S3**, **Google Cloud Storage**, **Azure Blob Storage**, **S3-compatible storage** (like **MinIO**), or **HDFS** are generally convenient options for deep storage.
+
+In this tutorial, we will run a `minio-server` as deep storage in our local `kind` cluster using `minio-operator` and create a bucket named `druid` in it, which the deployed druid database will use.
+
+```bash
+
+$ helm repo add minio https://operator.min.io/
+$ helm repo update minio
+$ helm upgrade --install --namespace "minio-operator" --create-namespace "minio-operator" minio/operator --set operator.replicaCount=1
+
+$ helm upgrade --install --namespace "demo" --create-namespace druid-minio minio/tenant \
+--set tenant.pools[0].servers=1 \
+--set tenant.pools[0].volumesPerServer=1 \
+--set tenant.pools[0].size=1Gi \
+--set tenant.certificate.requestAutoCert=false \
+--set tenant.buckets[0].name="druid" \
+--set tenant.pools[0].name="default"
+
+```
+
+Now we need to create a `Secret` named `deep-storage-config`. It contains the connection information that the druid database will use to connect to the deep storage.
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+ name: deep-storage-config
+ namespace: demo
+stringData:
+ druid.storage.type: "s3"
+ druid.storage.bucket: "druid"
+ druid.storage.baseKey: "druid/segments"
+ druid.s3.accessKey: "minio"
+ druid.s3.secretKey: "minio123"
+ druid.s3.protocol: "http"
+ druid.s3.enablePathStyleAccess: "true"
+ druid.s3.endpoint.signingRegion: "us-east-1"
+ druid.s3.endpoint.url: "http://myminio-hl.demo.svc.cluster.local:9000/"
+```
+
+Let’s create the `deep-storage-config` Secret shown above:
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/druid/monitoring/yamls/deep-storage-config.yaml
+secret/deep-storage-config created
+```
+
+Let's deploy the above druid example by running the following command:
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/druid/monitoring/yamls/druid-with-monitoring.yaml
+druid.kubedb.com/druid created
+```
+
+Here, we have specified that we are going to monitor this server using Prometheus operator through `spec.monitor.agent: prometheus.io/operator`. KubeDB will create a `ServiceMonitor` crd in the database's namespace and this `ServiceMonitor` will have the `release: prometheus` label.
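+
+As a quick sanity check once the cluster is ready, you can confirm that the `ServiceMonitor` was created with that label. This is only a sketch; it assumes the Druid object was deployed in the `demo` namespace as shown above.
+
+```bash
+# Verify the ServiceMonitor created for the stats service carries the expected label
+$ kubectl get servicemonitor -n demo -l release=prometheus
+```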
+
+## Next Steps
+
+- Learn how to use KubeDB to run an Apache Druid cluster [here](/docs/guides/druid/README.md).
+- Deploy a [dedicated topology cluster](/docs/guides/druid/clustering/overview/index.md) for Apache Druid.
+- Detail concepts of [DruidVersion object](/docs/guides/druid/concepts/druidversion.md).
+
+[//]: # (- Learn to use KubeDB managed Druid objects using [CLIs](/docs/guides/druid/cli/cli.md).)
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).
\ No newline at end of file
diff --git a/docs/guides/druid/monitoring/using-builtin-prometheus.md b/docs/guides/druid/monitoring/using-builtin-prometheus.md
new file mode 100644
index 0000000000..69ec2aa62e
--- /dev/null
+++ b/docs/guides/druid/monitoring/using-builtin-prometheus.md
@@ -0,0 +1,372 @@
+---
+title: Monitor Druid using Builtin Prometheus Discovery
+menu:
+ docs_{{ .version }}:
+ identifier: guides-druid-monitoring-builtin-monitoring
+ name: Builtin Prometheus
+ parent: guides-druid-monitoring
+ weight: 20
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Monitoring Druid with builtin Prometheus
+
+This tutorial will show you how to monitor Druid cluster using builtin [Prometheus](https://github.com/prometheus/prometheus) scraper.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Install KubeDB operator in your cluster following the steps [here](/docs/setup/README.md).
+
+- If you are not familiar with how to configure Prometheus to scrape metrics from various Kubernetes resources, please read the tutorial from [here](https://github.com/appscode/third-party-tools/tree/master/monitoring/prometheus/builtin).
+
+- To learn how Prometheus monitoring works with KubeDB in general, please visit [here](/docs/guides/druid/monitoring/overview.md).
+
+- To keep Prometheus resources isolated, we are going to use a separate namespace called `monitoring` to deploy respective monitoring resources. We are going to deploy database in `demo` namespace.
+
+ ```bash
+ $ kubectl create ns monitoring
+ namespace/monitoring created
+
+ $ kubectl create ns demo
+ namespace/demo created
+ ```
+
+> Note: YAML files used in this tutorial are stored in [docs/examples/druid](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/druid) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Deploy Druid with Monitoring Enabled
+
+At first, let's deploy a Druid cluster with monitoring enabled. Below is the Druid object that we are going to create.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Druid
+metadata:
+ name: druid-with-monitoring
+ namespace: demo
+spec:
+ version: 28.0.1
+ deepStorage:
+ type: s3
+ configSecret:
+ name: deep-storage-config
+ topology:
+ routers:
+ replicas: 1
+ monitor:
+ agent: prometheus.io/builtin
+ prometheus:
+ exporter:
+ port: 56790
+ serviceMonitor:
+ labels:
+ release: prometheus
+ interval: 10s
+ deletionPolicy: WipeOut
+```
+
+Here,
+
+- `spec.monitor.agent: prometheus.io/builtin` specifies that we are going to monitor this server using builtin Prometheus scraper.
+- `spec.monitor.prometheus.exporter.port: 56790` specifies the port where the exporter is running.
+
+Let's create the Druid crd we have shown above.
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/druid/monitoring/yamls/druid-monitoring-builtin.yaml
+druid.kubedb.com/druid-with-monitoring created
+```
+
+Now, wait for the cluster to go into `Ready` state.
+
+```bash
+$ kubectl get druid -n demo druid-with-monitoring
+NAME TYPE VERSION STATUS AGE
+druid-with-monitoring kubedb.com/v1alpha2 28.0.1 Ready 31s
+```
+
+KubeDB will create a separate stats service with name `{Druid crd name}-stats` for monitoring purpose.
+
+```bash
+$ kubectl get svc -n demo --selector="app.kubernetes.io/instance=druid-with-monitoring"
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+druid-with-monitoring-brokers ClusterIP 10.96.28.252 8082/TCP 2m13s
+druid-with-monitoring-coordinators ClusterIP 10.96.52.186 8081/TCP 2m13s
+druid-with-monitoring-pods ClusterIP None 8081/TCP,8090/TCP,8083/TCP,8091/TCP,8082/TCP,8888/TCP 2m13s
+druid-with-monitoring-routers ClusterIP 10.96.134.202 8888/TCP 2m13s
+druid-with-monitoring-stats ClusterIP 10.96.222.96 56790/TCP 2m13s
+```
+
+Here, `druid-with-monitoring-stats` service has been created for monitoring purpose. Let's describe the service.
+
+```bash
+$ kubectl describe svc -n demo druid-with-monitoring-stats
+Name: druid-with-monitoring-stats
+Namespace: demo
+Labels: app.kubernetes.io/component=database
+ app.kubernetes.io/instance=druid-with-monitoring
+ app.kubernetes.io/managed-by=kubedb.com
+ app.kubernetes.io/name=druids.kubedb.com
+ kubedb.com/role=stats
+Annotations: monitoring.appscode.com/agent: prometheus.io/builtin
+ prometheus.io/path: /metrics
+ prometheus.io/port: 56790
+ prometheus.io/scrape: true
+Selector: app.kubernetes.io/instance=druid-with-monitoring,app.kubernetes.io/managed-by=kubedb.com,app.kubernetes.io/name=druids.kubedb.com
+Type: ClusterIP
+IP Family Policy: SingleStack
+IP Families: IPv4
+IP: 10.96.222.96
+IPs: 10.96.222.96
+Port: metrics 56790/TCP
+TargetPort: metrics/TCP
+Endpoints: 10.244.0.31:56790,10.244.0.33:56790
+Session Affinity: None
+Events:
+```
+
+You can see that the service contains the following annotations.
+
+```bash
+prometheus.io/path: /metrics
+prometheus.io/port: 56790
+prometheus.io/scrape: true
+```
+
+The Prometheus server will discover the service endpoints using these specifications and will scrape metrics from the exporter.
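+
+Before wiring up Prometheus, you can optionally confirm that the exporter is actually serving metrics through the stats service. The following is a rough check, not part of the official workflow: port-forward the stats service and fetch the `/metrics` path advertised in the annotations above.
+
+```bash
+# In one terminal, forward the stats service port locally
+$ kubectl port-forward -n demo svc/druid-with-monitoring-stats 56790
+
+# In another terminal, fetch the metrics in Prometheus text format
+$ curl -s http://localhost:56790/metrics | head
+```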
+
+## Configure Prometheus Server
+
+Now, we have to configure a Prometheus scraping job to scrape the metrics using this service. We are going to configure scraping job similar to this [kubernetes-service-endpoints](https://github.com/appscode/third-party-tools/tree/master/monitoring/prometheus/builtin#kubernetes-service-endpoints) job that scrapes metrics from endpoints of a service.
+
+Let's configure a Prometheus scraping job to collect metrics from this service.
+
+```yaml
+- job_name: 'kubedb-databases'
+ honor_labels: true
+ scheme: http
+ kubernetes_sd_configs:
+ - role: endpoints
+ # by default Prometheus server select all Kubernetes services as possible target.
+ # relabel_config is used to filter only desired endpoints
+ relabel_configs:
+  # keep only those services that have the "prometheus.io/scrape", "prometheus.io/path" and "prometheus.io/port" annotations
+ - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape, __meta_kubernetes_service_annotation_prometheus_io_port]
+ separator: ;
+ regex: true;(.*)
+ action: keep
+  # currently, KubeDB supported databases use only the "http" scheme to export metrics. so, drop any service that uses the "https" scheme.
+ - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
+ action: drop
+ regex: https
+ # only keep the stats services created by KubeDB for monitoring purpose which has "-stats" suffix
+ - source_labels: [__meta_kubernetes_service_name]
+ separator: ;
+ regex: (.*-stats)
+ action: keep
+  # services created by KubeDB will have "app.kubernetes.io/name" and "app.kubernetes.io/instance" labels. keep only those services that have these labels.
+ - source_labels: [__meta_kubernetes_service_label_app_kubernetes_io_name]
+ separator: ;
+ regex: (.*)
+ action: keep
+ # read the metric path from "prometheus.io/path: " annotation
+ - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
+ action: replace
+ target_label: __metrics_path__
+ regex: (.+)
+ # read the port from "prometheus.io/port: " annotation and update scraping address accordingly
+ - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
+ action: replace
+ target_label: __address__
+ regex: ([^:]+)(?::\d+)?;(\d+)
+ replacement: $1:$2
+ # add service namespace as label to the scraped metrics
+ - source_labels: [__meta_kubernetes_namespace]
+ separator: ;
+ regex: (.*)
+ target_label: namespace
+ replacement: $1
+ action: replace
+ # add service name as a label to the scraped metrics
+ - source_labels: [__meta_kubernetes_service_name]
+ separator: ;
+ regex: (.*)
+ target_label: service
+ replacement: $1
+ action: replace
+ # add stats service's labels to the scraped metrics
+ - action: labelmap
+ regex: __meta_kubernetes_service_label_(.+)
+```
+
+### Configure Existing Prometheus Server
+
+If you already have a Prometheus server running, you have to add the above scraping job to the `ConfigMap` used to configure the Prometheus server. Then, you have to restart it for the updated configuration to take effect.
+
+>If you don't use a persistent volume for Prometheus storage, you will lose your previously scraped data on restart.
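+
+As a rough sketch, if your existing Prometheus server is deployed as a Deployment named `prometheus` and configured through a ConfigMap named `prometheus-config` in the `monitoring` namespace (adjust these names to match your setup), the update could look like this:
+
+```bash
+# Append the scraping job above under the scrape_configs section
+$ kubectl edit configmap -n monitoring prometheus-config
+
+# Restart the Prometheus deployment so it reloads the updated configuration
+$ kubectl rollout restart -n monitoring deployment/prometheus
+```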
+
+### Deploy New Prometheus Server
+
+If you don't have any existing Prometheus server running, you have to deploy one. In this section, we are going to deploy a Prometheus server in `monitoring` namespace to collect metrics using this stats service.
+
+**Create ConfigMap:**
+
+At first, create a ConfigMap with the scraping configuration. Below is the YAML of the ConfigMap that we are going to create in this tutorial.
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: prometheus-config
+ labels:
+ app: prometheus-demo
+ namespace: monitoring
+data:
+ prometheus.yml: |-
+ global:
+ scrape_interval: 5s
+ evaluation_interval: 5s
+ scrape_configs:
+ - job_name: 'kubedb-databases'
+ honor_labels: true
+ scheme: http
+ kubernetes_sd_configs:
+ - role: endpoints
+ # by default Prometheus server select all Kubernetes services as possible target.
+ # relabel_config is used to filter only desired endpoints
+ relabel_configs:
+        # keep only those services that have the "prometheus.io/scrape", "prometheus.io/path" and "prometheus.io/port" annotations
+ - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape, __meta_kubernetes_service_annotation_prometheus_io_port]
+ separator: ;
+ regex: true;(.*)
+ action: keep
+        # currently, KubeDB supported databases use only the "http" scheme to export metrics. so, drop any service that uses the "https" scheme.
+ - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
+ action: drop
+ regex: https
+ # only keep the stats services created by KubeDB for monitoring purpose which has "-stats" suffix
+ - source_labels: [__meta_kubernetes_service_name]
+ separator: ;
+ regex: (.*-stats)
+ action: keep
+        # services created by KubeDB will have "app.kubernetes.io/name" and "app.kubernetes.io/instance" labels. keep only those services that have these labels.
+ - source_labels: [__meta_kubernetes_service_label_app_kubernetes_io_name]
+ separator: ;
+ regex: (.*)
+ action: keep
+ # read the metric path from "prometheus.io/path: " annotation
+ - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
+ action: replace
+ target_label: __metrics_path__
+ regex: (.+)
+ # read the port from "prometheus.io/port: " annotation and update scraping address accordingly
+ - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
+ action: replace
+ target_label: __address__
+ regex: ([^:]+)(?::\d+)?;(\d+)
+ replacement: $1:$2
+ # add service namespace as label to the scraped metrics
+ - source_labels: [__meta_kubernetes_namespace]
+ separator: ;
+ regex: (.*)
+ target_label: namespace
+ replacement: $1
+ action: replace
+ # add service name as a label to the scraped metrics
+ - source_labels: [__meta_kubernetes_service_name]
+ separator: ;
+ regex: (.*)
+ target_label: service
+ replacement: $1
+ action: replace
+ # add stats service's labels to the scraped metrics
+ - action: labelmap
+ regex: __meta_kubernetes_service_label_(.+)
+```
+
+Let's create above `ConfigMap`,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/monitoring/builtin-prometheus/prom-config.yaml
+configmap/prometheus-config created
+```
+
+**Create RBAC:**
+
+If you are using an RBAC enabled cluster, you have to give the necessary RBAC permissions to Prometheus. Let's create the necessary RBAC resources for Prometheus,
+
+```bash
+$ kubectl apply -f https://github.com/appscode/third-party-tools/raw/master/monitoring/prometheus/builtin/artifacts/rbac.yaml
+clusterrole.rbac.authorization.k8s.io/prometheus created
+serviceaccount/prometheus created
+clusterrolebinding.rbac.authorization.k8s.io/prometheus created
+```
+
+>YAML for the RBAC resources created above can be found [here](https://github.com/appscode/third-party-tools/blob/master/monitoring/prometheus/builtin/artifacts/rbac.yaml).
+
+**Deploy Prometheus:**
+
+Now, we are ready to deploy the Prometheus server. We are going to use the following [deployment](https://github.com/appscode/third-party-tools/blob/master/monitoring/prometheus/builtin/artifacts/deployment.yaml) to deploy the Prometheus server.
+
+Let's deploy the Prometheus server.
+
+```bash
+$ kubectl apply -f https://github.com/appscode/third-party-tools/raw/master/monitoring/prometheus/builtin/artifacts/deployment.yaml
+deployment.apps/prometheus created
+```
+
+### Verify Monitoring Metrics
+
+Prometheus server is listening to port `9090`. We are going to use [port forwarding](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) to access Prometheus dashboard.
+
+At first, let's check if the Prometheus pod is in `Running` state.
+
+```bash
+$ kubectl get pod -n monitoring -l=app=prometheus
+NAME READY STATUS RESTARTS AGE
+prometheus-7bd56c6865-8dlpv 1/1 Running 0 28s
+```
+
+Now, run following command on a separate terminal to forward 9090 port of `prometheus-7bd56c6865-8dlpv` pod,
+
+```bash
+$ kubectl port-forward -n monitoring prometheus-7bd56c6865-8dlpv 9090
+Forwarding from 127.0.0.1:9090 -> 9090
+Forwarding from [::1]:9090 -> 9090
+```
+
+Now, we can access the dashboard at `localhost:9090`. Open [http://localhost:9090](http://localhost:9090) in your browser. You should see the endpoint of `druid-with-monitoring-stats` service as one of the targets.
+
+
+
+
+
+Check the labels marked with the red rectangle. These labels confirm that the metrics are coming from the `Druid` cluster `druid-with-monitoring` through the stats service `druid-with-monitoring-stats`.
+
+Now, you can view the collected metrics and create a graph from the homepage of this Prometheus dashboard. You can also use this Prometheus server as a data source for [Grafana](https://grafana.com/) and create a beautiful dashboard with collected metrics.
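+
+For a quick check that samples are actually being ingested, you can query the Prometheus HTTP API through the same port-forward. The query below is only an example; the `namespace` and `service` labels are the ones added by the relabeling rules we configured earlier.
+
+```bash
+# Ask Prometheus whether the stats service endpoint is being scraped
+$ curl -s http://localhost:9090/api/v1/query \
+    --data-urlencode 'query=up{namespace="demo",service="druid-with-monitoring-stats"}'
+```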
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run the following commands,
+
+```bash
+kubectl delete -n demo druid/druid-with-monitoring
+
+kubectl delete -n monitoring deployment.apps/prometheus
+
+kubectl delete -n monitoring clusterrole.rbac.authorization.k8s.io/prometheus
+kubectl delete -n monitoring serviceaccount/prometheus
+kubectl delete -n monitoring clusterrolebinding.rbac.authorization.k8s.io/prometheus
+
+kubectl delete ns demo
+kubectl delete ns monitoring
+```
+
+## Next Steps
+
+- Learn how to configure [Druid Topology](/docs/guides/druid/clustering/overview/index.md).
+- Monitor your Druid database with KubeDB using [`out-of-the-box` Prometheus operator](/docs/guides/druid/monitoring/using-prometheus-operator.md).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).
diff --git a/docs/guides/druid/monitoring/using-prometheus-operator.md b/docs/guides/druid/monitoring/using-prometheus-operator.md
new file mode 100644
index 0000000000..abd4bb1ee6
--- /dev/null
+++ b/docs/guides/druid/monitoring/using-prometheus-operator.md
@@ -0,0 +1,343 @@
+---
+title: Monitor Druid using Prometheus Operator
+menu:
+ docs_{{ .version }}:
+ identifier: guides-druid-monitoring-operator-monitoring
+ name: Prometheus Operator
+ parent: guides-druid-monitoring
+ weight: 15
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Monitoring Druid Using Prometheus operator
+
+[Prometheus operator](https://github.com/prometheus-operator/prometheus-operator) provides a simple and Kubernetes native way to deploy and configure a Prometheus server. This tutorial will show you how to use the Prometheus operator to monitor a Druid database deployed with KubeDB.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one locally by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- To learn how Prometheus monitoring works with KubeDB in general, please visit [here](/docs/guides/druid/monitoring/overview.md).
+
+- We need a [Prometheus operator](https://github.com/prometheus-operator/prometheus-operator) instance running. If you don't already have a running instance, you can deploy one using this helm chart [here](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack).
+
+- To keep Prometheus resources isolated, we are going to use a separate namespace called `monitoring` to deploy the prometheus operator helm chart. Alternatively, you can use `--create-namespace` flag while deploying prometheus. We are going to deploy database in `demo` namespace.
+
+ ```bash
+ $ kubectl create ns monitoring
+ namespace/monitoring created
+
+ $ kubectl create ns demo
+ namespace/demo created
+ ```
+
+
+
+> Note: YAML files used in this tutorial are stored in [docs/examples/druid](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/druid) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Find out required labels for ServiceMonitor
+
+We need to know the labels used to select the `ServiceMonitor` by a `Prometheus` crd. We are going to provide these labels in the `spec.monitor.prometheus.serviceMonitor.labels` field of the Druid crd so that KubeDB creates the `ServiceMonitor` object accordingly.
+
+At first, let's find out the available Prometheus server in our cluster.
+
+```bash
+$ kubectl get prometheus --all-namespaces
+NAMESPACE NAME VERSION DESIRED READY RECONCILED AVAILABLE AGE
+monitoring prometheus-kube-prometheus-prometheus v2.42.0 1 1 True True 2d23h
+```
+
+> If you don't have any Prometheus server running in your cluster, deploy one following the guide specified in **Before You Begin** section.
+
+Now, let's view the YAML of the available Prometheus server `prometheus` in `monitoring` namespace.
+
+```bash
+$ kubectl get prometheus -n monitoring prometheus-kube-prometheus-prometheus -o yaml
+apiVersion: monitoring.coreos.com/v1
+kind: Prometheus
+metadata:
+ annotations:
+ meta.helm.sh/release-name: prometheus
+ meta.helm.sh/release-namespace: monitoring
+ creationTimestamp: "2023-03-27T07:56:04Z"
+ generation: 1
+ labels:
+ app: kube-prometheus-stack-prometheus
+ app.kubernetes.io/instance: prometheus
+ app.kubernetes.io/managed-by: Helm
+ app.kubernetes.io/part-of: kube-prometheus-stack
+ app.kubernetes.io/version: 45.7.1
+ chart: kube-prometheus-stack-45.7.1
+ heritage: Helm
+ release: prometheus
+ name: prometheus-kube-prometheus-prometheus
+ namespace: monitoring
+ resourceVersion: "638797"
+ uid: 0d1e7b8a-44ae-4794-ab45-95a5d7ae7f91
+spec:
+ alerting:
+ alertmanagers:
+ - apiVersion: v2
+ name: prometheus-kube-prometheus-alertmanager
+ namespace: monitoring
+ pathPrefix: /
+ port: http-web
+ enableAdminAPI: false
+ evaluationInterval: 30s
+ externalUrl: http://prometheus-kube-prometheus-prometheus.monitoring:9090
+ hostNetwork: false
+ image: quay.io/prometheus/prometheus:v2.42.0
+ listenLocal: false
+ logFormat: logfmt
+ logLevel: info
+ paused: false
+ podMonitorNamespaceSelector: {}
+ podMonitorSelector:
+ matchLabels:
+ release: prometheus
+ portName: http-web
+ probeNamespaceSelector: {}
+ probeSelector:
+ matchLabels:
+ release: prometheus
+ replicas: 1
+ retention: 10d
+ routePrefix: /
+ ruleNamespaceSelector: {}
+ ruleSelector:
+ matchLabels:
+ release: prometheus
+ scrapeInterval: 30s
+ securityContext:
+ fsGroup: 2000
+ runAsGroup: 2000
+ runAsNonRoot: true
+ runAsUser: 1000
+ serviceAccountName: prometheus-kube-prometheus-prometheus
+ serviceMonitorNamespaceSelector: {}
+ serviceMonitorSelector:
+ matchLabels:
+ release: prometheus
+ shards: 1
+ version: v2.42.0
+ walCompression: true
+status:
+ availableReplicas: 1
+ conditions:
+ - lastTransitionTime: "2023-03-27T07:56:23Z"
+ observedGeneration: 1
+ status: "True"
+ type: Available
+ - lastTransitionTime: "2023-03-30T03:39:18Z"
+ observedGeneration: 1
+ status: "True"
+ type: Reconciled
+ paused: false
+ replicas: 1
+ shardStatuses:
+ - availableReplicas: 1
+ replicas: 1
+ shardID: "0"
+ unavailableReplicas: 0
+ updatedReplicas: 1
+ unavailableReplicas: 0
+ updatedReplicas: 1
+```
+
+Notice the `spec.serviceMonitorSelector` section. Here, `release: prometheus` label is used to select `ServiceMonitor` crd. So, we are going to use this label in `spec.monitor.prometheus.serviceMonitor.labels` field of Druid crd.
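+
+Instead of reading through the full YAML, you can also pull out just the selector with `jsonpath`. This is merely a convenience; the release name and namespace below match the kube-prometheus-stack installation shown above.
+
+```bash
+# Print only the ServiceMonitor selector of the Prometheus object
+$ kubectl get prometheus -n monitoring prometheus-kube-prometheus-prometheus \
+    -o jsonpath='{.spec.serviceMonitorSelector}'
+```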
+
+## Deploy Druid with Monitoring Enabled
+
+At first, let's deploy a Druid database with monitoring enabled. Below is the Druid object that we are going to create.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Druid
+metadata:
+ name: druid-with-monitoring
+ namespace: demo
+spec:
+ version: 28.0.1
+ deepStorage:
+ type: s3
+ configSecret:
+ name: deep-storage-config
+ topology:
+ routers:
+ replicas: 1
+ monitor:
+ agent: prometheus.io/operator
+ prometheus:
+ serviceMonitor:
+ labels:
+ release: prometheus
+ interval: 10s
+ deletionPolicy: WipeOut
+```
+
+Here,
+
+- `monitor.agent: prometheus.io/operator` indicates that we are going to monitor this server using Prometheus operator.
+- `monitor.prometheus.serviceMonitor.labels` specifies that KubeDB should create `ServiceMonitor` with these labels.
+- `monitor.prometheus.interval` indicates that the Prometheus server should scrape metrics from this database with 10 seconds interval.
+
+Let's create the druid object that we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/druid/monitoring/yamls/druid-with-monitoring.yaml
+druid.kubedb.com/druid-with-monitoring created
+```
+
+Now, wait for the database to go into `Ready` state.
+
+```bash
+$ kubectl get dr -n demo druid-with-monitoring
+NAME                    TYPE                  VERSION   STATUS   AGE
+druid-with-monitoring   kubedb.com/v1alpha2   28.0.1    Ready    2m24s
+```
+
+KubeDB will create a separate stats service with name `{Druid crd name}-stats` for monitoring purpose.
+
+```bash
+$ kubectl get svc -n demo --selector="app.kubernetes.io/instance=druid-with-monitoring"
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+druid-with-monitoring-brokers ClusterIP 10.96.28.252 8082/TCP 2m13s
+druid-with-monitoring-coordinators ClusterIP 10.96.52.186 8081/TCP 2m13s
+druid-with-monitoring-pods ClusterIP None 8081/TCP,8090/TCP,8083/TCP,8091/TCP,8082/TCP,8888/TCP 2m13s
+druid-with-monitoring-routers ClusterIP 10.96.134.202 8888/TCP 2m13s
+druid-with-monitoring-stats ClusterIP 10.96.222.96 56790/TCP 2m13s
+```
+
+Here, `druid-with-monitoring-stats` service has been created for monitoring purpose.
+
+Let's describe this stats service.
+
+```bash
+$ kubectl describe svc -n demo druid-with-monitoring-stats
+Name: druid-with-monitoring-stats
+Namespace: demo
+Labels: app.kubernetes.io/component=database
+ app.kubernetes.io/instance=druid-with-monitoring
+ app.kubernetes.io/managed-by=kubedb.com
+ app.kubernetes.io/name=druids.kubedb.com
+ kubedb.com/role=stats
+Annotations: monitoring.appscode.com/agent: prometheus.io/operator
+Selector: app.kubernetes.io/instance=druid-with-monitoring,app.kubernetes.io/managed-by=kubedb.com,app.kubernetes.io/name=druids.kubedb.com
+Type: ClusterIP
+IP Family Policy: SingleStack
+IP Families: IPv4
+IP: 10.96.29.174
+IPs: 10.96.29.174
+Port: metrics 9104/TCP
+TargetPort: metrics/TCP
+Endpoints: 10.244.0.68:9104,10.244.0.71:9104,10.244.0.72:9104 + 2 more...
+Session Affinity: None
+Events:
+```
+
+Notice the `Labels` and `Port` fields. `ServiceMonitor` will use this information to target its endpoints.
+
+KubeDB will also create a `ServiceMonitor` crd in the `demo` namespace that selects the endpoints of the `druid-with-monitoring-stats` service. Verify that the `ServiceMonitor` crd has been created.
+
+```bash
+$ kubectl get servicemonitor -n demo
+NAME AGE
+druid-with-monitoring-stats 4m49s
+```
+
+Let's verify that the `ServiceMonitor` has the label that we had specified in `spec.monitor` section of Druid crd.
+
+```bash
+$ kubectl get servicemonitor -n demo druid-with-monitoring-stats -o yaml
+apiVersion: monitoring.coreos.com/v1
+kind: ServiceMonitor
+metadata:
+ creationTimestamp: "2024-11-01T10:25:14Z"
+ generation: 1
+ labels:
+ app.kubernetes.io/component: database
+ app.kubernetes.io/instance: druid-with-monitoring
+ app.kubernetes.io/managed-by: kubedb.com
+ app.kubernetes.io/name: druids.kubedb.com
+ release: prometheus
+ name: druid-with-monitoring-stats
+ namespace: demo
+ ownerReferences:
+ - apiVersion: v1
+ blockOwnerDeletion: true
+ controller: true
+ kind: Service
+ name: druid-with-monitoring-stats
+ uid: b3ae48f3-476e-4cec-95f6-f8e28538b605
+ resourceVersion: "597152"
+ uid: ff385538-eba5-48a3-91c1-1a4b15f3018a
+spec:
+ endpoints:
+ - honorLabels: true
+ interval: 10s
+ path: /metrics
+ port: metrics
+ namespaceSelector:
+ matchNames:
+ - demo
+ selector:
+ matchLabels:
+ app.kubernetes.io/component: database
+ app.kubernetes.io/instance: druid-with-monitoring
+ app.kubernetes.io/managed-by: kubedb.com
+ app.kubernetes.io/name: druids.kubedb.com
+ kubedb.com/role: stats
+```
+
+Notice that the `ServiceMonitor` has label `release: prometheus` that we had specified in Druid crd.
+
+Also notice that the `ServiceMonitor` has a selector which matches the labels we have seen in the `druid-with-monitoring-stats` service. It also targets the `metrics` port that we have seen in the stats service.
+
+## Verify Monitoring Metrics
+
+At first, let's find out the respective Prometheus pod for `prometheus` Prometheus server.
+
+```bash
+$ kubectl get pod -n monitoring -l=app.kubernetes.io/name=prometheus
+NAME READY STATUS RESTARTS AGE
+prometheus-prometheus-kube-prometheus-prometheus-0 2/2 Running 8 (4h27m ago) 3d
+```
+
+Prometheus server is listening to port `9090` of `prometheus-prometheus-kube-prometheus-prometheus-0` pod. We are going to use [port forwarding](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) to access Prometheus dashboard.
+
+Run the following command in a separate terminal to forward port 9090 of the `prometheus-kube-prometheus-prometheus` service, which is pointing to the prometheus pod,
+
+```bash
+$ kubectl port-forward -n monitoring svc/prometheus-kube-prometheus-prometheus 9090
+Forwarding from 127.0.0.1:9090 -> 9090
+Forwarding from [::1]:9090 -> 9090
+```
+
+Now, we can access the dashboard at `localhost:9090`. Open [http://localhost:9090](http://localhost:9090) in your browser. You should see `metrics` endpoint of `druid-with-monitoring-stats` service as one of the targets.
+
+
+
+
+
+Check the `endpoint` and `service` labels. They verify that the target is our expected database. Now, you can view the collected metrics and create a graph from the homepage of this Prometheus dashboard. You can also use this Prometheus server as a data source for [Grafana](https://grafana.com/) and create a beautiful dashboard with the collected metrics.
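+
+If you prefer the command line over the browser, you can also query the Prometheus HTTP API over the same port-forward. The snippet below is a minimal sketch that assumes the port-forward from the previous step is still running and that `jq` is installed; the exact label names attached to the target may vary slightly with your Prometheus operator version.
+
+```bash
+# Print the health of the active scrape targets that belong to the stats service.
+$ curl -s http://localhost:9090/api/v1/targets \
+  | jq -r '.data.activeTargets[]
+           | select(.labels.service=="druid-with-monitoring-stats")
+           | "\(.labels.endpoint) => \(.health)"'
+```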
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run the following commands:
+
+```bash
+kubectl delete -n demo dr/druid-with-monitoring
+kubectl delete ns demo
+```
+
+## Next Steps
+
+- Learn how to use KubeDB to run Apache Druid cluster [here](/docs/guides/druid/README.md).
+- Deploy a [dedicated cluster](/docs/guides/druid/clustering/overview/index.md) for Apache Druid.
+[//]: # (- Deploy [combined cluster](/docs/guides/druid/clustering/combined-cluster/index.md) for Apache Druid)
+- Detail concepts of [DruidVersion object](/docs/guides/druid/concepts/druidversion.md).
+[//]: # (- Learn to use KubeDB managed Druid objects using [CLIs](/docs/guides/druid/cli/cli.md).)
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).
\ No newline at end of file
diff --git a/docs/guides/druid/monitoring/yamls/deep-storage-config.yaml b/docs/guides/druid/monitoring/yamls/deep-storage-config.yaml
new file mode 100644
index 0000000000..3612595828
--- /dev/null
+++ b/docs/guides/druid/monitoring/yamls/deep-storage-config.yaml
@@ -0,0 +1,16 @@
+apiVersion: v1
+kind: Secret
+metadata:
+ name: deep-storage-config
+ namespace: demo
+stringData:
+ druid.storage.type: "s3"
+ druid.storage.bucket: "druid"
+ druid.storage.baseKey: "druid/segments"
+ druid.s3.accessKey: "minio"
+ druid.s3.secretKey: "minio123"
+ druid.s3.protocol: "http"
+ druid.s3.enablePathStyleAccess: "true"
+ druid.s3.endpoint.signingRegion: "us-east-1"
+ druid.s3.endpoint.url: "http://myminio-hl.demo.svc.cluster.local:9000/"
+
diff --git a/docs/guides/druid/monitoring/yamls/druid-monitoring-builtin.yaml b/docs/guides/druid/monitoring/yamls/druid-monitoring-builtin.yaml
new file mode 100644
index 0000000000..4962c3c536
--- /dev/null
+++ b/docs/guides/druid/monitoring/yamls/druid-monitoring-builtin.yaml
@@ -0,0 +1,24 @@
+apiVersion: kubedb.com/v1alpha2
+kind: Druid
+metadata:
+ name: druid-with-monitoring
+ namespace: demo
+spec:
+ version: 28.0.1
+ deepStorage:
+ type: s3
+ configSecret:
+ name: deep-storage-config
+ topology:
+ routers:
+ replicas: 1
+ monitor:
+ agent: prometheus.io/builtin
+ prometheus:
+ exporter:
+ port: 56790
+ serviceMonitor:
+ labels:
+ release: prometheus
+ interval: 10s
+ deletionPolicy: WipeOut
diff --git a/docs/guides/druid/monitoring/yamls/druid-with-monitoring.yaml b/docs/guides/druid/monitoring/yamls/druid-with-monitoring.yaml
new file mode 100644
index 0000000000..aa91054f8f
--- /dev/null
+++ b/docs/guides/druid/monitoring/yamls/druid-with-monitoring.yaml
@@ -0,0 +1,23 @@
+apiVersion: kubedb.com/v1alpha2
+kind: Druid
+metadata:
+ name: druid-with-monitoring
+ namespace: demo
+spec:
+ version: 28.0.1
+ deepStorage:
+ type: s3
+ configSecret:
+ name: deep-storage-config
+ topology:
+ routers:
+ replicas: 1
+ monitor:
+ agent: prometheus.io/operator
+ prometheus:
+ serviceMonitor:
+ labels:
+ release: prometheus
+ interval: 10s
+ deletionPolicy: WipeOut
+
diff --git a/docs/guides/druid/quickstart/_index.md b/docs/guides/druid/quickstart/_index.md
index c99d5aad28..1be2a3045b 100644
--- a/docs/guides/druid/quickstart/_index.md
+++ b/docs/guides/druid/quickstart/_index.md
@@ -5,6 +5,6 @@ menu:
identifier: guides-druid-quickstart
name: Quickstart
parent: guides-druid
- weight: 15
+ weight: 10
menu_name: docs_{{ .version }}
---
diff --git a/docs/guides/druid/quickstart/overview/index.md b/docs/guides/druid/quickstart/guide/index.md
similarity index 97%
rename from docs/guides/druid/quickstart/overview/index.md
rename to docs/guides/druid/quickstart/guide/index.md
index ee080604c8..4318742d55 100644
--- a/docs/guides/druid/quickstart/overview/index.md
+++ b/docs/guides/druid/quickstart/guide/index.md
@@ -2,8 +2,8 @@
title: Druid Quickstart
menu:
docs_{{ .version }}:
- identifier: guides-druid-quickstart-overview
- name: Overview
+ identifier: guides-druid-quickstart-guide
+ name: Druid Quickstart
parent: guides-druid-quickstart
weight: 10
menu_name: docs_{{ .version }}
@@ -24,7 +24,7 @@ This tutorial will show you how to use KubeDB to run an [Apache Druid](https://d
At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
-Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/setup/README.md) and make sure install with helm command including the flags `--set global.featureGates.Druid=true` to ensure **Druid CRD** and `--set global.featureGates.ZooKeeper=true` to ensure **ZooKeeper CRD** as Druid depends on ZooKeeper for external dependency.
+Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/setup/README.md), and make sure to include the flags `--set global.featureGates.Druid=true` and `--set global.featureGates.ZooKeeper=true` in the helm command to enable the **Druid CRD** and the **ZooKeeper CRD**, as Druid depends on ZooKeeper as an external dependency.
To keep things isolated, this tutorial uses a separate namespace called `demo` throughout this tutorial.
@@ -39,7 +39,7 @@ demo Active 9s
> Note: YAML files used in this tutorial are stored in [guides/druid/quickstart/overview/yamls](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/druid/quickstart/overview/yamls) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
-> We have designed this tutorial to demonstrate a production setup of KubeDB managed Apache Druid. If you just want to try out KubeDB, you can bypass some safety features following the tips [here](/docs/guides/druid/quickstart/overview/index.md#tips-for-testing).
+> We have designed this tutorial to demonstrate a production setup of KubeDB managed Apache Druid. If you just want to try out KubeDB, you can bypass some safety features following the tips [here](/docs/guides/druid/quickstart/guide/index.md#tips-for-testing).
## Find Available StorageClass
@@ -55,7 +55,7 @@ Here, we have `standard` StorageClass in our cluster from [Local Path Provisione
## Find Available DruidVersion
-When you install the KubeDB operator, it registers a CRD named [DruidVersion](/docs/guides/druid/concepts/catalog.md). The installation process comes with a set of tested DruidVersion objects. Let's check available DruidVersions by,
+When you install the KubeDB operator, it registers a CRD named [DruidVersion](/docs/guides/druid/concepts/druidversion.md). The installation process comes with a set of tested DruidVersion objects. Let's check available DruidVersions by,
```bash
$ kubectl get druidversion
@@ -194,7 +194,7 @@ Here,
Let's create the Druid CR that is shown above:
```bash
-$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/druid/quickstart/druid-quickstart.yaml
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/druid/quickstart/druid-with-monitoring.yaml
druid.kubedb.com/druid-quickstart created
```
diff --git a/docs/guides/druid/reconfigure-tls/_index.md b/docs/guides/druid/reconfigure-tls/_index.md
new file mode 100644
index 0000000000..f82762bb13
--- /dev/null
+++ b/docs/guides/druid/reconfigure-tls/_index.md
@@ -0,0 +1,10 @@
+---
+title: Reconfigure TLS/SSL
+menu:
+ docs_{{ .version }}:
+ identifier: guides-druid-reconfigure-tls
+ name: Reconfigure TLS/SSL
+ parent: guides-druid
+ weight: 120
+menu_name: docs_{{ .version }}
+---
diff --git a/docs/guides/druid/reconfigure-tls/guide.md b/docs/guides/druid/reconfigure-tls/guide.md
new file mode 100644
index 0000000000..5cd0080298
--- /dev/null
+++ b/docs/guides/druid/reconfigure-tls/guide.md
@@ -0,0 +1,1539 @@
+---
+title: Reconfigure Druid TLS/SSL Encryption
+menu:
+ docs_{{ .version }}:
+ identifier: guides-druid-reconfigure-tls-guide
+ name: Reconfigure Druid TLS/SSL Encryption
+ parent: guides-druid-reconfigure-tls
+ weight: 10
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Reconfigure Druid TLS/SSL (Transport Encryption)
+
+KubeDB supports reconfiguring TLS/SSL, i.e. adding, removing, updating, and rotating TLS/SSL certificates, for an existing Druid database via a `DruidOpsRequest`. This tutorial will show you how to use KubeDB to reconfigure TLS/SSL encryption.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Install [`cert-manager`](https://cert-manager.io/docs/installation/) v1.0.0 or later in your cluster to manage your SSL/TLS certificates.
+
+- Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/setup/README.md).
+
+- To keep things isolated, we will use a separate namespace called `demo` throughout this tutorial.
+
+ ```bash
+ $ kubectl create ns demo
+ namespace/demo created
+ ```
+
+> Note: YAML files used in this tutorial are stored in [docs/examples/druid](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/druid) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Add TLS to a Druid database
+
+Here, we are going to create a Druid database without TLS and then reconfigure it to use TLS.
+
+### Deploy Druid without TLS
+
+In this section, we are going to deploy a Druid topology cluster without TLS. In the next few sections we will reconfigure TLS using `DruidOpsRequest` CRD. Below is the YAML of the `Druid` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Druid
+metadata:
+ name: druid-cluster
+ namespace: demo
+spec:
+ version: 28.0.1
+ deepStorage:
+ type: s3
+ configSecret:
+ name: deep-storage-config
+ topology:
+ routers:
+ replicas: 1
+ deletionPolicy: Delete
+```
+
+Let's create the `Druid` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/druid/reconfigure-tls/yamls/druid-cluster.yaml
+druid.kubedb.com/druid-cluster created
+```
+
+Now, wait until `druid-cluster` has status `Ready`, i.e.,
+
+```bash
+$ kubectl get dr -n demo -w
+NAME TYPE VERSION STATUS AGE
+druid-cluster kubedb.com/v1alpha2 28.0.1 Provisioning 15s
+druid-cluster kubedb.com/v1alpha2 28.0.1 Provisioning 37s
+.
+.
+druid-cluster kubedb.com/v1alpha2 28.0.1 Ready 2m27s
+```
+
+Now, we can exec into one of the druid coordinators pods and verify from the configuration that TLS is disabled.
+
+```bash
+$ kubectl exec -it -n demo druid-cluster-coordinators-0 -- bash
+Defaulted container "druid" out of: druid, init-druid (init)
+bash-5.1$ cat conf/druid/cluster/_common/common.runtime.properties
+druid.auth.authenticator.basic.authorizerName=basic
+druid.auth.authenticator.basic.credentialsValidator.type=metadata
+druid.auth.authenticator.basic.initialAdminPassword={"type": "environment", "variable": "DRUID_ADMIN_PASSWORD"}
+druid.auth.authenticator.basic.initialInternalClientPassword=*****
+druid.auth.authenticator.basic.skipOnFailure=false
+druid.auth.authenticator.basic.type=basic
+druid.auth.authenticatorChain=["basic"]
+druid.auth.authorizer.basic.type=basic
+druid.auth.authorizers=["basic"]
+druid.emitter.logging.logLevel=info
+druid.emitter=noop
+druid.escalator.authorizerName=basic
+druid.escalator.internalClientPassword=******
+druid.escalator.internalClientUsername=druid_system
+druid.escalator.type=basic
+druid.expressions.useStrictBooleans=true
+druid.extensions.loadList=["druid-avro-extensions", "druid-kafka-indexing-service", "druid-kafka-indexing-service", "druid-datasketches", "druid-multi-stage-query", "druid-basic-security", "mysql-metadata-storage", "druid-s3-extensions"]
+druid.global.http.eagerInitialization=false
+druid.host=localhost
+druid.indexer.logs.directory=var/druid/indexing-logs
+druid.indexer.logs.type=file
+druid.indexing.doubleStorage=double
+druid.lookup.enableLookupSyncOnStartup=false
+druid.metadata.storage.connector.connectURI=jdbc:mysql://druid-cluster-mysql-metadata.demo.svc:3306/druid
+druid.metadata.storage.connector.createTables=true
+druid.metadata.storage.connector.host=localhost
+druid.metadata.storage.connector.password={"type": "environment", "variable": "DRUID_METADATA_STORAGE_PASSWORD"}
+druid.metadata.storage.connector.port=1527
+druid.metadata.storage.connector.user=root
+druid.metadata.storage.type=mysql
+druid.monitoring.monitors=["org.apache.druid.java.util.metrics.JvmMonitor", "org.apache.druid.server.metrics.ServiceStatusMonitor"]
+druid.s3.accessKey=minio
+druid.s3.enablePathStyleAccess=true
+druid.s3.endpoint.signingRegion=us-east-1
+druid.s3.endpoint.url=http://myminio-hl.demo.svc.cluster.local:9000/
+druid.s3.protocol=http
+druid.s3.secretKey=minio123
+druid.selectors.coordinator.serviceName=druid/coordinator
+druid.selectors.indexing.serviceName=druid/overlord
+druid.server.hiddenProperties=["druid.s3.accessKey","druid.s3.secretKey","druid.metadata.storage.connector.password", "password", "key", "token", "pwd"]
+druid.sql.enable=true
+druid.sql.planner.useGroupingSetForExactDistinct=true
+druid.startup.logging.logProperties=true
+druid.storage.baseKey=druid/segments
+druid.storage.bucket=druid
+druid.storage.storageDirectory=var/druid/segments
+druid.storage.type=s3
+druid.zk.paths.base=/druid
+druid.zk.service.host=druid-cluster-zk.demo.svc:2181
+druid.zk.service.pwd={"type": "environment", "variable": "DRUID_ZK_SERVICE_PASSWORD"}
+druid.zk.service.user=super
+```
+
+We can verify from the above output that TLS is disabled for this cluster, as there are no TLS/SSL related configs provided for it.
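+
+If you do not want to scan the whole file manually, an optional grep for TLS related keys should return nothing on this cluster, because no such settings have been rendered yet:
+
+```bash
+# No output (and a non-zero exit code) is expected here, since TLS is not configured yet.
+$ kubectl exec -n demo druid-cluster-coordinators-0 -c druid -- \
+    grep -iE "tls|https|keystore|truststore" conf/druid/cluster/_common/common.runtime.properties
+```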
+
+#### Verify TLS/SSL is disabled using Druid UI
+
+First port-forward the port `8888` to local machine:
+
+```bash
+$ kubectl port-forward -n demo svc/druid-cluster-routers 8888
+Forwarding from 127.0.0.1:8888 -> 8888
+Forwarding from [::1]:8888 -> 8888
+```
+
+
+Now open `http://localhost:8888` in any browser, and you will be prompted to provide the credentials of the druid database. By following the steps discussed below, you can get the credentials generated by the KubeDB operator for your Druid database.
+
+**Connection information:**
+
+- Username:
+
+ ```bash
+ $ kubectl get secret -n demo druid-cluster-admin-cred -o jsonpath='{.data.username}' | base64 -d
+ admin
+ ```
+
+- Password:
+
+ ```bash
+ $ kubectl get secret -n demo druid-cluster-admin-cred -o jsonpath='{.data.password}' | base64 -d
+ LzJtVRX5E8MorFaf
+ ```
+
+After providing the credentials correctly, you should be able to access the web console as shown below.
+
+
+
+
+
+From the above screenshot, we can see that the connection is not secure now. In other words, TLS/SSL is disabled for this druid cluster.
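+
+You can also check this from the command line. The sketch below assumes the port-forward to port `8888` is still running and uses Druid's standard `/status` endpoint; since the port serves plain HTTP, the request succeeds without presenting any certificate.
+
+```bash
+# Fetch the admin password from the KubeDB generated secret and call the router over plain HTTP.
+$ PASSWORD=$(kubectl get secret -n demo druid-cluster-admin-cred -o jsonpath='{.data.password}' | base64 -d)
+$ curl -s -u "admin:$PASSWORD" http://localhost:8888/status
+```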
+
+### Create Issuer/ ClusterIssuer
+
+Now, We are going to create an example `Issuer` that will be used to enable SSL/TLS in Druid. Alternatively, you can follow this [cert-manager tutorial](https://cert-manager.io/docs/configuration/ca/) to create your own `Issuer`.
+
+- Start off by generating a CA certificate using openssl.
+
+```bash
+$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ./ca.key -out ./ca.crt -subj "/CN=ca/O=kubedb"
+Generating a RSA private key
+................+++++
+........................+++++
+writing new private key to './ca.key'
+-----
+```
+
+- Now we are going to create a ca-secret using the certificate files that we have just generated.
+
+```bash
+$ kubectl create secret tls druid-ca \
+ --cert=ca.crt \
+ --key=ca.key \
+ --namespace=demo
+secret/druid-ca created
+```
+
+Now, let's create an `Issuer` using the `druid-ca` secret that we have just created. The `YAML` file looks like this:
+
+```yaml
+apiVersion: cert-manager.io/v1
+kind: Issuer
+metadata:
+ name: druid-ca-issuer
+ namespace: demo
+spec:
+ ca:
+ secretName: druid-ca
+```
+
+Let's apply the `YAML` file:
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/druid/reconfigure-tls/yamls/druid-issuer.yaml
+issuer.cert-manager.io/druid-ca-issuer created
+```
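+
+Before moving on, you can optionally confirm that cert-manager has picked up the CA secret and marked the issuer as ready; the `READY` column should show `True`.
+
+```bash
+$ kubectl get issuer -n demo druid-ca-issuer
+```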
+
+### Create DruidOpsRequest
+
+In order to add TLS to the druid cluster, we have to create a `DruidOpsRequest` CRO referring to the issuer we have just created. Below is the YAML of the `DruidOpsRequest` CRO that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: DruidOpsRequest
+metadata:
+ name: drops-add-tls
+ namespace: demo
+spec:
+ type: ReconfigureTLS
+ databaseRef:
+ name: druid-cluster
+ tls:
+ issuerRef:
+ name: druid-ca-issuer
+ kind: Issuer
+ apiGroup: "cert-manager.io"
+ certificates:
+ - alias: client
+ subject:
+ organizations:
+ - druid
+ organizationalUnits:
+ - client
+ timeout: 5m
+ apply: IfReady
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing reconfigure TLS operation on `druid-cluster` cluster.
+- `spec.type` specifies that we are performing `ReconfigureTLS` on druid.
+- `spec.tls.issuerRef` specifies the issuer name, kind and api group.
+- `spec.tls.certificates` specifies the certificates. You can learn more about this field from [here](/docs/guides/druid/concepts/druid.md#spectls).
+
+Let's create the `DruidOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/druid/reconfigure-tls/yamls/drops-add-tls.yaml
+druidopsrequest.ops.kubedb.com/drops-add-tls created
+```
+
+#### Verify TLS Enabled Successfully
+
+Let's wait for `DruidOpsRequest` to be `Successful`. Run the following command to watch `DruidOpsRequest` CRO,
+
+```bash
+$ kubectl get drops -n demo -w
+NAME TYPE STATUS AGE
+drops-add-tls ReconfigureTLS Progressing 39s
+drops-add-tls ReconfigureTLS Progressing 44s
+...
+...
+drops-add-tls ReconfigureTLS Successful 79s
+```
+
+We can see from the above output that the `DruidOpsRequest` has succeeded. If we describe the `DruidOpsRequest` we will get an overview of the steps that were followed.
+
+```bash
+$ kubectl describe druidopsrequest -n demo drops-add-tls
+Name: drops-add-tls
+Namespace: demo
+Labels:
+Annotations:
+API Version: ops.kubedb.com/v1alpha1
+Kind: DruidOpsRequest
+Metadata:
+ Creation Timestamp: 2024-10-28T09:43:13Z
+ Generation: 1
+ Managed Fields:
+ API Version: ops.kubedb.com/v1alpha1
+ Fields Type: FieldsV1
+ fieldsV1:
+ f:metadata:
+ f:annotations:
+ .:
+ f:kubectl.kubernetes.io/last-applied-configuration:
+ f:spec:
+ .:
+ f:apply:
+ f:databaseRef:
+ f:timeout:
+ f:tls:
+ .:
+ f:certificates:
+ f:issuerRef:
+ f:type:
+ Manager: kubectl-client-side-apply
+ Operation: Update
+ Time: 2024-10-28T09:43:13Z
+ API Version: ops.kubedb.com/v1alpha1
+ Fields Type: FieldsV1
+ fieldsV1:
+ f:status:
+ .:
+ f:conditions:
+ f:observedGeneration:
+ f:phase:
+ Manager: kubedb-ops-manager
+ Operation: Update
+ Subresource: status
+ Time: 2024-10-28T09:44:32Z
+ Resource Version: 409889
+ UID: b7f563c4-4773-49e9-aba2-17497e66f5f8
+Spec:
+ Apply: IfReady
+ Database Ref:
+ Name: druid-cluster
+ Timeout: 5m
+ Tls:
+ Certificates:
+ Alias: client
+ Subject:
+ Organizational Units:
+ client
+ Organizations:
+ druid
+ Issuer Ref:
+ API Group: cert-manager.io
+ Kind: Issuer
+ Name: druid-ca-issuer
+ Type: ReconfigureTLS
+Status:
+ Conditions:
+ Last Transition Time: 2024-10-28T09:43:13Z
+ Message: Druid ops-request has started to reconfigure tls for druid nodes
+ Observed Generation: 1
+ Reason: ReconfigureTLS
+ Status: True
+ Type: ReconfigureTLS
+ Last Transition Time: 2024-10-28T09:43:26Z
+ Message: Successfully synced all certificates
+ Observed Generation: 1
+ Reason: CertificateSynced
+ Status: True
+ Type: CertificateSynced
+ Last Transition Time: 2024-10-28T09:43:21Z
+ Message: get certificate; ConditionStatus:True
+ Observed Generation: 1
+ Status: True
+ Type: GetCertificate
+ Last Transition Time: 2024-10-28T09:43:21Z
+ Message: check ready condition; ConditionStatus:True
+ Observed Generation: 1
+ Status: True
+ Type: CheckReadyCondition
+ Last Transition Time: 2024-10-28T09:43:21Z
+ Message: issuing condition; ConditionStatus:True
+ Observed Generation: 1
+ Status: True
+ Type: IssuingCondition
+ Last Transition Time: 2024-10-28T09:43:31Z
+ Message: successfully reconciled the Druid with tls configuration
+ Observed Generation: 1
+ Reason: UpdatePetSets
+ Status: True
+ Type: UpdatePetSets
+ Last Transition Time: 2024-10-28T09:44:32Z
+ Message: Successfully restarted all nodes
+ Observed Generation: 1
+ Reason: RestartNodes
+ Status: True
+ Type: RestartNodes
+ Last Transition Time: 2024-10-28T09:43:37Z
+ Message: get pod; ConditionStatus:True; PodName:druid-cluster-historicals-0
+ Observed Generation: 1
+ Status: True
+ Type: GetPod--druid-cluster-historicals-0
+ Last Transition Time: 2024-10-28T09:43:37Z
+ Message: evict pod; ConditionStatus:True; PodName:druid-cluster-historicals-0
+ Observed Generation: 1
+ Status: True
+ Type: EvictPod--druid-cluster-historicals-0
+ Last Transition Time: 2024-10-28T09:43:47Z
+ Message: check pod running; ConditionStatus:True; PodName:druid-cluster-historicals-0
+ Observed Generation: 1
+ Status: True
+ Type: CheckPodRunning--druid-cluster-historicals-0
+ Last Transition Time: 2024-10-28T09:43:52Z
+ Message: get pod; ConditionStatus:True; PodName:druid-cluster-middlemanagers-0
+ Observed Generation: 1
+ Status: True
+ Type: GetPod--druid-cluster-middlemanagers-0
+ Last Transition Time: 2024-10-28T09:43:52Z
+ Message: evict pod; ConditionStatus:True; PodName:druid-cluster-middlemanagers-0
+ Observed Generation: 1
+ Status: True
+ Type: EvictPod--druid-cluster-middlemanagers-0
+ Last Transition Time: 2024-10-28T09:43:57Z
+ Message: check pod running; ConditionStatus:True; PodName:druid-cluster-middlemanagers-0
+ Observed Generation: 1
+ Status: True
+ Type: CheckPodRunning--druid-cluster-middlemanagers-0
+ Last Transition Time: 2024-10-28T09:44:02Z
+ Message: get pod; ConditionStatus:True; PodName:druid-cluster-brokers-0
+ Observed Generation: 1
+ Status: True
+ Type: GetPod--druid-cluster-brokers-0
+ Last Transition Time: 2024-10-28T09:44:02Z
+ Message: evict pod; ConditionStatus:True; PodName:druid-cluster-brokers-0
+ Observed Generation: 1
+ Status: True
+ Type: EvictPod--druid-cluster-brokers-0
+ Last Transition Time: 2024-10-28T09:44:07Z
+ Message: check pod running; ConditionStatus:True; PodName:druid-cluster-brokers-0
+ Observed Generation: 1
+ Status: True
+ Type: CheckPodRunning--druid-cluster-brokers-0
+ Last Transition Time: 2024-10-28T09:44:12Z
+ Message: get pod; ConditionStatus:True; PodName:druid-cluster-routers-0
+ Observed Generation: 1
+ Status: True
+ Type: GetPod--druid-cluster-routers-0
+ Last Transition Time: 2024-10-28T09:44:12Z
+ Message: evict pod; ConditionStatus:True; PodName:druid-cluster-routers-0
+ Observed Generation: 1
+ Status: True
+ Type: EvictPod--druid-cluster-routers-0
+ Last Transition Time: 2024-10-28T09:44:17Z
+ Message: check pod running; ConditionStatus:True; PodName:druid-cluster-routers-0
+ Observed Generation: 1
+ Status: True
+ Type: CheckPodRunning--druid-cluster-routers-0
+ Last Transition Time: 2024-10-28T09:44:22Z
+ Message: get pod; ConditionStatus:True; PodName:druid-cluster-coordinators-0
+ Observed Generation: 1
+ Status: True
+ Type: GetPod--druid-cluster-coordinators-0
+ Last Transition Time: 2024-10-28T09:44:22Z
+ Message: evict pod; ConditionStatus:True; PodName:druid-cluster-coordinators-0
+ Observed Generation: 1
+ Status: True
+ Type: EvictPod--druid-cluster-coordinators-0
+ Last Transition Time: 2024-10-28T09:44:27Z
+ Message: check pod running; ConditionStatus:True; PodName:druid-cluster-coordinators-0
+ Observed Generation: 1
+ Status: True
+ Type: CheckPodRunning--druid-cluster-coordinators-0
+ Last Transition Time: 2024-10-28T09:44:32Z
+ Message: Successfully completed reconfigureTLS for druid.
+ Observed Generation: 1
+ Reason: Successful
+ Status: True
+ Type: Successful
+ Observed Generation: 1
+ Phase: Successful
+Events:
+ Type Reason Age From Message
+ ---- ------ ---- ---- -------
+ Normal Starting 103s KubeDB Ops-manager Operator Start processing for DruidOpsRequest: demo/drops-add-tls
+ Normal Starting 103s KubeDB Ops-manager Operator Pausing Druid databse: demo/druid-cluster
+ Normal Successful 103s KubeDB Ops-manager Operator Successfully paused Druid database: demo/druid-cluster for DruidOpsRequest: drops-add-tls
+ Warning get certificate; ConditionStatus:True 95s KubeDB Ops-manager Operator get certificate; ConditionStatus:True
+ Warning check ready condition; ConditionStatus:True 95s KubeDB Ops-manager Operator check ready condition; ConditionStatus:True
+ Warning issuing condition; ConditionStatus:True 95s KubeDB Ops-manager Operator issuing condition; ConditionStatus:True
+ Warning get certificate; ConditionStatus:True 95s KubeDB Ops-manager Operator get certificate; ConditionStatus:True
+ Warning check ready condition; ConditionStatus:True 95s KubeDB Ops-manager Operator check ready condition; ConditionStatus:True
+ Warning issuing condition; ConditionStatus:True 95s KubeDB Ops-manager Operator issuing condition; ConditionStatus:True
+ Normal CertificateSynced 95s KubeDB Ops-manager Operator Successfully synced all certificates
+ Warning get certificate; ConditionStatus:True 90s KubeDB Ops-manager Operator get certificate; ConditionStatus:True
+ Warning check ready condition; ConditionStatus:True 90s KubeDB Ops-manager Operator check ready condition; ConditionStatus:True
+ Warning issuing condition; ConditionStatus:True 90s KubeDB Ops-manager Operator issuing condition; ConditionStatus:True
+ Warning get certificate; ConditionStatus:True 90s KubeDB Ops-manager Operator get certificate; ConditionStatus:True
+ Warning check ready condition; ConditionStatus:True 90s KubeDB Ops-manager Operator check ready condition; ConditionStatus:True
+ Warning issuing condition; ConditionStatus:True 90s KubeDB Ops-manager Operator issuing condition; ConditionStatus:True
+ Normal CertificateSynced 90s KubeDB Ops-manager Operator Successfully synced all certificates
+ Normal UpdatePetSets 85s KubeDB Ops-manager Operator successfully reconciled the Druid with tls configuration
+ Warning get pod; ConditionStatus:True; PodName:druid-cluster-historicals-0 79s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:druid-cluster-historicals-0
+ Warning evict pod; ConditionStatus:True; PodName:druid-cluster-historicals-0 79s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:druid-cluster-historicals-0
+ Warning check pod running; ConditionStatus:False; PodName:druid-cluster-historicals-0 74s KubeDB Ops-manager Operator check pod running; ConditionStatus:False; PodName:druid-cluster-historicals-0
+ Warning check pod running; ConditionStatus:True; PodName:druid-cluster-historicals-0 69s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:druid-cluster-historicals-0
+ Warning get pod; ConditionStatus:True; PodName:druid-cluster-middlemanagers-0 64s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:druid-cluster-middlemanagers-0
+ Warning evict pod; ConditionStatus:True; PodName:druid-cluster-middlemanagers-0 64s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:druid-cluster-middlemanagers-0
+ Warning check pod running; ConditionStatus:True; PodName:druid-cluster-middlemanagers-0 59s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:druid-cluster-middlemanagers-0
+ Warning get pod; ConditionStatus:True; PodName:druid-cluster-brokers-0 54s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:druid-cluster-brokers-0
+ Warning evict pod; ConditionStatus:True; PodName:druid-cluster-brokers-0 54s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:druid-cluster-brokers-0
+ Warning check pod running; ConditionStatus:True; PodName:druid-cluster-brokers-0 49s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:druid-cluster-brokers-0
+ Warning get pod; ConditionStatus:True; PodName:druid-cluster-routers-0 44s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:druid-cluster-routers-0
+ Warning evict pod; ConditionStatus:True; PodName:druid-cluster-routers-0 44s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:druid-cluster-routers-0
+ Warning check pod running; ConditionStatus:True; PodName:druid-cluster-routers-0 39s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:druid-cluster-routers-0
+ Warning get pod; ConditionStatus:True; PodName:druid-cluster-coordinators-0 34s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:druid-cluster-coordinators-0
+ Warning evict pod; ConditionStatus:True; PodName:druid-cluster-coordinators-0 34s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:druid-cluster-coordinators-0
+ Warning check pod running; ConditionStatus:True; PodName:druid-cluster-coordinators-0 29s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:druid-cluster-coordinators-0
+ Normal RestartNodes 24s KubeDB Ops-manager Operator Successfully restarted all nodes
+ Normal Starting 24s KubeDB Ops-manager Operator Resuming Druid database: demo/druid-cluster
+ Normal Successful 24s KubeDB Ops-manager Operator Successfully resumed Druid database: demo/druid-cluster for DruidOpsRequest: drops-add-tls
+```
+
+Now, let's exec into a druid coordinators pod and verify from the configuration that TLS is enabled.
+
+```bash
+$ kubectl exec -it -n demo druid-cluster-coordinators-0 -- bash
+Defaulted container "druid" out of: druid, init-druid (init)
+bash-5.1$ cat conf/druid/cluster/_common/common.runtime.properties
+druid.auth.authenticator.basic.authorizerName=basic
+druid.auth.authenticator.basic.credentialsValidator.type=metadata
+druid.auth.authenticator.basic.initialAdminPassword={"type": "environment", "variable": "DRUID_ADMIN_PASSWORD"}
+druid.auth.authenticator.basic.initialInternalClientPassword=password2
+druid.auth.authenticator.basic.skipOnFailure=false
+druid.auth.authenticator.basic.type=basic
+druid.auth.authenticatorChain=["basic"]
+druid.auth.authorizer.basic.type=basic
+druid.auth.authorizers=["basic"]
+druid.client.https.trustStorePassword={"type": "environment", "variable": "DRUID_KEY_STORE_PASSWORD"}
+druid.client.https.trustStorePath=/opt/druid/ssl/truststore.jks
+druid.client.https.trustStoreType=jks
+druid.client.https.validateHostnames=false
+druid.emitter.logging.logLevel=info
+druid.emitter=noop
+druid.enablePlaintextPort=false
+druid.enableTlsPort=true
+druid.escalator.authorizerName=basic
+druid.escalator.internalClientPassword=password2
+druid.escalator.internalClientUsername=druid_system
+druid.escalator.type=basic
+druid.expressions.useStrictBooleans=true
+druid.extensions.loadList=["druid-avro-extensions", "druid-kafka-indexing-service", "druid-kafka-indexing-service", "druid-datasketches", "druid-multi-stage-query", "druid-basic-security", "simple-client-sslcontext", "mysql-metadata-storage", "druid-s3-extensions"]
+druid.global.http.eagerInitialization=false
+druid.host=localhost
+druid.indexer.logs.directory=var/druid/indexing-logs
+druid.indexer.logs.type=file
+druid.indexing.doubleStorage=double
+druid.lookup.enableLookupSyncOnStartup=false
+druid.metadata.storage.connector.connectURI=jdbc:mysql://druid-cluster-mysql-metadata.demo.svc:3306/druid
+druid.metadata.storage.connector.createTables=true
+druid.metadata.storage.connector.host=localhost
+druid.metadata.storage.connector.password={"type": "environment", "variable": "DRUID_METADATA_STORAGE_PASSWORD"}
+druid.metadata.storage.connector.port=1527
+druid.metadata.storage.connector.user=root
+druid.metadata.storage.type=mysql
+druid.monitoring.monitors=["org.apache.druid.java.util.metrics.JvmMonitor", "org.apache.druid.server.metrics.ServiceStatusMonitor"]
+druid.s3.accessKey=minio
+druid.s3.enablePathStyleAccess=true
+druid.s3.endpoint.signingRegion=us-east-1
+druid.s3.endpoint.url=http://myminio-hl.demo.svc.cluster.local:9000/
+druid.s3.protocol=http
+druid.s3.secretKey=minio123
+druid.selectors.coordinator.serviceName=druid/coordinator
+druid.selectors.indexing.serviceName=druid/overlord
+druid.server.hiddenProperties=["druid.s3.accessKey","druid.s3.secretKey","druid.metadata.storage.connector.password", "password", "key", "token", "pwd"]
+druid.server.https.certAlias=druid
+druid.server.https.keyStorePassword={"type": "environment", "variable": "DRUID_KEY_STORE_PASSWORD"}
+druid.server.https.keyStorePath=/opt/druid/ssl/keystore.jks
+druid.server.https.keyStoreType=jks
+druid.sql.enable=true
+druid.sql.planner.useGroupingSetForExactDistinct=true
+druid.startup.logging.logProperties=true
+druid.storage.baseKey=druid/segments
+druid.storage.bucket=druid
+druid.storage.storageDirectory=var/druid/segments
+druid.storage.type=s3
+druid.zk.paths.base=/druid
+druid.zk.service.host=druid-cluster-zk.demo.svc:2181
+druid.zk.service.pwd={"type": "environment", "variable": "DRUID_ZK_SERVICE_PASSWORD"}
+druid.zk.service.user=super
+
+```
+
+We can see from the output above that all TLS related configs have been added to the configuration file of the druid database.
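+
+You can also list the secrets in the namespace to see the certificate secrets that cert-manager has issued for the cluster. The exact secret names may vary by KubeDB version, but you should see client/server certificate secrets alongside the existing credential and config secrets.
+
+```bash
+$ kubectl get secrets -n demo | grep druid-cluster
+```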
+
+#### Verify TLS/SSL using Druid UI
+
+To verify, follow the steps below:
+
+Druid uses separate ports for TLS/SSL. While the plaintext port of the `routers` node is `8888`, the TLS port is `9088`. Hence, we will use that port to access the UI.
+
+First port-forward the port `9088` to local machine:
+
+```bash
+$ kubectl port-forward -n demo svc/druid-cluster-routers 9088
+Forwarding from 127.0.0.1:9088 -> 9088
+Forwarding from [::1]:9088 -> 9088
+```
+
+
+Now open `https://localhost:9088/` in any browser. Here you may select `Advanced` and then `Proceed to localhost (unsafe)`, or you can add the `ca.crt` from the secret `druid-cluster-client-cert` to your browser's Authorities.
+
+After that you will be prompted to provide the credentials of the druid database. By following the steps discussed below, you can get the credentials generated by the KubeDB operator for your Druid database.
+
+**Connection information:**
+
+- Username:
+
+ ```bash
+  $ kubectl get secret -n demo druid-cluster-admin-cred -o jsonpath='{.data.username}' | base64 -d
+ admin
+ ```
+
+- Password:
+
+ ```bash
+  $ kubectl get secret -n demo druid-cluster-admin-cred -o jsonpath='{.data.password}' | base64 -d
+ LzJtVRX5E8MorFaf
+ ```
+
+After providing the credentials correctly, you should be able to access the web console as shown below.
+
+
+
+
+
+From the above screenshot, we can see that the connection is secure.
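+
+As with the plaintext check earlier, you can verify the TLS endpoint from the command line. This sketch assumes the port-forward to port `9088` is still running; `-k` skips CA verification for brevity, and you can pass the CA with `--cacert ca.crt` instead if you prefer.
+
+```bash
+# Call the router over HTTPS with the same admin credentials.
+$ PASSWORD=$(kubectl get secret -n demo druid-cluster-admin-cred -o jsonpath='{.data.password}' | base64 -d)
+$ curl -sk -u "admin:$PASSWORD" https://localhost:9088/status
+```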
+
+
+## Rotate Certificate
+
+Now we are going to rotate the certificate of this cluster. First let's check the current expiration date of the certificate.
+
+```bash
+$ kubectl port-forward -n demo svc/druid-cluster-routers 9088
+Forwarding from 127.0.0.1:9088 -> 9088
+Forwarding from [::1]:9088 -> 9088
+Handling connection for 9088
+...
+
+$ openssl s_client -connect localhost:9088 2>/dev/null | openssl x509 -noout -enddate
+notAfter=Jan 26 09:43:16 2025 GMT
+```
+
+So, the certificate will expire at `Jan 26 09:43:16 2025 GMT`.
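+
+Since the certificates are managed by cert-manager, you can also read the expiry from the `Certificate` objects instead of probing the port. The certificate name below is a placeholder; list the objects first and then describe the one you are interested in.
+
+```bash
+# List the cert-manager Certificate objects created for the cluster.
+$ kubectl get certificate -n demo
+# The "Not After" field in the status shows the expiration time.
+$ kubectl describe certificate -n demo <certificate-name> | grep "Not After"
+```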
+
+### Create DruidOpsRequest
+
+Now we are going to extend the expiration date by rotating the certificate using a DruidOpsRequest. Below is the YAML of the ops request that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: DruidOpsRequest
+metadata:
+  name: drops-rotate
+ namespace: demo
+spec:
+ type: ReconfigureTLS
+ databaseRef:
+ name: druid-cluster
+ tls:
+ rotateCertificates: true
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing reconfigure TLS operation on `druid-cluster`.
+- `spec.type` specifies that we are performing `ReconfigureTLS` on our cluster.
+- `spec.tls.rotateCertificates` specifies that we want to rotate the certificate of this druid cluster.
+
+Let's create the `DruidOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/druid/reconfigure-tls/yamls/drops-rotate.yaml
+druidopsrequest.ops.kubedb.com/drops-rotate created
+```
+
+#### Verify Certificate Rotated Successfully
+
+Let's wait for `DruidOpsRequest` to be `Successful`. Run the following command to watch `DruidOpsRequest` CRO,
+
+```bash
+$ kubectl get druidopsrequests -n demo drops-rotate -w
+NAME TYPE STATUS AGE
+drops-rotate ReconfigureTLS Successful 4m4s
+```
+
+We can see from the above output that the `DruidOpsRequest` has succeeded. If we describe the `DruidOpsRequest` we will get an overview of the steps that were followed.
+
+```bash
+$ kubectl describe druidopsrequest -n demo drops-rotate
+Name: drops-rotate
+Namespace: demo
+Labels:
+Annotations:
+API Version: ops.kubedb.com/v1alpha1
+Kind: DruidOpsRequest
+Metadata:
+ Creation Timestamp: 2024-10-28T14:14:50Z
+ Generation: 1
+ Managed Fields:
+ API Version: ops.kubedb.com/v1alpha1
+ Fields Type: FieldsV1
+ fieldsV1:
+ f:metadata:
+ f:annotations:
+ .:
+ f:kubectl.kubernetes.io/last-applied-configuration:
+ f:spec:
+ .:
+ f:apply:
+ f:databaseRef:
+ f:tls:
+ .:
+ f:rotateCertificates:
+ f:type:
+ Manager: kubectl-client-side-apply
+ Operation: Update
+ Time: 2024-10-28T14:14:50Z
+ API Version: ops.kubedb.com/v1alpha1
+ Fields Type: FieldsV1
+ fieldsV1:
+ f:status:
+ .:
+ f:conditions:
+ f:observedGeneration:
+ f:phase:
+ Manager: kubedb-ops-manager
+ Operation: Update
+ Subresource: status
+ Time: 2024-10-28T14:16:04Z
+ Resource Version: 440897
+ UID: ca3532fc-6e11-4962-bddb-f9cf946d3954
+Spec:
+ Apply: IfReady
+ Database Ref:
+ Name: druid-cluster
+ Tls:
+ Rotate Certificates: true
+ Type: ReconfigureTLS
+Status:
+ Conditions:
+ Last Transition Time: 2024-10-28T14:14:50Z
+ Message: Druid ops-request has started to reconfigure tls for druid nodes
+ Observed Generation: 1
+ Reason: ReconfigureTLS
+ Status: True
+ Type: ReconfigureTLS
+ Last Transition Time: 2024-10-28T14:15:04Z
+ Message: Successfully synced all certificates
+ Observed Generation: 1
+ Reason: CertificateSynced
+ Status: True
+ Type: CertificateSynced
+ Last Transition Time: 2024-10-28T14:14:58Z
+ Message: get certificate; ConditionStatus:True
+ Observed Generation: 1
+ Status: True
+ Type: GetCertificate
+ Last Transition Time: 2024-10-28T14:14:58Z
+ Message: check ready condition; ConditionStatus:True
+ Observed Generation: 1
+ Status: True
+ Type: CheckReadyCondition
+ Last Transition Time: 2024-10-28T14:14:58Z
+ Message: issuing condition; ConditionStatus:True
+ Observed Generation: 1
+ Status: True
+ Type: IssuingCondition
+ Last Transition Time: 2024-10-28T14:15:09Z
+ Message: successfully reconciled the Druid with tls configuration
+ Observed Generation: 1
+ Reason: UpdatePetSets
+ Status: True
+ Type: UpdatePetSets
+ Last Transition Time: 2024-10-28T14:16:04Z
+ Message: Successfully restarted all nodes
+ Observed Generation: 1
+ Reason: RestartNodes
+ Status: True
+ Type: RestartNodes
+ Last Transition Time: 2024-10-28T14:15:14Z
+ Message: get pod; ConditionStatus:True; PodName:druid-cluster-historicals-0
+ Observed Generation: 1
+ Status: True
+ Type: GetPod--druid-cluster-historicals-0
+ Last Transition Time: 2024-10-28T14:15:14Z
+ Message: evict pod; ConditionStatus:True; PodName:druid-cluster-historicals-0
+ Observed Generation: 1
+ Status: True
+ Type: EvictPod--druid-cluster-historicals-0
+ Last Transition Time: 2024-10-28T14:15:19Z
+ Message: check pod running; ConditionStatus:True; PodName:druid-cluster-historicals-0
+ Observed Generation: 1
+ Status: True
+ Type: CheckPodRunning--druid-cluster-historicals-0
+ Last Transition Time: 2024-10-28T14:15:24Z
+ Message: get pod; ConditionStatus:True; PodName:druid-cluster-middlemanagers-0
+ Observed Generation: 1
+ Status: True
+ Type: GetPod--druid-cluster-middlemanagers-0
+ Last Transition Time: 2024-10-28T14:15:24Z
+ Message: evict pod; ConditionStatus:True; PodName:druid-cluster-middlemanagers-0
+ Observed Generation: 1
+ Status: True
+ Type: EvictPod--druid-cluster-middlemanagers-0
+ Last Transition Time: 2024-10-28T14:15:29Z
+ Message: check pod running; ConditionStatus:True; PodName:druid-cluster-middlemanagers-0
+ Observed Generation: 1
+ Status: True
+ Type: CheckPodRunning--druid-cluster-middlemanagers-0
+ Last Transition Time: 2024-10-28T14:15:34Z
+ Message: get pod; ConditionStatus:True; PodName:druid-cluster-brokers-0
+ Observed Generation: 1
+ Status: True
+ Type: GetPod--druid-cluster-brokers-0
+ Last Transition Time: 2024-10-28T14:15:34Z
+ Message: evict pod; ConditionStatus:True; PodName:druid-cluster-brokers-0
+ Observed Generation: 1
+ Status: True
+ Type: EvictPod--druid-cluster-brokers-0
+ Last Transition Time: 2024-10-28T14:15:39Z
+ Message: check pod running; ConditionStatus:True; PodName:druid-cluster-brokers-0
+ Observed Generation: 1
+ Status: True
+ Type: CheckPodRunning--druid-cluster-brokers-0
+ Last Transition Time: 2024-10-28T14:15:44Z
+ Message: get pod; ConditionStatus:True; PodName:druid-cluster-routers-0
+ Observed Generation: 1
+ Status: True
+ Type: GetPod--druid-cluster-routers-0
+ Last Transition Time: 2024-10-28T14:15:44Z
+ Message: evict pod; ConditionStatus:True; PodName:druid-cluster-routers-0
+ Observed Generation: 1
+ Status: True
+ Type: EvictPod--druid-cluster-routers-0
+ Last Transition Time: 2024-10-28T14:15:49Z
+ Message: check pod running; ConditionStatus:True; PodName:druid-cluster-routers-0
+ Observed Generation: 1
+ Status: True
+ Type: CheckPodRunning--druid-cluster-routers-0
+ Last Transition Time: 2024-10-28T14:15:54Z
+ Message: get pod; ConditionStatus:True; PodName:druid-cluster-coordinators-0
+ Observed Generation: 1
+ Status: True
+ Type: GetPod--druid-cluster-coordinators-0
+ Last Transition Time: 2024-10-28T14:15:54Z
+ Message: evict pod; ConditionStatus:True; PodName:druid-cluster-coordinators-0
+ Observed Generation: 1
+ Status: True
+ Type: EvictPod--druid-cluster-coordinators-0
+ Last Transition Time: 2024-10-28T14:15:59Z
+ Message: check pod running; ConditionStatus:True; PodName:druid-cluster-coordinators-0
+ Observed Generation: 1
+ Status: True
+ Type: CheckPodRunning--druid-cluster-coordinators-0
+ Last Transition Time: 2024-10-28T14:16:04Z
+ Message: Successfully completed reconfigureTLS for druid.
+ Observed Generation: 1
+ Reason: Successful
+ Status: True
+ Type: Successful
+ Observed Generation: 1
+ Phase: Successful
+Events:
+ Type Reason Age From Message
+ ---- ------ ---- ---- -------
+ Normal Starting 101s KubeDB Ops-manager Operator Start processing for DruidOpsRequest: demo/drops-rotate
+ Normal Starting 101s KubeDB Ops-manager Operator Pausing Druid databse: demo/druid-cluster
+ Normal Successful 101s KubeDB Ops-manager Operator Successfully paused Druid database: demo/druid-cluster for DruidOpsRequest: drops-rotate
+ Warning get certificate; ConditionStatus:True 93s KubeDB Ops-manager Operator get certificate; ConditionStatus:True
+ Warning check ready condition; ConditionStatus:True 93s KubeDB Ops-manager Operator check ready condition; ConditionStatus:True
+ Warning issuing condition; ConditionStatus:True 93s KubeDB Ops-manager Operator issuing condition; ConditionStatus:True
+ Warning get certificate; ConditionStatus:True 93s KubeDB Ops-manager Operator get certificate; ConditionStatus:True
+ Warning check ready condition; ConditionStatus:True 93s KubeDB Ops-manager Operator check ready condition; ConditionStatus:True
+ Warning issuing condition; ConditionStatus:True 93s KubeDB Ops-manager Operator issuing condition; ConditionStatus:True
+ Normal CertificateSynced 93s KubeDB Ops-manager Operator Successfully synced all certificates
+ Warning get certificate; ConditionStatus:True 88s KubeDB Ops-manager Operator get certificate; ConditionStatus:True
+ Warning check ready condition; ConditionStatus:True 88s KubeDB Ops-manager Operator check ready condition; ConditionStatus:True
+ Warning issuing condition; ConditionStatus:True 88s KubeDB Ops-manager Operator issuing condition; ConditionStatus:True
+ Warning get certificate; ConditionStatus:True 88s KubeDB Ops-manager Operator get certificate; ConditionStatus:True
+ Warning check ready condition; ConditionStatus:True 88s KubeDB Ops-manager Operator check ready condition; ConditionStatus:True
+ Warning issuing condition; ConditionStatus:True 88s KubeDB Ops-manager Operator issuing condition; ConditionStatus:True
+ Normal CertificateSynced 87s KubeDB Ops-manager Operator Successfully synced all certificates
+ Normal UpdatePetSets 82s KubeDB Ops-manager Operator successfully reconciled the Druid with tls configuration
+ Warning get pod; ConditionStatus:True; PodName:druid-cluster-historicals-0 77s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:druid-cluster-historicals-0
+ Warning evict pod; ConditionStatus:True; PodName:druid-cluster-historicals-0 77s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:druid-cluster-historicals-0
+ Warning check pod running; ConditionStatus:True; PodName:druid-cluster-historicals-0 72s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:druid-cluster-historicals-0
+ Warning get pod; ConditionStatus:True; PodName:druid-cluster-middlemanagers-0 67s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:druid-cluster-middlemanagers-0
+ Warning evict pod; ConditionStatus:True; PodName:druid-cluster-middlemanagers-0 67s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:druid-cluster-middlemanagers-0
+ Warning check pod running; ConditionStatus:True; PodName:druid-cluster-middlemanagers-0 62s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:druid-cluster-middlemanagers-0
+ Warning get pod; ConditionStatus:True; PodName:druid-cluster-brokers-0 57s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:druid-cluster-brokers-0
+ Warning evict pod; ConditionStatus:True; PodName:druid-cluster-brokers-0 57s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:druid-cluster-brokers-0
+ Warning check pod running; ConditionStatus:True; PodName:druid-cluster-brokers-0 52s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:druid-cluster-brokers-0
+ Warning get pod; ConditionStatus:True; PodName:druid-cluster-routers-0 47s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:druid-cluster-routers-0
+ Warning evict pod; ConditionStatus:True; PodName:druid-cluster-routers-0 47s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:druid-cluster-routers-0
+ Warning check pod running; ConditionStatus:True; PodName:druid-cluster-routers-0 42s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:druid-cluster-routers-0
+ Warning get pod; ConditionStatus:True; PodName:druid-cluster-coordinators-0 37s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:druid-cluster-coordinators-0
+ Warning evict pod; ConditionStatus:True; PodName:druid-cluster-coordinators-0 37s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:druid-cluster-coordinators-0
+ Warning check pod running; ConditionStatus:True; PodName:druid-cluster-coordinators-0 32s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:druid-cluster-coordinators-0
+ Normal RestartNodes 27s KubeDB Ops-manager Operator Successfully restarted all nodes
+ Normal Starting 27s KubeDB Ops-manager Operator Resuming Druid database: demo/druid-cluster
+ Normal Successful 27s KubeDB Ops-manager Operator Successfully resumed Druid database: demo/druid-cluster for DruidOpsRequest: drops-rotate
+```
+
+Now, let's check the expiration date of the certificate.
+
+```bash
+$ kubectl port-forward -n demo svc/druid-cluster-routers 9088
+Forwarding from 127.0.0.1:9088 -> 9088
+Forwarding from [::1]:9088 -> 9088
+Handling connection for 9088
+...
+
+$ openssl s_client -connect localhost:9088 2>/dev/null | openssl x509 -noout -enddate
+notAfter=Jan 26 14:15:46 2025 GMT
+```
+
+As we can see from the above output, the certificate has been rotated successfully.
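+
+If you prefer not to probe the port, you can read the same expiry directly from the issued certificate secret. The secret name below follows the usual naming pattern and may differ in your cluster; check `kubectl get secrets -n demo` if in doubt.
+
+```bash
+# Decode the issued certificate from the secret and print its expiry.
+$ kubectl get secret -n demo druid-cluster-client-cert -o jsonpath='{.data.tls\.crt}' \
+    | base64 -d | openssl x509 -noout -enddate
+```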
+
+## Change Issuer/ClusterIssuer
+
+Now, we are going to change the issuer of this database.
+
+- Let's create a new CA certificate and key using a different subject `CN=ca-updated,O=kubedb-updated`.
+
+```bash
+$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ./ca.key -out ./ca.crt -subj "/CN=ca-updated/O=kubedb-updated"
+Generating a RSA private key
+..............................................................+++++
+......................................................................................+++++
+writing new private key to './ca.key'
+-----
+```
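+
+You can optionally confirm the subject of the newly generated CA before creating the secret; the exact formatting of the output depends on your openssl version.
+
+```bash
+$ openssl x509 -in ca.crt -noout -subject
+subject=CN = ca-updated, O = kubedb-updated
+```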
+
+- Now we are going to create a new ca-secret using the certificate files that we have just generated.
+
+```bash
+$ kubectl create secret tls druid-new-ca \
+ --cert=ca.crt \
+ --key=ca.key \
+ --namespace=demo
+secret/druid-new-ca created
+```
+
+Now, let's create a new `Issuer` using the `druid-new-ca` secret that we have just created. The `YAML` file looks like this:
+
+```yaml
+apiVersion: cert-manager.io/v1
+kind: Issuer
+metadata:
+ name: dr-new-issuer
+ namespace: demo
+spec:
+ ca:
+ secretName: druid-new-ca
+```
+
+Let's apply the `YAML` file:
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/druid/reconfigure-tls/yamls/druid-new-issuer.yaml
+issuer.cert-manager.io/dr-new-issuer created
+```
+
+### Create DruidOpsRequest
+
+In order to use the new issuer to issue new certificates, we have to create a `DruidOpsRequest` CRO with the newly created issuer. Below is the YAML of the `DruidOpsRequest` CRO that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: DruidOpsRequest
+metadata:
+ name: drops-update-issuer
+ namespace: demo
+spec:
+ type: ReconfigureTLS
+ databaseRef:
+ name: druid-cluster
+ tls:
+ issuerRef:
+ name: dr-new-issuer
+ kind: Issuer
+ apiGroup: "cert-manager.io"
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing reconfigure TLS operation on `druid-cluster` cluster.
+- `spec.type` specifies that we are performing `ReconfigureTLS` on our druid.
+- `spec.tls.issuerRef` specifies the issuer name, kind and api group.
+
+Let's create the `DruidOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/druid/reconfigure-tls/yamls/druid-update-tls-issuer.yaml
+druidopsrequest.ops.kubedb.com/drops-update-issuer created
+```
+
+#### Verify Issuer is changed successfully
+
+Let's wait for `DruidOpsRequest` to be `Successful`. Run the following command to watch `DruidOpsRequest` CRO,
+
+```bash
+$ kubectl get druidopsrequests -n demo drops-update-issuer -w
+NAME TYPE STATUS AGE
+drops-update-issuer ReconfigureTLS Progressing 14s
+drops-update-issuer ReconfigureTLS Progressing 18s
+...
+...
+drops-update-issuer ReconfigureTLS Successful 73s
+```
+
+We can see from the above output that the `DruidOpsRequest` has succeeded. If we describe the `DruidOpsRequest` we will get an overview of the steps that were followed.
+
+```bash
+$ kubectl describe druidopsrequest -n demo drops-update-issuer
+Name: drops-update-issuer
+Namespace: demo
+Labels:
+Annotations:
+API Version: ops.kubedb.com/v1alpha1
+Kind: DruidOpsRequest
+Metadata:
+ Creation Timestamp: 2024-10-28T14:24:22Z
+ Generation: 1
+ Managed Fields:
+ API Version: ops.kubedb.com/v1alpha1
+ Fields Type: FieldsV1
+ fieldsV1:
+ f:metadata:
+ f:annotations:
+ .:
+ f:kubectl.kubernetes.io/last-applied-configuration:
+ f:spec:
+ .:
+ f:apply:
+ f:databaseRef:
+ f:tls:
+ .:
+ f:issuerRef:
+ f:type:
+ Manager: kubectl-client-side-apply
+ Operation: Update
+ Time: 2024-10-28T14:24:22Z
+ API Version: ops.kubedb.com/v1alpha1
+ Fields Type: FieldsV1
+ fieldsV1:
+ f:status:
+ .:
+ f:conditions:
+ f:observedGeneration:
+ f:phase:
+ Manager: kubedb-ops-manager
+ Operation: Update
+ Subresource: status
+ Time: 2024-10-28T14:25:35Z
+ Resource Version: 442332
+ UID: 5089e358-2dc2-4d62-8c13-92828de7c557
+Spec:
+ Apply: IfReady
+ Database Ref:
+ Name: druid-cluster
+ Tls:
+ Issuer Ref:
+ API Group: cert-manager.io
+ Kind: Issuer
+ Name: dr-new-issuer
+ Type: ReconfigureTLS
+Status:
+ Conditions:
+ Last Transition Time: 2024-10-28T14:24:22Z
+ Message: Druid ops-request has started to reconfigure tls for druid nodes
+ Observed Generation: 1
+ Reason: ReconfigureTLS
+ Status: True
+ Type: ReconfigureTLS
+ Last Transition Time: 2024-10-28T14:24:35Z
+ Message: Successfully synced all certificates
+ Observed Generation: 1
+ Reason: CertificateSynced
+ Status: True
+ Type: CertificateSynced
+ Last Transition Time: 2024-10-28T14:24:30Z
+ Message: get certificate; ConditionStatus:True
+ Observed Generation: 1
+ Status: True
+ Type: GetCertificate
+ Last Transition Time: 2024-10-28T14:24:30Z
+ Message: check ready condition; ConditionStatus:True
+ Observed Generation: 1
+ Status: True
+ Type: CheckReadyCondition
+ Last Transition Time: 2024-10-28T14:24:30Z
+ Message: issuing condition; ConditionStatus:True
+ Observed Generation: 1
+ Status: True
+ Type: IssuingCondition
+ Last Transition Time: 2024-10-28T14:24:40Z
+ Message: successfully reconciled the Druid with tls configuration
+ Observed Generation: 1
+ Reason: UpdatePetSets
+ Status: True
+ Type: UpdatePetSets
+ Last Transition Time: 2024-10-28T14:25:35Z
+ Message: Successfully restarted all nodes
+ Observed Generation: 1
+ Reason: RestartNodes
+ Status: True
+ Type: RestartNodes
+ Last Transition Time: 2024-10-28T14:24:45Z
+ Message: get pod; ConditionStatus:True; PodName:druid-cluster-historicals-0
+ Observed Generation: 1
+ Status: True
+ Type: GetPod--druid-cluster-historicals-0
+ Last Transition Time: 2024-10-28T14:24:45Z
+ Message: evict pod; ConditionStatus:True; PodName:druid-cluster-historicals-0
+ Observed Generation: 1
+ Status: True
+ Type: EvictPod--druid-cluster-historicals-0
+ Last Transition Time: 2024-10-28T14:24:50Z
+ Message: check pod running; ConditionStatus:True; PodName:druid-cluster-historicals-0
+ Observed Generation: 1
+ Status: True
+ Type: CheckPodRunning--druid-cluster-historicals-0
+ Last Transition Time: 2024-10-28T14:24:55Z
+ Message: get pod; ConditionStatus:True; PodName:druid-cluster-middlemanagers-0
+ Observed Generation: 1
+ Status: True
+ Type: GetPod--druid-cluster-middlemanagers-0
+ Last Transition Time: 2024-10-28T14:24:55Z
+ Message: evict pod; ConditionStatus:True; PodName:druid-cluster-middlemanagers-0
+ Observed Generation: 1
+ Status: True
+ Type: EvictPod--druid-cluster-middlemanagers-0
+ Last Transition Time: 2024-10-28T14:25:00Z
+ Message: check pod running; ConditionStatus:True; PodName:druid-cluster-middlemanagers-0
+ Observed Generation: 1
+ Status: True
+ Type: CheckPodRunning--druid-cluster-middlemanagers-0
+ Last Transition Time: 2024-10-28T14:25:05Z
+ Message: get pod; ConditionStatus:True; PodName:druid-cluster-brokers-0
+ Observed Generation: 1
+ Status: True
+ Type: GetPod--druid-cluster-brokers-0
+ Last Transition Time: 2024-10-28T14:25:05Z
+ Message: evict pod; ConditionStatus:True; PodName:druid-cluster-brokers-0
+ Observed Generation: 1
+ Status: True
+ Type: EvictPod--druid-cluster-brokers-0
+ Last Transition Time: 2024-10-28T14:25:10Z
+ Message: check pod running; ConditionStatus:True; PodName:druid-cluster-brokers-0
+ Observed Generation: 1
+ Status: True
+ Type: CheckPodRunning--druid-cluster-brokers-0
+ Last Transition Time: 2024-10-28T14:25:15Z
+ Message: get pod; ConditionStatus:True; PodName:druid-cluster-routers-0
+ Observed Generation: 1
+ Status: True
+ Type: GetPod--druid-cluster-routers-0
+ Last Transition Time: 2024-10-28T14:25:15Z
+ Message: evict pod; ConditionStatus:True; PodName:druid-cluster-routers-0
+ Observed Generation: 1
+ Status: True
+ Type: EvictPod--druid-cluster-routers-0
+ Last Transition Time: 2024-10-28T14:25:20Z
+ Message: check pod running; ConditionStatus:True; PodName:druid-cluster-routers-0
+ Observed Generation: 1
+ Status: True
+ Type: CheckPodRunning--druid-cluster-routers-0
+ Last Transition Time: 2024-10-28T14:25:25Z
+ Message: get pod; ConditionStatus:True; PodName:druid-cluster-coordinators-0
+ Observed Generation: 1
+ Status: True
+ Type: GetPod--druid-cluster-coordinators-0
+ Last Transition Time: 2024-10-28T14:25:25Z
+ Message: evict pod; ConditionStatus:True; PodName:druid-cluster-coordinators-0
+ Observed Generation: 1
+ Status: True
+ Type: EvictPod--druid-cluster-coordinators-0
+ Last Transition Time: 2024-10-28T14:25:30Z
+ Message: check pod running; ConditionStatus:True; PodName:druid-cluster-coordinators-0
+ Observed Generation: 1
+ Status: True
+ Type: CheckPodRunning--druid-cluster-coordinators-0
+ Last Transition Time: 2024-10-28T14:25:35Z
+ Message: Successfully completed reconfigureTLS for druid.
+ Observed Generation: 1
+ Reason: Successful
+ Status: True
+ Type: Successful
+ Observed Generation: 1
+ Phase: Successful
+Events:
+ Type Reason Age From Message
+ ---- ------ ---- ---- -------
+ Normal Starting 92s KubeDB Ops-manager Operator Start processing for DruidOpsRequest: demo/drops-update-issuer
+ Normal Starting 92s KubeDB Ops-manager Operator Pausing Druid databse: demo/druid-cluster
+ Normal Successful 92s KubeDB Ops-manager Operator Successfully paused Druid database: demo/druid-cluster for DruidOpsRequest: drops-update-issuer
+ Warning get certificate; ConditionStatus:True 84s KubeDB Ops-manager Operator get certificate; ConditionStatus:True
+ Warning check ready condition; ConditionStatus:True 84s KubeDB Ops-manager Operator check ready condition; ConditionStatus:True
+ Warning issuing condition; ConditionStatus:True 84s KubeDB Ops-manager Operator issuing condition; ConditionStatus:True
+ Warning get certificate; ConditionStatus:True 84s KubeDB Ops-manager Operator get certificate; ConditionStatus:True
+ Warning check ready condition; ConditionStatus:True 84s KubeDB Ops-manager Operator check ready condition; ConditionStatus:True
+ Warning issuing condition; ConditionStatus:True 84s KubeDB Ops-manager Operator issuing condition; ConditionStatus:True
+ Normal CertificateSynced 84s KubeDB Ops-manager Operator Successfully synced all certificates
+ Warning get certificate; ConditionStatus:True 79s KubeDB Ops-manager Operator get certificate; ConditionStatus:True
+ Warning check ready condition; ConditionStatus:True 79s KubeDB Ops-manager Operator check ready condition; ConditionStatus:True
+ Warning issuing condition; ConditionStatus:True 79s KubeDB Ops-manager Operator issuing condition; ConditionStatus:True
+ Warning get certificate; ConditionStatus:True 79s KubeDB Ops-manager Operator get certificate; ConditionStatus:True
+ Warning check ready condition; ConditionStatus:True 79s KubeDB Ops-manager Operator check ready condition; ConditionStatus:True
+ Warning issuing condition; ConditionStatus:True 79s KubeDB Ops-manager Operator issuing condition; ConditionStatus:True
+ Normal CertificateSynced 79s KubeDB Ops-manager Operator Successfully synced all certificates
+ Normal UpdatePetSets 74s KubeDB Ops-manager Operator successfully reconciled the Druid with tls configuration
+ Warning get pod; ConditionStatus:True; PodName:druid-cluster-historicals-0 69s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:druid-cluster-historicals-0
+ Warning evict pod; ConditionStatus:True; PodName:druid-cluster-historicals-0 69s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:druid-cluster-historicals-0
+ Warning check pod running; ConditionStatus:True; PodName:druid-cluster-historicals-0 64s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:druid-cluster-historicals-0
+ Warning get pod; ConditionStatus:True; PodName:druid-cluster-middlemanagers-0 59s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:druid-cluster-middlemanagers-0
+ Warning evict pod; ConditionStatus:True; PodName:druid-cluster-middlemanagers-0 59s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:druid-cluster-middlemanagers-0
+ Warning check pod running; ConditionStatus:True; PodName:druid-cluster-middlemanagers-0 54s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:druid-cluster-middlemanagers-0
+ Warning get pod; ConditionStatus:True; PodName:druid-cluster-brokers-0 49s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:druid-cluster-brokers-0
+ Warning evict pod; ConditionStatus:True; PodName:druid-cluster-brokers-0 49s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:druid-cluster-brokers-0
+ Warning check pod running; ConditionStatus:True; PodName:druid-cluster-brokers-0 44s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:druid-cluster-brokers-0
+ Warning get pod; ConditionStatus:True; PodName:druid-cluster-routers-0 39s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:druid-cluster-routers-0
+ Warning evict pod; ConditionStatus:True; PodName:druid-cluster-routers-0 39s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:druid-cluster-routers-0
+ Warning check pod running; ConditionStatus:True; PodName:druid-cluster-routers-0 34s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:druid-cluster-routers-0
+ Warning get pod; ConditionStatus:True; PodName:druid-cluster-coordinators-0 29s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:druid-cluster-coordinators-0
+ Warning evict pod; ConditionStatus:True; PodName:druid-cluster-coordinators-0 29s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:druid-cluster-coordinators-0
+ Warning check pod running; ConditionStatus:True; PodName:druid-cluster-coordinators-0 24s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:druid-cluster-coordinators-0
+ Normal RestartNodes 19s KubeDB Ops-manager Operator Successfully restarted all nodes
+ Normal Starting 19s KubeDB Ops-manager Operator Resuming Druid database: demo/druid-cluster
+ Normal Successful 19s KubeDB Ops-manager Operator Successfully resumed Druid database: demo/druid-cluster for DruidOpsRequest: drops-update-issuer
+```
+
+Now, let's exec into a Druid node and find out the CA subject to see if it matches the one we have provided.
+
+```bash
+$ kubectl exec -it -n demo druid-cluster-brokers-0 -- bash
+druid@druid-cluster-brokers-0:~$ keytool -list -v -keystore /var/private/ssl/server.keystore.jks -storepass wt6f5pwxpg84 | grep 'Issuer'
+Issuer: O=kubedb-updated, CN=ca-updated
+Issuer: O=kubedb-updated, CN=ca-updated
+
+$ kubectl port-forward -n demo svc/druid-cluster-routers 9088
+Forwarding from 127.0.0.1:9088 -> 9088
+Forwarding from [::1]:9088 -> 9088
+Handling connection for 9088
+...
+
+$ openssl s_client -connect localhost:9088 2>/dev/null | openssl x509 -noout -issuer
+issuer=CN = ca-updated, O = kubedb-updated
+```
+
+We can see from the above output that the issuer matches the subject of the new CA certificate that we have created. So, the issuer has been changed successfully.
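+
+You can also read the issuer directly from the certificate secrets generated by cert-manager, without exec-ing into a pod. The following is only a quick sketch; the secret name `druid-cluster-client-cert` is an assumption, so list the secrets in the `demo` namespace first to find the exact names in your cluster.
+
+```bash
+# List the TLS secrets created for the cluster (names may differ slightly).
+$ kubectl get secrets -n demo | grep druid-cluster
+
+# Inspect the issuer of the (assumed) client certificate secret.
+$ kubectl get secret -n demo druid-cluster-client-cert -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -issuer
+issuer=CN = ca-updated, O = kubedb-updated
+```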
+
+## Remove TLS from the Database
+
+Now, we are going to remove TLS from this database using a DruidOpsRequest.
+
+### Create DruidOpsRequest
+
+Below is the YAML of the `DruidOpsRequest` CRO that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: DruidOpsRequest
+metadata:
+ name: drops-remove
+ namespace: demo
+spec:
+ type: ReconfigureTLS
+ databaseRef:
+ name: druid-cluster
+ tls:
+ remove: true
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing the reconfigure TLS operation on the `druid-cluster` cluster.
+- `spec.type` specifies that we are performing `ReconfigureTLS` on Druid.
+- `spec.tls.remove` specifies that we want to remove TLS from this cluster.
+
+Let's create the `DruidOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/druid/reconfigure-tls/yamls/drops-remove.yaml
+druidopsrequest.ops.kubedb.com/drops-remove created
+```
+
+#### Verify TLS Removed Successfully
+
+Let's wait for `DruidOpsRequest` to be `Successful`. Run the following command to watch `DruidOpsRequest` CRO,
+
+```bash
+$ kubectl get druidopsrequest -n demo drops-remove -w
+NAME TYPE STATUS AGE
+drops-remove ReconfigureTLS Progressing 25s
+drops-remove ReconfigureTLS Progressing 29s
+...
+...
+drops-remove ReconfigureTLS Successful 114s
+
+```
+
+We can see from the above output that the `DruidOpsRequest` has succeeded. If we describe the `DruidOpsRequest` we will get an overview of the steps that were followed.
+
+```bash
+$ kubectl describe druidopsrequest -n demo drops-remove
+Name: drops-remove
+Namespace: demo
+Labels:
+Annotations:
+API Version: ops.kubedb.com/v1alpha1
+Kind: DruidOpsRequest
+Metadata:
+ Creation Timestamp: 2024-10-28T14:31:07Z
+ Generation: 1
+ Managed Fields:
+ API Version: ops.kubedb.com/v1alpha1
+ Fields Type: FieldsV1
+ fieldsV1:
+ f:metadata:
+ f:annotations:
+ .:
+ f:kubectl.kubernetes.io/last-applied-configuration:
+ f:spec:
+ .:
+ f:apply:
+ f:databaseRef:
+ f:tls:
+ .:
+ f:remove:
+ f:type:
+ Manager: kubectl-client-side-apply
+ Operation: Update
+ Time: 2024-10-28T14:31:07Z
+ API Version: ops.kubedb.com/v1alpha1
+ Fields Type: FieldsV1
+ fieldsV1:
+ f:status:
+ .:
+ f:conditions:
+ f:observedGeneration:
+ f:phase:
+ Manager: kubedb-ops-manager
+ Operation: Update
+ Subresource: status
+ Time: 2024-10-28T14:33:01Z
+ Resource Version: 443725
+ UID: 27234241-c72e-471c-8dd4-16fd485956cc
+Spec:
+ Apply: IfReady
+ Database Ref:
+ Name: druid-cluster
+ Tls:
+ Remove: true
+ Type: ReconfigureTLS
+Status:
+ Conditions:
+ Last Transition Time: 2024-10-28T14:31:07Z
+ Message: Druid ops-request has started to reconfigure tls for druid nodes
+ Observed Generation: 1
+ Reason: ReconfigureTLS
+ Status: True
+ Type: ReconfigureTLS
+ Last Transition Time: 2024-10-28T14:31:16Z
+ Message: successfully reconciled the Druid with tls configuration
+ Observed Generation: 1
+ Reason: UpdatePetSets
+ Status: True
+ Type: UpdatePetSets
+ Last Transition Time: 2024-10-28T14:33:01Z
+ Message: Successfully restarted all nodes
+ Observed Generation: 1
+ Reason: RestartNodes
+ Status: True
+ Type: RestartNodes
+ Last Transition Time: 2024-10-28T14:31:21Z
+ Message: get pod; ConditionStatus:True; PodName:druid-cluster-historicals-0
+ Observed Generation: 1
+ Status: True
+ Type: GetPod--druid-cluster-historicals-0
+ Last Transition Time: 2024-10-28T14:31:21Z
+ Message: evict pod; ConditionStatus:True; PodName:druid-cluster-historicals-0
+ Observed Generation: 1
+ Status: True
+ Type: EvictPod--druid-cluster-historicals-0
+ Last Transition Time: 2024-10-28T14:31:26Z
+ Message: check pod running; ConditionStatus:True; PodName:druid-cluster-historicals-0
+ Observed Generation: 1
+ Status: True
+ Type: CheckPodRunning--druid-cluster-historicals-0
+ Last Transition Time: 2024-10-28T14:31:31Z
+ Message: get pod; ConditionStatus:True; PodName:druid-cluster-middlemanagers-0
+ Observed Generation: 1
+ Status: True
+ Type: GetPod--druid-cluster-middlemanagers-0
+ Last Transition Time: 2024-10-28T14:31:31Z
+ Message: evict pod; ConditionStatus:True; PodName:druid-cluster-middlemanagers-0
+ Observed Generation: 1
+ Status: True
+ Type: EvictPod--druid-cluster-middlemanagers-0
+ Last Transition Time: 2024-10-28T14:31:36Z
+ Message: check pod running; ConditionStatus:True; PodName:druid-cluster-middlemanagers-0
+ Observed Generation: 1
+ Status: True
+ Type: CheckPodRunning--druid-cluster-middlemanagers-0
+ Last Transition Time: 2024-10-28T14:31:41Z
+ Message: get pod; ConditionStatus:True; PodName:druid-cluster-brokers-0
+ Observed Generation: 1
+ Status: True
+ Type: GetPod--druid-cluster-brokers-0
+ Last Transition Time: 2024-10-28T14:31:41Z
+ Message: evict pod; ConditionStatus:True; PodName:druid-cluster-brokers-0
+ Observed Generation: 1
+ Status: True
+ Type: EvictPod--druid-cluster-brokers-0
+ Last Transition Time: 2024-10-28T14:31:46Z
+ Message: check pod running; ConditionStatus:True; PodName:druid-cluster-brokers-0
+ Observed Generation: 1
+ Status: True
+ Type: CheckPodRunning--druid-cluster-brokers-0
+ Last Transition Time: 2024-10-28T14:31:51Z
+ Message: get pod; ConditionStatus:True; PodName:druid-cluster-routers-0
+ Observed Generation: 1
+ Status: True
+ Type: GetPod--druid-cluster-routers-0
+ Last Transition Time: 2024-10-28T14:31:51Z
+ Message: evict pod; ConditionStatus:True; PodName:druid-cluster-routers-0
+ Observed Generation: 1
+ Status: True
+ Type: EvictPod--druid-cluster-routers-0
+ Last Transition Time: 2024-10-28T14:31:56Z
+ Message: check pod running; ConditionStatus:True; PodName:druid-cluster-routers-0
+ Observed Generation: 1
+ Status: True
+ Type: CheckPodRunning--druid-cluster-routers-0
+ Last Transition Time: 2024-10-28T14:32:01Z
+ Message: get pod; ConditionStatus:True; PodName:druid-cluster-coordinators-0
+ Observed Generation: 1
+ Status: True
+ Type: GetPod--druid-cluster-coordinators-0
+ Last Transition Time: 2024-10-28T14:32:01Z
+ Message: evict pod; ConditionStatus:True; PodName:druid-cluster-coordinators-0
+ Observed Generation: 1
+ Status: True
+ Type: EvictPod--druid-cluster-coordinators-0
+ Last Transition Time: 2024-10-28T14:32:06Z
+ Message: check pod running; ConditionStatus:True; PodName:druid-cluster-coordinators-0
+ Observed Generation: 1
+ Status: True
+ Type: CheckPodRunning--druid-cluster-coordinators-0
+ Last Transition Time: 2024-10-28T14:33:01Z
+ Message: Successfully completed reconfigureTLS for druid.
+ Observed Generation: 1
+ Reason: Successful
+ Status: True
+ Type: Successful
+ Observed Generation: 1
+ Phase: Successful
+Events:
+ Type Reason Age From Message
+ ---- ------ ---- ---- -------
+ Normal Starting 2m12s KubeDB Ops-manager Operator Start processing for DruidOpsRequest: demo/drops-remove
+ Normal Starting 2m12s KubeDB Ops-manager Operator Pausing Druid databse: demo/druid-cluster
+ Normal Successful 2m12s KubeDB Ops-manager Operator Successfully paused Druid database: demo/druid-cluster for DruidOpsRequest: drops-remove
+ Normal UpdatePetSets 2m3s KubeDB Ops-manager Operator successfully reconciled the Druid with tls configuration
+ Warning get pod; ConditionStatus:True; PodName:druid-cluster-historicals-0 118s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:druid-cluster-historicals-0
+ Warning evict pod; ConditionStatus:True; PodName:druid-cluster-historicals-0 118s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:druid-cluster-historicals-0
+ Warning check pod running; ConditionStatus:True; PodName:druid-cluster-historicals-0 113s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:druid-cluster-historicals-0
+ Warning get pod; ConditionStatus:True; PodName:druid-cluster-middlemanagers-0 108s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:druid-cluster-middlemanagers-0
+ Warning evict pod; ConditionStatus:True; PodName:druid-cluster-middlemanagers-0 108s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:druid-cluster-middlemanagers-0
+ Warning check pod running; ConditionStatus:True; PodName:druid-cluster-middlemanagers-0 103s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:druid-cluster-middlemanagers-0
+ Warning get pod; ConditionStatus:True; PodName:druid-cluster-brokers-0 98s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:druid-cluster-brokers-0
+ Warning evict pod; ConditionStatus:True; PodName:druid-cluster-brokers-0 98s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:druid-cluster-brokers-0
+ Warning check pod running; ConditionStatus:True; PodName:druid-cluster-brokers-0 93s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:druid-cluster-brokers-0
+ Warning get pod; ConditionStatus:True; PodName:druid-cluster-routers-0 88s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:druid-cluster-routers-0
+ Warning evict pod; ConditionStatus:True; PodName:druid-cluster-routers-0 88s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:druid-cluster-routers-0
+ Warning check pod running; ConditionStatus:True; PodName:druid-cluster-routers-0 83s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:druid-cluster-routers-0
+ Warning get pod; ConditionStatus:True; PodName:druid-cluster-coordinators-0 78s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:druid-cluster-coordinators-0
+ Warning evict pod; ConditionStatus:True; PodName:druid-cluster-coordinators-0 78s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:druid-cluster-coordinators-0
+ Warning check pod running; ConditionStatus:True; PodName:druid-cluster-coordinators-0 73s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:druid-cluster-coordinators-0
+ Warning get pod; ConditionStatus:True; PodName:druid-cluster-historicals-0 68s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:druid-cluster-historicals-0
+ Warning evict pod; ConditionStatus:True; PodName:druid-cluster-historicals-0 68s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:druid-cluster-historicals-0
+ Warning check pod running; ConditionStatus:True; PodName:druid-cluster-historicals-0 63s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:druid-cluster-historicals-0
+ Warning get pod; ConditionStatus:True; PodName:druid-cluster-middlemanagers-0 58s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:druid-cluster-middlemanagers-0
+ Warning evict pod; ConditionStatus:True; PodName:druid-cluster-middlemanagers-0 58s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:druid-cluster-middlemanagers-0
+ Warning check pod running; ConditionStatus:True; PodName:druid-cluster-middlemanagers-0 53s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:druid-cluster-middlemanagers-0
+ Warning get pod; ConditionStatus:True; PodName:druid-cluster-brokers-0 48s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:druid-cluster-brokers-0
+ Warning evict pod; ConditionStatus:True; PodName:druid-cluster-brokers-0 48s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:druid-cluster-brokers-0
+ Warning check pod running; ConditionStatus:True; PodName:druid-cluster-brokers-0 43s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:druid-cluster-brokers-0
+ Warning get pod; ConditionStatus:True; PodName:druid-cluster-routers-0 38s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:druid-cluster-routers-0
+ Warning evict pod; ConditionStatus:True; PodName:druid-cluster-routers-0 38s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:druid-cluster-routers-0
+ Warning check pod running; ConditionStatus:True; PodName:druid-cluster-routers-0 33s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:druid-cluster-routers-0
+ Warning get pod; ConditionStatus:True; PodName:druid-cluster-coordinators-0 28s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:druid-cluster-coordinators-0
+ Warning evict pod; ConditionStatus:True; PodName:druid-cluster-coordinators-0 28s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:druid-cluster-coordinators-0
+ Warning check pod running; ConditionStatus:True; PodName:druid-cluster-coordinators-0 23s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:druid-cluster-coordinators-0
+ Normal RestartNodes 18s KubeDB Ops-manager Operator Successfully restarted all nodes
+ Normal Starting 18s KubeDB Ops-manager Operator Resuming Druid database: demo/druid-cluster
+ Normal Successful 18s KubeDB Ops-manager Operator Successfully resumed Druid database: demo/druid-cluster for DruidOpsRequest: drops-remove
+```
+
+Now, let's exec into one of the broker nodes and check whether TLS has been disabled or not.
+
+```bash
+$ kubectl exec -it -n demo druid-cluster-brokers-0 -- druid-configs.sh --bootstrap-server localhost:9092 --command-config /opt/druid/config/clientauth.properties --describe --entity-type brokers --all | grep 'ssl.keystore'
+ ssl.keystore.certificate.chain=null sensitive=true synonyms={}
+ ssl.keystore.key=null sensitive=true synonyms={}
+ ssl.keystore.location=null sensitive=false synonyms={}
+ ssl.keystore.password=null sensitive=true synonyms={}
+ ssl.keystore.type=JKS sensitive=false synonyms={DEFAULT_CONFIG:ssl.keystore.type=JKS}
+ ssl.keystore.certificate.chain=null sensitive=true synonyms={}
+ ssl.keystore.key=null sensitive=true synonyms={}
+ ssl.keystore.location=null sensitive=false synonyms={}
+ ssl.keystore.password=null sensitive=true synonyms={}
+```
+
+So, we can see from the above output that TLS has been disabled successfully.
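+
+As a further sanity check, you can confirm that the routers are serving plain HTTP again. This is only a sketch: it assumes the default plaintext router port `8888` and Druid's unauthenticated `/status/health` endpoint.
+
+```bash
+$ kubectl port-forward -n demo svc/druid-cluster-routers 8888
+Forwarding from 127.0.0.1:8888 -> 8888
+
+# In another terminal:
+$ curl http://localhost:8888/status/health
+true
+```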
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete druidopsrequest -n demo drops-add-tls drops-remove drops-rotate drops-update-issuer
+kubectl delete druid -n demo druid-cluster
+kubectl delete issuer -n demo druid-ca-issuer dr-new-issuer
+kubectl delete ns demo
+```
+
+## Next Steps
+
+- Detail concepts of [Druid object](/docs/guides/druid/concepts/druid.md).
+- Different Druid topology clustering modes [here](/docs/guides/druid/clustering/_index.md).
+- Monitor your Druid database with KubeDB using [out-of-the-box Prometheus operator](/docs/guides/druid/monitoring/using-prometheus-operator.md).
+
+[//]: # (- Monitor your Druid database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/guides/druid/monitoring/using-builtin-prometheus.md).)
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).
+
diff --git a/docs/guides/druid/reconfigure-tls/images/druid-ui.png b/docs/guides/druid/reconfigure-tls/images/druid-ui.png
new file mode 100644
index 0000000000..f81925c59c
Binary files /dev/null and b/docs/guides/druid/reconfigure-tls/images/druid-ui.png differ
diff --git a/docs/guides/druid/reconfigure-tls/images/druid-with-tls.png b/docs/guides/druid/reconfigure-tls/images/druid-with-tls.png
new file mode 100644
index 0000000000..9f173c38c2
Binary files /dev/null and b/docs/guides/druid/reconfigure-tls/images/druid-with-tls.png differ
diff --git a/docs/guides/druid/reconfigure-tls/images/druid-without-tls.png b/docs/guides/druid/reconfigure-tls/images/druid-without-tls.png
new file mode 100644
index 0000000000..07aacd32a0
Binary files /dev/null and b/docs/guides/druid/reconfigure-tls/images/druid-without-tls.png differ
diff --git a/docs/guides/druid/reconfigure-tls/images/reconfigure-tls.png b/docs/guides/druid/reconfigure-tls/images/reconfigure-tls.png
new file mode 100644
index 0000000000..316d1a0aa5
Binary files /dev/null and b/docs/guides/druid/reconfigure-tls/images/reconfigure-tls.png differ
diff --git a/docs/guides/druid/reconfigure-tls/overview.md b/docs/guides/druid/reconfigure-tls/overview.md
new file mode 100644
index 0000000000..5b55da583f
--- /dev/null
+++ b/docs/guides/druid/reconfigure-tls/overview.md
@@ -0,0 +1,54 @@
+---
+title: Reconfiguring TLS/SSL
+menu:
+ docs_{{ .version }}:
+ identifier: guides-druid-reconfigure-tls-overview
+ name: Overview
+ parent: guides-druid-reconfigure-tls
+ weight: 10
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Reconfiguring TLS of Druid
+
+This guide will give an overview of how the KubeDB Ops-manager operator reconfigures the TLS configuration of `Druid`, i.e. adds TLS, removes TLS, updates the issuer/cluster issuer or certificates, and rotates the certificates.
+
+## Before You Begin
+
+- You should be familiar with the following `KubeDB` concepts:
+ - [Druid](/docs/guides/druid/concepts/druid.md)
+ - [DruidOpsRequest](/docs/guides/druid/concepts/druidopsrequest.md)
+
+## How Reconfiguring Druid TLS Configuration Process Works
+
+The following diagram shows how the KubeDB Ops-manager operator reconfigures TLS of a `Druid` database. Open the image in a new tab to see the enlarged version.
+
+
+
+The Reconfiguring Druid TLS process consists of the following steps:
+
+1. At first, a user creates a `Druid` Custom Resource Object (CRO).
+
+2. `KubeDB` Provisioner operator watches the `Druid` CRO.
+
+3. When the operator finds a `Druid` CR, it creates the required number of `PetSets` and other necessary resources like Secrets, Services, etc.
+
+4. Then, in order to reconfigure the TLS configuration of the `Druid` database, the user creates a `DruidOpsRequest` CR with the desired information.
+
+5. `KubeDB` Ops-manager operator watches the `DruidOpsRequest` CR.
+
+6. When it finds a `DruidOpsRequest` CR, it pauses the `Druid` object which is referenced by the `DruidOpsRequest`. So, the `KubeDB` Provisioner operator doesn't perform any operations on the `Druid` object during the reconfiguring TLS process.
+
+7. Then the `KubeDB` Ops-manager operator will add, remove, update or rotate TLS configuration based on the Ops Request yaml.
+
+8. Then the `KubeDB` Ops-manager operator will restart all the Pods of the database so that they restart with the new TLS configuration defined in the `DruidOpsRequest` CR.
+
+9. After the successful reconfiguring of the `Druid` TLS, the `KubeDB` Ops-manager operator resumes the `Druid` object so that the `KubeDB` Provisioner operator resumes its usual operations.
+
+In the next docs, we are going to show a step-by-step guide on reconfiguring the TLS configuration of a Druid database using the `DruidOpsRequest` CRD.
\ No newline at end of file
diff --git a/docs/guides/druid/reconfigure-tls/yamls/deep-storage-config.yaml b/docs/guides/druid/reconfigure-tls/yamls/deep-storage-config.yaml
new file mode 100644
index 0000000000..3612595828
--- /dev/null
+++ b/docs/guides/druid/reconfigure-tls/yamls/deep-storage-config.yaml
@@ -0,0 +1,16 @@
+apiVersion: v1
+kind: Secret
+metadata:
+ name: deep-storage-config
+ namespace: demo
+stringData:
+ druid.storage.type: "s3"
+ druid.storage.bucket: "druid"
+ druid.storage.baseKey: "druid/segments"
+ druid.s3.accessKey: "minio"
+ druid.s3.secretKey: "minio123"
+ druid.s3.protocol: "http"
+ druid.s3.enablePathStyleAccess: "true"
+ druid.s3.endpoint.signingRegion: "us-east-1"
+ druid.s3.endpoint.url: "http://myminio-hl.demo.svc.cluster.local:9000/"
+
diff --git a/docs/guides/druid/reconfigure-tls/yamls/drops-add-tls.yaml b/docs/guides/druid/reconfigure-tls/yamls/drops-add-tls.yaml
new file mode 100644
index 0000000000..dd3654967b
--- /dev/null
+++ b/docs/guides/druid/reconfigure-tls/yamls/drops-add-tls.yaml
@@ -0,0 +1,23 @@
+apiVersion: ops.kubedb.com/v1alpha1
+kind: DruidOpsRequest
+metadata:
+ name: drops-add-tls
+ namespace: demo
+spec:
+ type: ReconfigureTLS
+ databaseRef:
+ name: druid-cluster
+ tls:
+ issuerRef:
+ name: druid-ca-issuer
+ kind: Issuer
+ apiGroup: "cert-manager.io"
+ certificates:
+ - alias: client
+ subject:
+ organizations:
+ - druid
+ organizationalUnits:
+ - client
+ timeout: 5m
+ apply: IfReady
diff --git a/docs/guides/druid/reconfigure-tls/yamls/drops-remove.yaml b/docs/guides/druid/reconfigure-tls/yamls/drops-remove.yaml
new file mode 100644
index 0000000000..af42b8d00d
--- /dev/null
+++ b/docs/guides/druid/reconfigure-tls/yamls/drops-remove.yaml
@@ -0,0 +1,11 @@
+apiVersion: ops.kubedb.com/v1alpha1
+kind: DruidOpsRequest
+metadata:
+ name: drops-remove
+ namespace: demo
+spec:
+ type: ReconfigureTLS
+ databaseRef:
+ name: druid-cluster
+ tls:
+ remove: true
diff --git a/docs/guides/druid/reconfigure-tls/yamls/drops-rotate.yaml b/docs/guides/druid/reconfigure-tls/yamls/drops-rotate.yaml
new file mode 100644
index 0000000000..f0be918f6f
--- /dev/null
+++ b/docs/guides/druid/reconfigure-tls/yamls/drops-rotate.yaml
@@ -0,0 +1,11 @@
+apiVersion: ops.kubedb.com/v1alpha1
+kind: DruidOpsRequest
+metadata:
+ name: druid-recon-tls-rotate
+ namespace: demo
+spec:
+ type: ReconfigureTLS
+ databaseRef:
+ name: druid-cluster
+ tls:
+ rotateCertificates: true
diff --git a/docs/guides/druid/reconfigure-tls/yamls/druid-ca-issuer.yaml b/docs/guides/druid/reconfigure-tls/yamls/druid-ca-issuer.yaml
new file mode 100644
index 0000000000..d6298c972c
--- /dev/null
+++ b/docs/guides/druid/reconfigure-tls/yamls/druid-ca-issuer.yaml
@@ -0,0 +1,8 @@
+apiVersion: cert-manager.io/v1
+kind: Issuer
+metadata:
+ name: druid-ca-issuer
+ namespace: demo
+spec:
+ ca:
+ secretName: druid-ca
diff --git a/docs/guides/druid/reconfigure-tls/yamls/druid-cluster.yaml b/docs/guides/druid/reconfigure-tls/yamls/druid-cluster.yaml
new file mode 100644
index 0000000000..6351c2ddda
--- /dev/null
+++ b/docs/guides/druid/reconfigure-tls/yamls/druid-cluster.yaml
@@ -0,0 +1,16 @@
+apiVersion: kubedb.com/v1alpha2
+kind: Druid
+metadata:
+ name: druid-cluster
+ namespace: demo
+spec:
+ version: 28.0.1
+ deepStorage:
+ type: s3
+ configSecret:
+ name: deep-storage-config
+ topology:
+ routers:
+ replicas: 1
+ deletionPolicy: Delete
+
diff --git a/docs/guides/druid/reconfigure-tls/yamls/druid-new-issuer.yaml b/docs/guides/druid/reconfigure-tls/yamls/druid-new-issuer.yaml
new file mode 100644
index 0000000000..ede5d5177c
--- /dev/null
+++ b/docs/guides/druid/reconfigure-tls/yamls/druid-new-issuer.yaml
@@ -0,0 +1,8 @@
+apiVersion: cert-manager.io/v1
+kind: Issuer
+metadata:
+ name: dr-new-issuer
+ namespace: demo
+spec:
+ ca:
+ secretName: druid-new-ca
\ No newline at end of file
diff --git a/docs/guides/druid/reconfigure-tls/yamls/druid-update-tls-issuer.yaml b/docs/guides/druid/reconfigure-tls/yamls/druid-update-tls-issuer.yaml
new file mode 100644
index 0000000000..e876f4c3b8
--- /dev/null
+++ b/docs/guides/druid/reconfigure-tls/yamls/druid-update-tls-issuer.yaml
@@ -0,0 +1,14 @@
+apiVersion: ops.kubedb.com/v1alpha1
+kind: DruidOpsRequest
+metadata:
+ name: drops-update-issuer
+ namespace: demo
+spec:
+ type: ReconfigureTLS
+ databaseRef:
+ name: druid-cluster
+ tls:
+ issuerRef:
+ name: dr-new-issuer
+ kind: Issuer
+ apiGroup: "cert-manager.io"
diff --git a/docs/guides/druid/reconfigure/_index.md b/docs/guides/druid/reconfigure/_index.md
new file mode 100644
index 0000000000..4c3cfdfe58
--- /dev/null
+++ b/docs/guides/druid/reconfigure/_index.md
@@ -0,0 +1,10 @@
+---
+title: Reconfigure
+menu:
+ docs_{{ .version }}:
+ identifier: guides-druid-reconfigure
+ name: Reconfigure
+ parent: guides-druid
+ weight: 110
+menu_name: docs_{{ .version }}
+---
\ No newline at end of file
diff --git a/docs/guides/druid/reconfigure/guide.md b/docs/guides/druid/reconfigure/guide.md
new file mode 100644
index 0000000000..cf85960a9c
--- /dev/null
+++ b/docs/guides/druid/reconfigure/guide.md
@@ -0,0 +1,704 @@
+---
+title: Reconfigure Druid Topology
+menu:
+ docs_{{ .version }}:
+ identifier: guides-druid-reconfigure-guide
+ name: Reconfigure Druid
+ parent: guides-druid-reconfigure
+ weight: 30
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Reconfigure Druid Cluster
+
+This guide will show you how to use `KubeDB` Ops-manager operator to reconfigure a Druid Topology cluster.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster.
+
+- Install `KubeDB` Provisioner and Ops-manager operator in your cluster following the steps [here](/docs/setup/README.md).
+
+- You should be familiar with the following `KubeDB` concepts:
+ - [Druid](/docs/guides/druid/concepts/druid.md)
+ - [Topology](/docs/guides/druid/clustering/overview/index.md)
+ - [DruidOpsRequest](/docs/guides/druid/concepts/druidopsrequest.md)
+ - [Reconfigure Overview](/docs/guides/druid/reconfigure/overview.md)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+> **Note:** YAML files used in this tutorial are stored in [/docs/guides/druid/reconfigure/yamls](/docs/guides/druid/reconfigure/yamls) directory of [kubedb/docs](https://github.com/kubedb/docs) repository.
+
+Now, we are going to deploy a `Druid` cluster using a version supported by the `KubeDB` operator. Then we are going to apply a `DruidOpsRequest` to reconfigure its configuration.
+
+### Prepare Druid Cluster
+
+Now, we are going to deploy a `Druid` topology cluster with version `28.0.1`.
+
+#### Create External Dependency (Deep Storage)
+
+Before proceeding further, we need to prepare deep storage, one of the external dependencies of Druid, which is used for storing segments. It is a storage mechanism that Apache Druid itself does not provide. **Amazon S3**, **Google Cloud Storage**, **Azure Blob Storage**, **S3-compatible storage** (like **MinIO**), or **HDFS** are generally convenient options for deep storage.
+
+In this tutorial, we will run a `minio-server` as deep storage in our local `kind` cluster using `minio-operator` and create a bucket named `druid` in it, which the deployed druid database will use.
+
+```bash
+
+$ helm repo add minio https://operator.min.io/
+$ helm repo update minio
+$ helm upgrade --install --namespace "minio-operator" --create-namespace "minio-operator" minio/operator --set operator.replicaCount=1
+
+$ helm upgrade --install --namespace "demo" --create-namespace druid-minio minio/tenant \
+--set tenant.pools[0].servers=1 \
+--set tenant.pools[0].volumesPerServer=1 \
+--set tenant.pools[0].size=1Gi \
+--set tenant.certificate.requestAutoCert=false \
+--set tenant.buckets[0].name="druid" \
+--set tenant.pools[0].name="default"
+
+```
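+
+Before moving on, make sure the MinIO tenant pods are up. A quick check is sketched below; the label value assumes the chart's default tenant name `myminio` (which matches the `myminio-hl` endpoint used in the secret below), and the output is illustrative.
+
+```bash
+$ kubectl get pods -n demo -l v1.min.io/tenant=myminio
+NAME                READY   STATUS    RESTARTS   AGE
+myminio-default-0   2/2     Running   0          2m
+```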
+
+Now we need to create a `Secret` named `deep-storage-config`. It contains the connection information that the Druid cluster will use to connect to the deep storage.
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+ name: deep-storage-config
+ namespace: demo
+stringData:
+ druid.storage.type: "s3"
+ druid.storage.bucket: "druid"
+ druid.storage.baseKey: "druid/segments"
+ druid.s3.accessKey: "minio"
+ druid.s3.secretKey: "minio123"
+ druid.s3.protocol: "http"
+ druid.s3.enablePathStyleAccess: "true"
+ druid.s3.endpoint.signingRegion: "us-east-1"
+ druid.s3.endpoint.url: "http://myminio-hl.demo.svc.cluster.local:9000/"
+```
+
+Let’s create the `deep-storage-config` Secret shown above:
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/druid/reconfigure/yamls/deep-storage-config.yaml
+secret/deep-storage-config created
+```
+
+Now, let's go ahead and create a Druid cluster.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Druid
+metadata:
+ name: druid-cluster
+ namespace: demo
+spec:
+ version: 28.0.1
+ deepStorage:
+ type: s3
+ configSecret:
+ name: deep-storage-config
+ topology:
+ routers:
+ replicas: 1
+ deletionPolicy: Delete
+```
+
+Let's create the `Druid` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/druid/reconfigure/yamls/druid-cluster.yaml
+druid.kubedb.com/druid-cluster created
+```
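+
+Wait for the cluster to reach the `Ready` state before applying any ops request. A sketch of what to watch for (the column values shown are illustrative):
+
+```bash
+$ kubectl get druid -n demo druid-cluster -w
+NAME            TYPE                  VERSION   STATUS         AGE
+druid-cluster   kubedb.com/v1alpha2   28.0.1    Provisioning   17s
+druid-cluster   kubedb.com/v1alpha2   28.0.1    Ready          2m
+```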
+
+### Reconfigure using config secret
+
+Say we want to change the default maximum number of tasks a MiddleManager can accept and the number of processing threads of the Historicals. Let's create the `middleManagers.properties` and `historicals.properties` files with our desired configurations.
+
+**middleManagers.properties:**
+
+```properties
+druid.worker.capacity=5
+```
+
+**historicals.properties:**
+
+```properties
+druid.processing.numThreads=3
+```
+
+Then, we will create a new secret with these configuration files.
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  name: new-config
+ namespace: demo
+stringData:
+ middleManagers.properties: |-
+ druid.worker.capacity=5
+ historicals.properties: |-
+ druid.processing.numThreads=3
+```
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/druid/reconfigure/yamls/config-secret.yaml
+secret/new-config created
+```
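+
+Optionally, verify that the secret holds the expected configuration before referencing it from an ops request. Note that keys containing dots must be escaped in the jsonpath expression:
+
+```bash
+$ kubectl get secret -n demo new-config -o jsonpath='{.data.middleManagers\.properties}' | base64 -d
+druid.worker.capacity=5
+```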
+
+### Check Current Configuration
+
+Before creating the `DruidOpsRequest`, let's exec into one of the Druid middleManagers pods that we have created and check the default configuration:
+
+Exec into the Druid middleManagers:
+
+```bash
+$ kubectl exec -it -n demo druid-cluster-middlemanagers-0 -- bash
+Defaulted container "druid" out of: druid, init-druid (init)
+bash-5.1$
+```
+
+Now, execute the following commands to see the configurations:
+```bash
+bash-5.1$ cat conf/druid/cluster/data/middleManager/runtime.properties | grep druid.worker.capacity
+druid.worker.capacity=2
+```
+Here, we can see that the default value of `druid.worker.capacity` is `2` for the middleManagers.
+
+Now, let's exec into one of the Druid historicals pods that we have created and check its default configuration:
+
+Exec into the Druid historicals:
+
+```bash
+$ kubectl exec -it -n demo druid-cluster-historicals-0 -- bash
+Defaulted container "druid" out of: druid, init-druid (init)
+bash-5.1$
+```
+
+Now, execute the following command to see the processing thread configuration:
+```bash
+bash-5.1$ cat conf/druid/cluster/data/historical/runtime.properties | grep druid.processing.numThreads
+druid.processing.numThreads=2
+```
+
+Here, we can see that the default value of `druid.processing.numThreads` is `2` for the historicals.
+
+### Check Configuration from Druid UI
+
+You can also check the current configuration from the Druid UI. To do so, follow the steps below:
+
+First port-forward the port `8888` to local machine:
+
+```bash
+$ kubectl port-forward -n demo svc/druid-cluster-routers 8888
+Forwarding from 127.0.0.1:8888 -> 8888
+Forwarding from [::1]:8888 -> 8888
+```
+
+
+Now open `http://localhost:8888` in any browser, and you will be prompted for the credentials of the Druid database. You can get the credentials generated by the KubeDB operator for your Druid database by following the steps below.
+
+**Connection information:**
+
+- Username:
+
+ ```bash
+ $ kubectl get secret -n demo druid-cluster-auth -o jsonpath='{.data.username}' | base64 -d
+ admin
+ ```
+
+- Password:
+
+ ```bash
+ $ kubectl get secret -n demo druid-cluster-auth -o jsonpath='{.data.password}' | base64 -d
+ LzJtVRX5E8MorFaf
+ ```
+
+After providing the credentials correctly, you should be able to access the web console as shown below.
+
+
+
+
+
+You can see that there are 2 task slots, reflecting the current configuration `druid.worker.capacity=2`.
+
+
+#### Create DruidOpsRequest
+
+Now, we will apply this secret to the cluster using a `DruidOpsRequest` CR. The `DruidOpsRequest` YAML is given below,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: DruidOpsRequest
+metadata:
+ name: reconfigure-drops
+ namespace: demo
+spec:
+ type: Reconfigure
+ databaseRef:
+ name: druid-cluster
+ configuration:
+ configSecret:
+ name: new-config
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are reconfiguring the `druid-cluster` database.
+- `spec.type` specifies that we are performing `Reconfigure` on our database.
+- `spec.configuration.configSecret.name` specifies the name of the new secret.
+
+Let's create the `DruidOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/druid/reconfigure/yamls/reconfigure-druid-ops.yaml
+druidopsrequest.ops.kubedb.com/reconfigure-drops created
+```
+
+#### Check new configuration
+
+If everything goes well, `KubeDB` Ops-manager operator will update the `configSecret` of `Druid` object.
+
+Let's wait for `DruidOpsRequest` to be `Successful`. Run the following command to watch `DruidOpsRequest` CR,
+
+```bash
+$ kubectl get druidopsrequests -n demo
+NAME TYPE STATUS AGE
+reconfigure-drops Reconfigure Successful 4m55s
+```
+
+We can see from the above output that the `DruidOpsRequest` has succeeded. If we describe the `DruidOpsRequest` we will get an overview of the steps that were followed to reconfigure the database.
+
+```bash
+$ kubectl describe druidopsrequest -n demo reconfigure-drops
+Name: reconfigure-drops
+Namespace: demo
+Labels:
+Annotations:
+API Version: ops.kubedb.com/v1alpha1
+Kind: DruidOpsRequest
+Metadata:
+ Creation Timestamp: 2024-08-02T05:08:37Z
+ Generation: 1
+ Resource Version: 332491
+ UID: b6e8cb1b-d29f-445e-bb01-60d29012c7eb
+Spec:
+ Apply: IfReady
+ Configuration:
+ Config Secret:
+ Name: new-kf-topology-custom-config
+ Database Ref:
+ Name: druid-prod
+ Timeout: 5m
+ Type: Reconfigure
+Status:
+ Conditions:
+ Last Transition Time: 2024-08-02T05:08:37Z
+ Message: Druid ops-request has started to reconfigure druid nodes
+ Observed Generation: 1
+ Reason: Reconfigure
+ Status: True
+ Type: Reconfigure
+ Last Transition Time: 2024-08-02T05:08:45Z
+ Message: check reconcile; ConditionStatus:False
+ Observed Generation: 1
+ Status: False
+ Type: CheckReconcile
+ Last Transition Time: 2024-08-02T05:09:42Z
+ Message: successfully reconciled the Druid with new configure
+ Observed Generation: 1
+ Reason: UpdatePetSets
+ Status: True
+ Type: UpdatePetSets
+ Last Transition Time: 2024-08-02T05:09:47Z
+ Message: get pod; ConditionStatus:True; PodName:druid-prod-historicals-0
+ Observed Generation: 1
+ Status: True
+ Type: GetPod--druid-prod-historicals-0
+ Last Transition Time: 2024-08-02T05:09:47Z
+ Message: evict pod; ConditionStatus:True; PodName:druid-prod-historicals-0
+ Observed Generation: 1
+ Status: True
+ Type: EvictPod--druid-prod-historicals-0
+ Last Transition Time: 2024-08-02T05:10:02Z
+ Message: check pod running; ConditionStatus:True; PodName:druid-prod-historicals-0
+ Observed Generation: 1
+ Status: True
+ Type: CheckPodRunning--druid-prod-historicals-0
+ Last Transition Time: 2024-08-02T05:10:07Z
+ Message: get pod; ConditionStatus:True; PodName:druid-prod-historicals-1
+ Observed Generation: 1
+ Status: True
+ Type: GetPod--druid-prod-historicals-1
+ Last Transition Time: 2024-08-02T05:10:07Z
+ Message: evict pod; ConditionStatus:True; PodName:druid-prod-historicals-1
+ Observed Generation: 1
+ Status: True
+ Type: EvictPod--druid-prod-historicals-1
+ Last Transition Time: 2024-08-02T05:10:22Z
+ Message: check pod running; ConditionStatus:True; PodName:druid-prod-historicals-1
+ Observed Generation: 1
+ Status: True
+ Type: CheckPodRunning--druid-prod-historicals-1
+ Last Transition Time: 2024-08-02T05:10:27Z
+ Message: get pod; ConditionStatus:True; PodName:druid-prod-middleManagers-0
+ Observed Generation: 1
+ Status: True
+ Type: GetPod--druid-prod-middleManagers-0
+ Last Transition Time: 2024-08-02T05:10:27Z
+ Message: evict pod; ConditionStatus:True; PodName:druid-prod-middleManagers-0
+ Observed Generation: 1
+ Status: True
+ Type: EvictPod--druid-prod-middleManagers-0
+ Last Transition Time: 2024-08-02T05:11:12Z
+ Message: check pod running; ConditionStatus:True; PodName:druid-prod-middleManagers-0
+ Observed Generation: 1
+ Status: True
+ Type: CheckPodRunning--druid-prod-middleManagers-0
+ Last Transition Time: 2024-08-02T05:11:17Z
+ Message: get pod; ConditionStatus:True; PodName:druid-prod-middleManagers-1
+ Observed Generation: 1
+ Status: True
+ Type: GetPod--druid-prod-middleManagers-1
+ Last Transition Time: 2024-08-02T05:11:17Z
+ Message: evict pod; ConditionStatus:True; PodName:druid-prod-middleManagers-1
+ Observed Generation: 1
+ Status: True
+ Type: EvictPod--druid-prod-middleManagers-1
+ Last Transition Time: 2024-08-02T05:11:32Z
+ Message: check pod running; ConditionStatus:True; PodName:druid-prod-middleManagers-1
+ Observed Generation: 1
+ Status: True
+ Type: CheckPodRunning--druid-prod-middleManagers-1
+ Last Transition Time: 2024-08-02T05:11:37Z
+ Message: Successfully restarted all nodes
+ Observed Generation: 1
+ Reason: RestartNodes
+ Status: True
+ Type: RestartNodes
+ Last Transition Time: 2024-08-02T05:11:39Z
+ Message: Successfully completed reconfigure druid
+ Observed Generation: 1
+ Reason: Successful
+ Status: True
+ Type: Successful
+ Observed Generation: 1
+ Phase: Successful
+Events:
+ Type Reason Age From Message
+ ---- ------ ---- ---- -------
+ Normal Starting 3m7s KubeDB Ops-manager Operator Start processing for DruidOpsRequest: demo/reconfigure-drops
+ Normal Starting 3m7s KubeDB Ops-manager Operator Pausing Druid databse: demo/druid-prod
+ Normal Successful 3m7s KubeDB Ops-manager Operator Successfully paused Druid database: demo/druid-prod for DruidOpsRequest: reconfigure-drops
+ Warning check reconcile; ConditionStatus:False 2m59s KubeDB Ops-manager Operator check reconcile; ConditionStatus:False
+ Normal UpdatePetSets 2m2s KubeDB Ops-manager Operator successfully reconciled the Druid with new configure
+ Warning get pod; ConditionStatus:True; PodName:druid-prod-historicals-0 117s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:druid-prod-historicals-0
+ Warning evict pod; ConditionStatus:True; PodName:druid-prod-historicals-0 117s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:druid-prod-historicals-0
+ Warning check pod running; ConditionStatus:False; PodName:druid-prod-historicals-0 112s KubeDB Ops-manager Operator check pod running; ConditionStatus:False; PodName:druid-prod-historicals-0
+ Warning check pod running; ConditionStatus:True; PodName:druid-prod-historicals-0 102s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:druid-prod-historicals-0
+ Warning get pod; ConditionStatus:True; PodName:druid-prod-historicals-1 97s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:druid-prod-historicals-1
+ Warning evict pod; ConditionStatus:True; PodName:druid-prod-historicals-1 97s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:druid-prod-historicals-1
+ Warning check pod running; ConditionStatus:False; PodName:druid-prod-historicals-1 92s KubeDB Ops-manager Operator check pod running; ConditionStatus:False; PodName:druid-prod-historicals-1
+ Warning check pod running; ConditionStatus:True; PodName:druid-prod-historicals-1 82s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:druid-prod-historicals-1
+ Warning get pod; ConditionStatus:True; PodName:druid-prod-middleManagers-0 77s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:druid-prod-middleManagers-0
+ Warning evict pod; ConditionStatus:True; PodName:druid-prod-middleManagers-0 77s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:druid-prod-middleManagers-0
+ Warning check pod running; ConditionStatus:False; PodName:druid-prod-middleManagers-0 72s KubeDB Ops-manager Operator check pod running; ConditionStatus:False; PodName:druid-prod-middleManagers-0
+ Warning check pod running; ConditionStatus:True; PodName:druid-prod-middleManagers-0 32s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:druid-prod-middleManagers-0
+ Warning get pod; ConditionStatus:True; PodName:druid-prod-middleManagers-1 27s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:druid-prod-middleManagers-1
+ Warning evict pod; ConditionStatus:True; PodName:druid-prod-middleManagers-1 27s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:druid-prod-middleManagers-1
+ Warning check pod running; ConditionStatus:False; PodName:druid-prod-middleManagers-1 22s KubeDB Ops-manager Operator check pod running; ConditionStatus:False; PodName:druid-prod-middleManagers-1
+ Warning check pod running; ConditionStatus:True; PodName:druid-prod-middleManagers-1 12s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:druid-prod-middleManagers-1
+ Normal RestartNodes 7s KubeDB Ops-manager Operator Successfully restarted all nodes
+ Normal Starting 5s KubeDB Ops-manager Operator Resuming Druid database: demo/druid-prod
+ Normal Successful 5s KubeDB Ops-manager Operator Successfully resumed Druid database: demo/druid-prod for DruidOpsRequest: reconfigure-drops
+```
+
+Now, let's exec into one of the Druid middleManagers pods and check whether the new configuration we have provided has been applied:
+
+```bash
+$ kubectl exec -it -n demo druid-cluster-middlemanagers-0 -- bash
+Defaulted container "druid" out of: druid, init-druid (init)
+bash-5.1$ cat conf/druid/cluster/data/middleManager/runtime.properties | grep druid.worker.capacity
+druid.worker.capacity=5
+```
+
+As we can see, the value of `druid.worker.capacity` has been changed from the default `2` to `5`. So the reconfiguration of the cluster using the config secret has been successful.
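+
+You can optionally cross-check the same value through the Druid API. The sketch below queries the Overlord's worker listing through the router proxy; it assumes the port-forward to `svc/druid-cluster-routers` on port `8888` from the UI section above is still running, and the exact JSON shape may vary between Druid versions.
+
+```bash
+$ PASSWORD=$(kubectl get secret -n demo druid-cluster-auth -o jsonpath='{.data.password}' | base64 -d)
+$ curl -s -u admin:$PASSWORD http://localhost:8888/druid/indexer/v1/workers | grep -o '"capacity":[0-9]*'
+"capacity":5
+```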
+
+
+### Reconfigure using apply config
+
+Now we will reconfigure this cluster again to set `log.retention.hours` to `150`. This time we won't use a new secret; instead, we will use the `applyConfig` field of the `DruidOpsRequest`, which merges the new config into the existing secret.
+
+#### Create DruidOpsRequest
+
+Now, we will use the new configuration in the `applyConfig` field in the `DruidOpsRequest` CR. The `DruidOpsRequest` yaml is given below,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: DruidOpsRequest
+metadata:
+ name: kfops-reconfigure-apply-topology
+ namespace: demo
+spec:
+ type: Reconfigure
+ databaseRef:
+ name: druid-prod
+ configuration:
+ applyConfig:
+ middleManagers.properties: |-
+ log.retention.hours=150
+ historicals.properties: |-
+ historicals.quorum.election.timeout.ms=4000
+ historicals.quorum.fetch.timeout.ms=5000
+ timeout: 5m
+ apply: IfReady
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are reconfiguring `druid-prod` cluster.
+- `spec.type` specifies that we are performing `Reconfigure` on druid.
+- `spec.configuration.applyConfig` specifies the new configuration that will be merged in the existing secret.
+
+Let's create the `DruidOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/druid/reconfigure/druid-reconfigure-apply-topology.yaml
+druidopsrequest.ops.kubedb.com/kfops-reconfigure-apply-topology created
+```
+
+#### Verify new configuration
+
+If everything goes well, `KubeDB` Ops-manager operator will merge this new config with the existing configuration.
+
+Let's wait for `DruidOpsRequest` to be `Successful`. Run the following command to watch `DruidOpsRequest` CR,
+
+```bash
+$ kubectl get druidopsrequests -n demo kfops-reconfigure-apply-topology
+NAME TYPE STATUS AGE
+kfops-reconfigure-apply-topology Reconfigure Successful 55s
+```
+
+We can see from the above output that the `DruidOpsRequest` has succeeded. If we describe the `DruidOpsRequest` we will get an overview of the steps that were followed to reconfigure the cluster.
+
+```bash
+$ kubectl describe druidopsrequest -n demo kfops-reconfigure-apply-topology
+Name: kfops-reconfigure-apply-topology
+Namespace: demo
+Labels:
+Annotations:
+API Version: ops.kubedb.com/v1alpha1
+Kind: DruidOpsRequest
+Metadata:
+ Creation Timestamp: 2024-08-02T05:14:42Z
+ Generation: 1
+ Resource Version: 332996
+ UID: 551d2c92-9431-47a7-a699-8f8115131b49
+Spec:
+ Apply: IfReady
+ Configuration:
+ Apply Config:
+ middleManagers.properties: log.retention.hours=150
+ historicals.properties: historicals.quorum.election.timeout.ms=4000
+historicals.quorum.fetch.timeout.ms=5000
+ Database Ref:
+ Name: druid-prod
+ Timeout: 5m
+ Type: Reconfigure
+Status:
+ Conditions:
+ Last Transition Time: 2024-08-02T05:14:42Z
+ Message: Druid ops-request has started to reconfigure druid nodes
+ Observed Generation: 1
+ Reason: Reconfigure
+ Status: True
+ Type: Reconfigure
+ Last Transition Time: 2024-08-02T05:14:45Z
+ Message: Successfully prepared user provided custom config secret
+ Observed Generation: 1
+ Reason: PrepareCustomConfig
+ Status: True
+ Type: PrepareCustomConfig
+ Last Transition Time: 2024-08-02T05:14:52Z
+ Message: successfully reconciled the Druid with new configure
+ Observed Generation: 1
+ Reason: UpdatePetSets
+ Status: True
+ Type: UpdatePetSets
+ Last Transition Time: 2024-08-02T05:14:57Z
+ Message: get pod; ConditionStatus:True; PodName:druid-prod-historicals-0
+ Observed Generation: 1
+ Status: True
+ Type: GetPod--druid-prod-historicals-0
+ Last Transition Time: 2024-08-02T05:14:57Z
+ Message: evict pod; ConditionStatus:True; PodName:druid-prod-historicals-0
+ Observed Generation: 1
+ Status: True
+ Type: EvictPod--druid-prod-historicals-0
+ Last Transition Time: 2024-08-02T05:15:07Z
+ Message: check pod running; ConditionStatus:True; PodName:druid-prod-historicals-0
+ Observed Generation: 1
+ Status: True
+ Type: CheckPodRunning--druid-prod-historicals-0
+ Last Transition Time: 2024-08-02T05:15:12Z
+ Message: get pod; ConditionStatus:True; PodName:druid-prod-historicals-1
+ Observed Generation: 1
+ Status: True
+ Type: GetPod--druid-prod-historicals-1
+ Last Transition Time: 2024-08-02T05:15:12Z
+ Message: evict pod; ConditionStatus:True; PodName:druid-prod-historicals-1
+ Observed Generation: 1
+ Status: True
+ Type: EvictPod--druid-prod-historicals-1
+ Last Transition Time: 2024-08-02T05:15:27Z
+ Message: check pod running; ConditionStatus:True; PodName:druid-prod-historicals-1
+ Observed Generation: 1
+ Status: True
+ Type: CheckPodRunning--druid-prod-historicals-1
+ Last Transition Time: 2024-08-02T05:15:32Z
+ Message: get pod; ConditionStatus:True; PodName:druid-prod-middleManagers-0
+ Observed Generation: 1
+ Status: True
+ Type: GetPod--druid-prod-middleManagers-0
+ Last Transition Time: 2024-08-02T05:15:32Z
+ Message: evict pod; ConditionStatus:True; PodName:druid-prod-middleManagers-0
+ Observed Generation: 1
+ Status: True
+ Type: EvictPod--druid-prod-middleManagers-0
+ Last Transition Time: 2024-08-02T05:16:07Z
+ Message: check pod running; ConditionStatus:True; PodName:druid-prod-middleManagers-0
+ Observed Generation: 1
+ Status: True
+ Type: CheckPodRunning--druid-prod-middleManagers-0
+ Last Transition Time: 2024-08-02T05:16:12Z
+ Message: get pod; ConditionStatus:True; PodName:druid-prod-middleManagers-1
+ Observed Generation: 1
+ Status: True
+ Type: GetPod--druid-prod-middleManagers-1
+ Last Transition Time: 2024-08-02T05:16:12Z
+ Message: evict pod; ConditionStatus:True; PodName:druid-prod-middleManagers-1
+ Observed Generation: 1
+ Status: True
+ Type: EvictPod--druid-prod-middleManagers-1
+ Last Transition Time: 2024-08-02T05:16:27Z
+ Message: check pod running; ConditionStatus:True; PodName:druid-prod-middleManagers-1
+ Observed Generation: 1
+ Status: True
+ Type: CheckPodRunning--druid-prod-middleManagers-1
+ Last Transition Time: 2024-08-02T05:16:32Z
+ Message: Successfully restarted all nodes
+ Observed Generation: 1
+ Reason: RestartNodes
+ Status: True
+ Type: RestartNodes
+ Last Transition Time: 2024-08-02T05:16:35Z
+ Message: Successfully completed reconfigure druid
+ Observed Generation: 1
+ Reason: Successful
+ Status: True
+ Type: Successful
+ Observed Generation: 1
+ Phase: Successful
+Events:
+ Type Reason Age From Message
+ ---- ------ ---- ---- -------
+ Normal Starting 2m6s KubeDB Ops-manager Operator Start processing for DruidOpsRequest: demo/kfops-reconfigure-apply-topology
+ Normal Starting 2m6s KubeDB Ops-manager Operator Pausing Druid databse: demo/druid-prod
+ Normal Successful 2m6s KubeDB Ops-manager Operator Successfully paused Druid database: demo/druid-prod for DruidOpsRequest: kfops-reconfigure-apply-topology
+ Normal UpdatePetSets 116s KubeDB Ops-manager Operator successfully reconciled the Druid with new configure
+ Warning get pod; ConditionStatus:True; PodName:druid-prod-historicals-0 111s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:druid-prod-historicals-0
+ Warning evict pod; ConditionStatus:True; PodName:druid-prod-historicals-0 111s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:druid-prod-historicals-0
+ Warning check pod running; ConditionStatus:False; PodName:druid-prod-historicals-0 106s KubeDB Ops-manager Operator check pod running; ConditionStatus:False; PodName:druid-prod-historicals-0
+ Warning check pod running; ConditionStatus:True; PodName:druid-prod-historicals-0 101s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:druid-prod-historicals-0
+ Warning get pod; ConditionStatus:True; PodName:druid-prod-historicals-1 96s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:druid-prod-historicals-1
+ Warning evict pod; ConditionStatus:True; PodName:druid-prod-historicals-1 96s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:druid-prod-historicals-1
+ Warning check pod running; ConditionStatus:False; PodName:druid-prod-historicals-1 91s KubeDB Ops-manager Operator check pod running; ConditionStatus:False; PodName:druid-prod-historicals-1
+ Warning check pod running; ConditionStatus:True; PodName:druid-prod-historicals-1 81s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:druid-prod-historicals-1
+ Warning get pod; ConditionStatus:True; PodName:druid-prod-middleManagers-0 76s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:druid-prod-middleManagers-0
+ Warning evict pod; ConditionStatus:True; PodName:druid-prod-middleManagers-0 76s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:druid-prod-middleManagers-0
+ Warning check pod running; ConditionStatus:False; PodName:druid-prod-middleManagers-0 71s KubeDB Ops-manager Operator check pod running; ConditionStatus:False; PodName:druid-prod-middleManagers-0
+ Warning check pod running; ConditionStatus:True; PodName:druid-prod-middleManagers-0 41s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:druid-prod-middleManagers-0
+ Warning get pod; ConditionStatus:True; PodName:druid-prod-middleManagers-1 36s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:druid-prod-middleManagers-1
+ Warning evict pod; ConditionStatus:True; PodName:druid-prod-middleManagers-1 36s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:druid-prod-middleManagers-1
+ Warning check pod running; ConditionStatus:False; PodName:druid-prod-middleManagers-1 31s KubeDB Ops-manager Operator check pod running; ConditionStatus:False; PodName:druid-prod-middleManagers-1
+ Warning check pod running; ConditionStatus:True; PodName:druid-prod-middleManagers-1 21s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:druid-prod-middleManagers-1
+ Normal RestartNodes 15s KubeDB Ops-manager Operator Successfully restarted all nodes
+ Normal Starting 14s KubeDB Ops-manager Operator Resuming Druid database: demo/druid-prod
+ Normal Successful 14s KubeDB Ops-manager Operator Successfully resumed Druid database: demo/druid-prod for DruidOpsRequest: kfops-reconfigure-apply-topology
+```
+
+Let's exec into one of the updated Druid middleManagers pods and check whether the new configuration has been applied:
+
+Exec into the Druid middleManagers:
+
+```bash
+$ kubectl exec -it -n demo druid-with-config-middleManagers-0 -- bash
+Defaulted container "druid" out of: druid, init-druid (init)
+bash-5.1$
+```
+
+Now, execute the following commands to see the configurations:
+```bash
+bash-5.1$ cat conf/druid/cluster/data/middleManager/runtime.properties | grep druid.worker.capacity
+druid.worker.capacity=5
+```
+Here, we can see that our given configuration has been applied to the middleManagers of the Druid cluster.
+
+Now, let's exec into one of the updated Druid historicals pods and check whether the new configuration has been applied:
+
+Exec into the Druid historicals:
+
+```bash
+$ kubectl exec -it -n demo druid-with-config-historicals-0 -- bash
+Defaulted container "druid" out of: druid, init-druid (init)
+bash-5.1$
+```
+
+Now, execute the following commands to see the metadata storage directory:
+```bash
+bash-5.1$ cat conf/druid/cluster/data/historical/runtime.properties | grep druid.processing.numThreads
+druid.processing.numThreads=3
+```
+
+Here, we can see that our given configuration is applied to the historicals.
+
+### Verify Configuration Change from Druid UI
+
+You can access the UI similarly by port-forwarding the routers service, as described in [Check Configuration from Druid UI](/docs/guides/druid/reconfigure/#CheckConfigurationfromDruidUI).
+
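+If the port-forward is not already running, you can start it again. The command below assumes the routers service follows the `<druid-name>-routers` naming used in these guides; adjust the service name to match your cluster:
+
+```bash
+$ kubectl port-forward -n demo svc/druid-cluster-routers 8888
+```
+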
+You should be able to see the following changes in the UI:
+
+
+
+
+
+You can see that there are 5 task slots, reflecting our custom configuration of `druid.worker.capacity=5`.
+
+## Cleaning Up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete dr -n demo druid-cluster
+kubectl delete druidopsrequest -n demo reconfigure-drops
+kubectl delete secret -n demo new-config
+kubectl delete ns demo
+```
+
+## Next Steps
+
+- Detail concepts of [Druid object](/docs/guides/druid/concepts/druid.md).
+- Different Druid topology clustering modes [here](/docs/guides/druid/clustering/_index.md).
+- Monitor your Druid database with KubeDB using [out-of-the-box Prometheus operator](/docs/guides/druid/monitoring/using-prometheus-operator.md).
+
+[//]: # (- Monitor your Druid database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/guides/druid/monitoring/using-builtin-prometheus.md).)
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).
diff --git a/docs/guides/druid/reconfigure/images/druid-ui.png b/docs/guides/druid/reconfigure/images/druid-ui.png
new file mode 100644
index 0000000000..af798ee7b4
Binary files /dev/null and b/docs/guides/druid/reconfigure/images/druid-ui.png differ
diff --git a/docs/guides/druid/reconfigure/images/reconfigure.svg b/docs/guides/druid/reconfigure/images/reconfigure.svg
new file mode 100644
index 0000000000..84526d2735
--- /dev/null
+++ b/docs/guides/druid/reconfigure/images/reconfigure.svg
@@ -0,0 +1,120 @@
+
diff --git a/docs/guides/druid/reconfigure/overview.md b/docs/guides/druid/reconfigure/overview.md
new file mode 100644
index 0000000000..e2ea2b268e
--- /dev/null
+++ b/docs/guides/druid/reconfigure/overview.md
@@ -0,0 +1,54 @@
+---
+title: Reconfiguring Druid
+menu:
+ docs_{{ .version }}:
+ identifier: guides-druid-reconfigure-overview
+ name: Overview
+ parent: guides-druid-reconfigure
+ weight: 10
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Reconfiguring Druid
+
+This guide will give an overview of how the KubeDB Ops-manager operator reconfigures `Druid` components such as Coordinators, Overlords, Historicals, MiddleManagers, Brokers, and Routers.
+
+## Before You Begin
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [Druid](/docs/guides/druid/concepts/druid.md)
+  - [DruidOpsRequest](/docs/guides/druid/concepts/druidopsrequest.md)
+
+## How Reconfiguring Druid Process Works
+
+The following diagram shows how KubeDB Ops-manager operator reconfigures `Druid` components. Open the image in a new tab to see the enlarged version.
+
+
+
+The Reconfiguring Druid process consists of the following steps:
+
+1. At first, a user creates a `Druid` Custom Resource (CR).
+
+2. `KubeDB` Provisioner operator watches the `Druid` CR.
+
+3. When the operator finds a `Druid` CR, it creates the required number of `PetSets` and other necessary resources like Secrets, Services, etc.
+
+4. Then, in order to reconfigure the various components (i.e. Coordinators, Overlords, Historicals, MiddleManagers, Brokers, Routers) of the `Druid` cluster, the user creates a `DruidOpsRequest` CR with the desired information (a minimal example is shown after this list).
+
+5. `KubeDB` Ops-manager operator watches the `DruidOpsRequest` CR.
+
+6. When it finds a `DruidOpsRequest` CR, it halts the `Druid` object which is referred from the `DruidOpsRequest`. So, the `KubeDB` Provisioner operator doesn't perform any operations on the `Druid` object during the reconfiguring process.
+
+7. Then the `KubeDB` Ops-manager operator will either replace the existing configuration with the provided configuration or merge the new configuration into the existing one, according to the `DruidOpsRequest` CR.
+
+8. Then the `KubeDB` Ops-manager operator will restart the related PetSet Pods so that they restart with the new configuration defined in the `DruidOpsRequest` CR.
+
+9. After the successful reconfiguring of the `Druid` components, the `KubeDB` Ops-manager operator resumes the `Druid` object so that the `KubeDB` Provisioner operator resumes its usual operations.
+
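+As a quick reference, a minimal `Reconfigure` type `DruidOpsRequest` that points the cluster to a new config `Secret` looks like the following sketch (the database and secret names here are the ones used in the reconfigure guide):
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: DruidOpsRequest
+metadata:
+  name: reconfigure-drops
+  namespace: demo
+spec:
+  type: Reconfigure
+  databaseRef:
+    name: druid-cluster
+  configuration:
+    configSecret:
+      name: new-config
+```
+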
+In the next docs, we are going to show a step-by-step guide on reconfiguring Druid components using `DruidOpsRequest` CRD.
\ No newline at end of file
diff --git a/docs/guides/druid/reconfigure/yamls/config-secret.yaml b/docs/guides/druid/reconfigure/yamls/config-secret.yaml
new file mode 100644
index 0000000000..6067ee7dd2
--- /dev/null
+++ b/docs/guides/druid/reconfigure/yamls/config-secret.yaml
@@ -0,0 +1,10 @@
+apiVersion: v1
+kind: Secret
+metadata:
+ name: new-config
+ namespace: demo
+stringData:
+ middleManagers.properties: |-
+ druid.worker.capacity=5
+ historicals.properties: |-
+ druid.processing.numThreads=3
diff --git a/docs/guides/druid/reconfigure/yamls/deep-storage-config.yaml b/docs/guides/druid/reconfigure/yamls/deep-storage-config.yaml
new file mode 100644
index 0000000000..3612595828
--- /dev/null
+++ b/docs/guides/druid/reconfigure/yamls/deep-storage-config.yaml
@@ -0,0 +1,16 @@
+apiVersion: v1
+kind: Secret
+metadata:
+ name: deep-storage-config
+ namespace: demo
+stringData:
+ druid.storage.type: "s3"
+ druid.storage.bucket: "druid"
+ druid.storage.baseKey: "druid/segments"
+ druid.s3.accessKey: "minio"
+ druid.s3.secretKey: "minio123"
+ druid.s3.protocol: "http"
+ druid.s3.enablePathStyleAccess: "true"
+ druid.s3.endpoint.signingRegion: "us-east-1"
+ druid.s3.endpoint.url: "http://myminio-hl.demo.svc.cluster.local:9000/"
+
diff --git a/docs/guides/druid/reconfigure/yamls/druid-cluster.yaml b/docs/guides/druid/reconfigure/yamls/druid-cluster.yaml
new file mode 100644
index 0000000000..f7a695b062
--- /dev/null
+++ b/docs/guides/druid/reconfigure/yamls/druid-cluster.yaml
@@ -0,0 +1,15 @@
+apiVersion: kubedb.com/v1alpha2
+kind: Druid
+metadata:
+ name: druid-cluster
+ namespace: demo
+spec:
+ version: 28.0.1
+ deepStorage:
+ type: s3
+ configSecret:
+ name: deep-storage-config
+ topology:
+ routers:
+ replicas: 1
+ deletionPolicy: WipeOut
diff --git a/docs/guides/druid/reconfigure/yamls/reconfigure-druid-ops.yaml b/docs/guides/druid/reconfigure/yamls/reconfigure-druid-ops.yaml
new file mode 100644
index 0000000000..cc5f789a54
--- /dev/null
+++ b/docs/guides/druid/reconfigure/yamls/reconfigure-druid-ops.yaml
@@ -0,0 +1,12 @@
+apiVersion: ops.kubedb.com/v1alpha1
+kind: DruidOpsRequest
+metadata:
+ name: reconfigure-drops
+ namespace: demo
+spec:
+ type: Reconfigure
+ databaseRef:
+ name: druid-cluster
+ configuration:
+ configSecret:
+ name: new-config
\ No newline at end of file
diff --git a/docs/guides/druid/restart/_index.md b/docs/guides/druid/restart/_index.md
new file mode 100644
index 0000000000..7d23da5218
--- /dev/null
+++ b/docs/guides/druid/restart/_index.md
@@ -0,0 +1,10 @@
+---
+title: Restart Druid
+menu:
+ docs_{{ .version }}:
+ identifier: guides-druid-restart
+ name: Restart
+ parent: guides-druid
+ weight: 130
+menu_name: docs_{{ .version }}
+---
\ No newline at end of file
diff --git a/docs/guides/druid/restart/guide.md b/docs/guides/druid/restart/guide.md
new file mode 100644
index 0000000000..b172825913
--- /dev/null
+++ b/docs/guides/druid/restart/guide.md
@@ -0,0 +1,283 @@
+---
+title: Restart Druid
+menu:
+ docs_{{ .version }}:
+ identifier: guides-druid-restart-guide
+ name: Restart Druid
+ parent: guides-druid-restart
+ weight: 10
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Restart Druid
+
+KubeDB supports restarting the Druid database via a DruidOpsRequest. Restarting is useful if some pods get stuck in a phase or are not working correctly. This tutorial will show you how to do that.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/setup/README.md).
+
+- To keep things isolated, we will use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+> Note: YAML files used in this tutorial are stored in [docs/examples/druid](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/druid) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Deploy Druid
+
+In this section, we are going to deploy a Druid database using KubeDB.
+
+### Create External Dependency (Deep Storage)
+
+Before proceeding further, we need to prepare deep storage, which is one of the external dependencies of Druid and is used for storing segments. Deep storage is a storage mechanism that Apache Druid itself does not provide. **Amazon S3**, **Google Cloud Storage**, **Azure Blob Storage**, **S3-compatible storage** (like **MinIO**), or **HDFS** are generally convenient options for deep storage.
+
+In this tutorial, we will run a `minio-server` as deep storage in our local `kind` cluster using `minio-operator` and create a bucket named `druid` in it, which the deployed druid database will use.
+
+```bash
+
+$ helm repo add minio https://operator.min.io/
+$ helm repo update minio
+$ helm upgrade --install --namespace "minio-operator" --create-namespace "minio-operator" minio/operator --set operator.replicaCount=1
+
+$ helm upgrade --install --namespace "demo" --create-namespace druid-minio minio/tenant \
+--set tenant.pools[0].servers=1 \
+--set tenant.pools[0].volumesPerServer=1 \
+--set tenant.pools[0].size=1Gi \
+--set tenant.certificate.requestAutoCert=false \
+--set tenant.buckets[0].name="druid" \
+--set tenant.pools[0].name="default"
+
+```
+
+Now we need to create a `Secret` named `deep-storage-config`. It contains the connection information that the Druid database will use to connect to the deep storage.
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+ name: deep-storage-config
+ namespace: demo
+stringData:
+ druid.storage.type: "s3"
+ druid.storage.bucket: "druid"
+ druid.storage.baseKey: "druid/segments"
+ druid.s3.accessKey: "minio"
+ druid.s3.secretKey: "minio123"
+ druid.s3.protocol: "http"
+ druid.s3.enablePathStyleAccess: "true"
+ druid.s3.endpoint.signingRegion: "us-east-1"
+ druid.s3.endpoint.url: "http://myminio-hl.demo.svc.cluster.local:9000/"
+```
+
+Let’s create the `deep-storage-config` Secret shown above:
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/druid/restart/yamls/deep-storage-config.yaml
+secret/deep-storage-config created
+```
+
+Now, let's go ahead and create a Druid database.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Druid
+metadata:
+ name: druid-cluster
+ namespace: demo
+spec:
+ version: 28.0.1
+ deepStorage:
+ type: s3
+ configSecret:
+ name: deep-storage-config
+ topology:
+ routers:
+ replicas: 1
+ deletionPolicy: Delete
+```
+
+Let's create the `Druid` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/druid/restart/yamls/druid-cluster.yaml
+druid.kubedb.com/druid-cluster created
+```
+
+## Apply Restart opsRequest
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: DruidOpsRequest
+metadata:
+ name: restart
+ namespace: demo
+spec:
+ type: Restart
+ databaseRef:
+ name: druid-cluster
+ timeout: 5m
+ apply: Always
+```
+
+- `spec.type` specifies the type of the ops request.
+- `spec.databaseRef` holds the name of the Druid CR. It should be available in the same namespace as the opsRequest.
+- The meaning of the `spec.timeout` & `spec.apply` fields can be found [here](/docs/guides/druid/concepts/druidopsrequest.md#spectimeout).
+
+Let's create the `DruidOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/druid/restart/restart.yaml
+druidopsrequest.ops.kubedb.com/restart created
+```
+
+Now the Ops-manager operator will restart the pods of the referenced Druid cluster one node type at a time: first the historicals, then the middleManagers, brokers, routers, and finally the coordinators.
+
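+If you want to watch the pods getting restarted one by one, you can keep a watch running in a separate terminal. The label selector below is an assumption; adjust it to the labels KubeDB applies to your cluster:
+
+```bash
+# watch the druid-cluster pods get evicted and recreated one after another
+$ kubectl get pods -n demo -l app.kubernetes.io/instance=druid-cluster -w
+```
+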
+```shell
+$ kubectl get drops -n demo
+NAME TYPE STATUS AGE
+restart Restart Successful 2m11s
+
+$ kubectl get drops -n demo restart -oyaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: DruidOpsRequest
+metadata:
+ annotations:
+ kubectl.kubernetes.io/last-applied-configuration: |
+ {"apiVersion":"ops.kubedb.com/v1alpha1","kind":"DruidOpsRequest","metadata":{"annotations":{},"name":"restart","namespace":"demo"},"spec":{"apply":"Always","databaseRef":{"name":"druid-cluster"},"timeout":"5m","type":"Restart"}}
+ creationTimestamp: "2024-10-21T10:30:53Z"
+ generation: 1
+ name: restart
+ namespace: demo
+ resourceVersion: "83200"
+ uid: 0fcbc7d4-593f-45f7-8631-7483805efe1e
+spec:
+ apply: Always
+ databaseRef:
+ name: druid-cluster
+ timeout: 5m
+ type: Restart
+status:
+ conditions:
+ - lastTransitionTime: "2024-10-21T10:30:53Z"
+ message: Druid ops-request has started to restart druid nodes
+ observedGeneration: 1
+ reason: Restart
+ status: "True"
+ type: Restart
+ - lastTransitionTime: "2024-10-21T10:31:51Z"
+ message: Successfully Restarted Druid nodes
+ observedGeneration: 1
+ reason: RestartNodes
+ status: "True"
+ type: RestartNodes
+ - lastTransitionTime: "2024-10-21T10:31:01Z"
+ message: get pod; ConditionStatus:True; PodName:druid-cluster-historicals-0
+ observedGeneration: 1
+ status: "True"
+ type: GetPod--druid-cluster-historicals-0
+ - lastTransitionTime: "2024-10-21T10:31:01Z"
+ message: evict pod; ConditionStatus:True; PodName:druid-cluster-historicals-0
+ observedGeneration: 1
+ status: "True"
+ type: EvictPod--druid-cluster-historicals-0
+ - lastTransitionTime: "2024-10-21T10:31:06Z"
+ message: check pod running; ConditionStatus:True; PodName:druid-cluster-historicals-0
+ observedGeneration: 1
+ status: "True"
+ type: CheckPodRunning--druid-cluster-historicals-0
+ - lastTransitionTime: "2024-10-21T10:31:11Z"
+ message: get pod; ConditionStatus:True; PodName:druid-cluster-middlemanagers-0
+ observedGeneration: 1
+ status: "True"
+ type: GetPod--druid-cluster-middlemanagers-0
+ - lastTransitionTime: "2024-10-21T10:31:11Z"
+ message: evict pod; ConditionStatus:True; PodName:druid-cluster-middlemanagers-0
+ observedGeneration: 1
+ status: "True"
+ type: EvictPod--druid-cluster-middlemanagers-0
+ - lastTransitionTime: "2024-10-21T10:31:16Z"
+ message: check pod running; ConditionStatus:True; PodName:druid-cluster-middlemanagers-0
+ observedGeneration: 1
+ status: "True"
+ type: CheckPodRunning--druid-cluster-middlemanagers-0
+ - lastTransitionTime: "2024-10-21T10:31:21Z"
+ message: get pod; ConditionStatus:True; PodName:druid-cluster-brokers-0
+ observedGeneration: 1
+ status: "True"
+ type: GetPod--druid-cluster-brokers-0
+ - lastTransitionTime: "2024-10-21T10:31:21Z"
+ message: evict pod; ConditionStatus:True; PodName:druid-cluster-brokers-0
+ observedGeneration: 1
+ status: "True"
+ type: EvictPod--druid-cluster-brokers-0
+ - lastTransitionTime: "2024-10-21T10:31:26Z"
+ message: check pod running; ConditionStatus:True; PodName:druid-cluster-brokers-0
+ observedGeneration: 1
+ status: "True"
+ type: CheckPodRunning--druid-cluster-brokers-0
+ - lastTransitionTime: "2024-10-21T10:31:31Z"
+ message: get pod; ConditionStatus:True; PodName:druid-cluster-routers-0
+ observedGeneration: 1
+ status: "True"
+ type: GetPod--druid-cluster-routers-0
+ - lastTransitionTime: "2024-10-21T10:31:31Z"
+ message: evict pod; ConditionStatus:True; PodName:druid-cluster-routers-0
+ observedGeneration: 1
+ status: "True"
+ type: EvictPod--druid-cluster-routers-0
+ - lastTransitionTime: "2024-10-21T10:31:36Z"
+ message: check pod running; ConditionStatus:True; PodName:druid-cluster-routers-0
+ observedGeneration: 1
+ status: "True"
+ type: CheckPodRunning--druid-cluster-routers-0
+ - lastTransitionTime: "2024-10-21T10:31:41Z"
+ message: get pod; ConditionStatus:True; PodName:druid-cluster-coordinators-0
+ observedGeneration: 1
+ status: "True"
+ type: GetPod--druid-cluster-coordinators-0
+ - lastTransitionTime: "2024-10-21T10:31:41Z"
+ message: evict pod; ConditionStatus:True; PodName:druid-cluster-coordinators-0
+ observedGeneration: 1
+ status: "True"
+ type: EvictPod--druid-cluster-coordinators-0
+ - lastTransitionTime: "2024-10-21T10:31:46Z"
+ message: check pod running; ConditionStatus:True; PodName:druid-cluster-coordinators-0
+ observedGeneration: 1
+ status: "True"
+ type: CheckPodRunning--druid-cluster-coordinators-0
+ - lastTransitionTime: "2024-10-21T10:31:51Z"
+ message: Controller has successfully restart the Druid replicas
+ observedGeneration: 1
+ reason: Successful
+ status: "True"
+ type: Successful
+ observedGeneration: 1
+ phase: Successful
+```
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete druidopsrequest -n demo restart
+kubectl delete druid -n demo druid-cluster
+kubectl delete ns demo
+```
+
+## Next Steps
+
+- Detail concepts of [Druid object](/docs/guides/druid/concepts/druid.md).
+- Different Druid topology clustering modes [here](/docs/guides/druid/clustering/_index.md).
+- Monitor your Druid database with KubeDB using [out-of-the-box Prometheus operator](/docs/guides/druid/monitoring/using-prometheus-operator.md).
+
+[//]: # (- Monitor your Druid database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/guides/druid/monitoring/using-builtin-prometheus.md).)
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).
diff --git a/docs/guides/druid/restart/yamls/deep-storage-config.yaml b/docs/guides/druid/restart/yamls/deep-storage-config.yaml
new file mode 100644
index 0000000000..3612595828
--- /dev/null
+++ b/docs/guides/druid/restart/yamls/deep-storage-config.yaml
@@ -0,0 +1,16 @@
+apiVersion: v1
+kind: Secret
+metadata:
+ name: deep-storage-config
+ namespace: demo
+stringData:
+ druid.storage.type: "s3"
+ druid.storage.bucket: "druid"
+ druid.storage.baseKey: "druid/segments"
+ druid.s3.accessKey: "minio"
+ druid.s3.secretKey: "minio123"
+ druid.s3.protocol: "http"
+ druid.s3.enablePathStyleAccess: "true"
+ druid.s3.endpoint.signingRegion: "us-east-1"
+ druid.s3.endpoint.url: "http://myminio-hl.demo.svc.cluster.local:9000/"
+
diff --git a/docs/guides/druid/restart/yamls/druid-cluster.yaml b/docs/guides/druid/restart/yamls/druid-cluster.yaml
new file mode 100644
index 0000000000..6351c2ddda
--- /dev/null
+++ b/docs/guides/druid/restart/yamls/druid-cluster.yaml
@@ -0,0 +1,16 @@
+apiVersion: kubedb.com/v1alpha2
+kind: Druid
+metadata:
+ name: druid-cluster
+ namespace: demo
+spec:
+ version: 28.0.1
+ deepStorage:
+ type: s3
+ configSecret:
+ name: deep-storage-config
+ topology:
+ routers:
+ replicas: 1
+ deletionPolicy: Delete
+
diff --git a/docs/guides/druid/restart/yamls/restart.yaml b/docs/guides/druid/restart/yamls/restart.yaml
new file mode 100644
index 0000000000..7130c4c865
--- /dev/null
+++ b/docs/guides/druid/restart/yamls/restart.yaml
@@ -0,0 +1,11 @@
+apiVersion: ops.kubedb.com/v1alpha1
+kind: DruidOpsRequest
+metadata:
+ name: restart
+ namespace: demo
+spec:
+ type: Restart
+ databaseRef:
+ name: druid-cluster
+ timeout: 5m
+ apply: Always
diff --git a/docs/guides/druid/scaling/_index.md b/docs/guides/druid/scaling/_index.md
new file mode 100644
index 0000000000..b5da417adc
--- /dev/null
+++ b/docs/guides/druid/scaling/_index.md
@@ -0,0 +1,10 @@
+---
+title: Scaling Druid
+menu:
+ docs_{{ .version }}:
+ identifier: guides-druid-scaling
+ name: Scaling
+ parent: guides-druid
+ weight: 70
+menu_name: docs_{{ .version }}
+---
\ No newline at end of file
diff --git a/docs/guides/druid/scaling/horizontal-scaling/_index.md b/docs/guides/druid/scaling/horizontal-scaling/_index.md
new file mode 100644
index 0000000000..73d3017f6d
--- /dev/null
+++ b/docs/guides/druid/scaling/horizontal-scaling/_index.md
@@ -0,0 +1,10 @@
+---
+title: Horizontal Scaling
+menu:
+ docs_{{ .version }}:
+ identifier: guides-druid-scaling-horizontal-scaling
+ name: Horizontal Scaling
+ parent: guides-druid-scaling
+ weight: 10
+menu_name: docs_{{ .version }}
+---
diff --git a/docs/guides/druid/scaling/horizontal-scaling/guide.md b/docs/guides/druid/scaling/horizontal-scaling/guide.md
new file mode 100644
index 0000000000..6e6648b4f8
--- /dev/null
+++ b/docs/guides/druid/scaling/horizontal-scaling/guide.md
@@ -0,0 +1,603 @@
+---
+title: Horizontal Scaling Druid Cluster
+menu:
+ docs_{{ .version }}:
+ identifier: guides-druid-scaling-horizontal-scaling-guide
+ name: Druid Horizontal Scaling
+ parent: guides-druid-scaling-horizontal-scaling
+ weight: 20
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Horizontal Scale Druid Topology Cluster
+
+This guide will show you how to use `KubeDB` Ops-manager operator to scale the Druid topology cluster.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Install `KubeDB` Provisioner and Ops-manager operator in your cluster following the steps [here](/docs/setup/README.md).
+
+- You should be familiar with the following `KubeDB` concepts:
+ - [Druid](/docs/guides/druid/concepts/druid.md)
+ - [Topology](/docs/guides/druid/clustering/overview/index.md)
+ - [DruidOpsRequest](/docs/guides/druid/concepts/druidopsrequest.md)
+ - [Horizontal Scaling Overview](/docs/guides/druid/scaling/horizontal-scaling/overview.md)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+> **Note:** YAML files used in this tutorial are stored in [docs/examples/druid](/docs/examples/druid) directory of [kubedb/docs](https://github.com/kubedb/docs) repository.
+
+## Apply Horizontal Scaling on Druid Cluster
+
+Here, we are going to deploy a `Druid` cluster using a supported version by `KubeDB` operator. Then we are going to apply horizontal scaling on it.
+
+### Prepare Druid Topology cluster
+
+Now, we are going to deploy a `Druid` topology cluster with version `28.0.1`.
+
+### Create External Dependency (Deep Storage)
+
+Before proceeding further, we need to prepare deep storage, which is one of the external dependencies of Druid and is used for storing segments. Deep storage is a storage mechanism that Apache Druid itself does not provide. **Amazon S3**, **Google Cloud Storage**, **Azure Blob Storage**, **S3-compatible storage** (like **MinIO**), or **HDFS** are generally convenient options for deep storage.
+
+In this tutorial, we will run a `minio-server` as deep storage in our local `kind` cluster using `minio-operator` and create a bucket named `druid` in it, which the deployed druid database will use.
+
+```bash
+
+$ helm repo add minio https://operator.min.io/
+$ helm repo update minio
+$ helm upgrade --install --namespace "minio-operator" --create-namespace "minio-operator" minio/operator --set operator.replicaCount=1
+
+$ helm upgrade --install --namespace "demo" --create-namespace druid-minio minio/tenant \
+--set tenant.pools[0].servers=1 \
+--set tenant.pools[0].volumesPerServer=1 \
+--set tenant.pools[0].size=1Gi \
+--set tenant.certificate.requestAutoCert=false \
+--set tenant.buckets[0].name="druid" \
+--set tenant.pools[0].name="default"
+
+```
+
+Now we need to create a `Secret` named `deep-storage-config`. It contains the connection information that the Druid database will use to connect to the deep storage.
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+ name: deep-storage-config
+ namespace: demo
+stringData:
+ druid.storage.type: "s3"
+ druid.storage.bucket: "druid"
+ druid.storage.baseKey: "druid/segments"
+ druid.s3.accessKey: "minio"
+ druid.s3.secretKey: "minio123"
+ druid.s3.protocol: "http"
+ druid.s3.enablePathStyleAccess: "true"
+ druid.s3.endpoint.signingRegion: "us-east-1"
+ druid.s3.endpoint.url: "http://myminio-hl.demo.svc.cluster.local:9000/"
+```
+
+Let’s create the `deep-storage-config` Secret shown above:
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/druid/scaling/horizontal-scaling/yamls/deep-storage-config.yaml
+secret/deep-storage-config created
+```
+
+### Deploy Druid topology cluster
+
+In this section, we are going to deploy a Druid topology cluster. Then, in the next section we will scale the cluster using `DruidOpsRequest` CRD. Below is the YAML of the `Druid` CR that we are going to create,
+
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Druid
+metadata:
+ name: druid-cluster
+ namespace: demo
+spec:
+ version: 28.0.1
+ deepStorage:
+ type: s3
+ configSecret:
+ name: deep-storage-config
+ topology:
+ routers:
+ replicas: 1
+ deletionPolicy: Delete
+```
+
+Let's create the `Druid` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/druid/scaling/horizontal-scaling/yamls/druid-cluster.yaml
+druid.kubedb.com/druid-cluster created
+```
+
+Now, wait until `druid-cluster` has status `Ready`, i.e.,
+
+```bash
+$ kubectl get dr -n demo -w
+NAME TYPE VERSION STATUS AGE
+druid-cluster kubedb.com/v1aplha2 28.0.1 Provisioning 0s
+druid-cluster kubedb.com/v1aplha2 28.0.1 Provisioning 24s
+.
+.
+druid-cluster kubedb.com/v1aplha2 28.0.1 Ready 92s
+```
+
+Let's check the number of replicas from the Druid object and the number of pods the PetSets have,
+
+**Coordinators Replicas**
+
+```bash
+$ kubectl get druid -n demo druid-cluster -o json | jq '.spec.topology.coordinators.replicas'
+1
+
+$ kubectl get petset -n demo druid-cluster-coordinators -o json | jq '.spec.replicas'
+1
+```
+
+**Historicals Replicas**
+
+```bash
+$ kubectl get druid -n demo druid-cluster -o json | jq '.spec.topology.historicals.replicas'
+1
+
+$ kubectl get petset -n demo druid-cluster-historicals -o json | jq '.spec.replicas'
+1
+```
+
+We can see from the commands above that the cluster has 1 replica each for the coordinators and the historicals.
+
+### Check Replica Count from Druid UI
+
+You can also see the replica count of each node type from the Druid UI. To do that, follow the steps below:
+
+First, port-forward port `8888` to your local machine:
+
+```bash
+$ kubectl port-forward -n demo svc/druid-cluster-routers 8888
+Forwarding from 127.0.0.1:8888 -> 8888
+Forwarding from [::1]:8888 -> 8888
+```
+
+
+Now open `http://localhost:8888` in any browser, and you will be prompted for the credentials of the Druid database. By following the steps below, you can get the credentials generated by the KubeDB operator for your Druid database.
+
+**Connection information:**
+
+- Username:
+
+ ```bash
+ $ kubectl get secret -n demo druid-cluster-admin-cred -o jsonpath='{.data.username}' | base64 -d
+ admin
+ ```
+
+- Password:
+
+ ```bash
+ $ kubectl get secret -n demo druid-cluster-admin-cred -o jsonpath='{.data.password}' | base64 -d
+ LzJtVRX5E8MorFaf
+ ```
+
+After providing the credentials correctly, you should be able to access the web console as shown below.
+
+
+
+
+
+
+Here, we can see that there is 1 replica of each node type, including `coordinators` and `historicals`.
+
+We are now ready to apply the `DruidOpsRequest` CR to scale this cluster.
+
+## Scale Up Replicas
+
+Here, we are going to scale up the replicas of the topology cluster to meet the desired number of replicas after scaling.
+
+### Create DruidOpsRequest
+
+In order to scale up the replicas of the topology cluster, we have to create a `DruidOpsRequest` CR with our desired replicas. Below is the YAML of the `DruidOpsRequest` CR that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: DruidOpsRequest
+metadata:
+ name: druid-hscale-up
+ namespace: demo
+spec:
+ type: HorizontalScaling
+ databaseRef:
+ name: druid-cluster
+ horizontalScaling:
+ topology:
+ coordinators: 2
+ historicals: 2
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing a horizontal scaling operation on the `druid-cluster` cluster.
+- `spec.type` specifies that we are performing `HorizontalScaling` on druid.
+- `spec.horizontalScaling.topology.coordinators` specifies the desired replicas after scaling for coordinators.
+- `spec.horizontalScaling.topology.historicals` specifies the desired replicas after scaling for historicals.
+
+> **Note:** Similarly you can scale other druid nodes horizontally by specifying the following fields:
+ > - For `overlords` use `spec.horizontalScaling.topology.overlords`.
+ > - For `brokers` use `spec.horizontalScaling.topology.brokers`.
+ > - For `middleManagers` use `spec.horizontalScaling.topology.middleManagers`.
+ > - For `routers` use `spec.horizontalScaling.topology.routers`.
+
+Let's create the `DruidOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/druid/scaling/horizontal-scaling/yamls/druid-hscale-up.yaml
+druidopsrequest.ops.kubedb.com/druid-hscale-up created
+```
+
+### Verify Topology cluster replicas scaled up successfully
+
+If everything goes well, `KubeDB` Ops-manager operator will update the replicas of `Druid` object and related `PetSets` and `Pods`.
+
+Let's wait for `DruidOpsRequest` to be `Successful`. Run the following command to watch `DruidOpsRequest` CR,
+
+```bash
+$ watch kubectl get druidopsrequest -n demo
+NAME TYPE STATUS AGE
+druid-hscale-up HorizontalScaling Successful 106s
+```
+
+We can see from the above output that the `DruidOpsRequest` has succeeded. If we describe the `DruidOpsRequest` we will get an overview of the steps that were followed to scale the cluster.
+
+```bash
+$ kubectl describe druidopsrequests -n demo druid-hscale-up
+Name: druid-hscale-up
+Namespace: demo
+Labels:
+Annotations:
+API Version: ops.kubedb.com/v1alpha1
+Kind: DruidOpsRequest
+Metadata:
+ Creation Timestamp: 2024-10-21T11:32:51Z
+ Generation: 1
+ Managed Fields:
+ API Version: ops.kubedb.com/v1alpha1
+ Fields Type: FieldsV1
+ fieldsV1:
+ f:metadata:
+ f:annotations:
+ .:
+ f:kubectl.kubernetes.io/last-applied-configuration:
+ f:spec:
+ .:
+ f:apply:
+ f:databaseRef:
+ f:horizontalScaling:
+ .:
+ f:topology:
+ .:
+ f:coordinators:
+ f:historicals:
+ f:type:
+ Manager: kubectl-client-side-apply
+ Operation: Update
+ Time: 2024-10-21T11:32:51Z
+ API Version: ops.kubedb.com/v1alpha1
+ Fields Type: FieldsV1
+ fieldsV1:
+ f:status:
+ .:
+ f:conditions:
+ f:observedGeneration:
+ f:phase:
+ Manager: kubedb-ops-manager
+ Operation: Update
+ Subresource: status
+ Time: 2024-10-21T11:34:02Z
+ Resource Version: 91877
+ UID: 824356ca-eafc-4266-8af1-c372b27f6ce7
+Spec:
+ Apply: IfReady
+ Database Ref:
+ Name: druid-cluster
+ Horizontal Scaling:
+ Topology:
+ Coordinators: 2
+ Historicals: 2
+ Type: HorizontalScaling
+Status:
+ Conditions:
+ Last Transition Time: 2024-10-21T11:32:51Z
+ Message: Druid ops-request has started to horizontally scaling the nodes
+ Observed Generation: 1
+ Reason: HorizontalScaling
+ Status: True
+ Type: HorizontalScaling
+ Last Transition Time: 2024-10-21T11:33:17Z
+ Message: Successfully Scaled Up Broker
+ Observed Generation: 1
+ Reason: ScaleUpCoordinators
+ Status: True
+ Type: ScaleUpCoordinators
+ Last Transition Time: 2024-10-21T11:33:02Z
+ Message: patch pet setdruid-cluster-coordinators; ConditionStatus:True
+ Observed Generation: 1
+ Status: True
+ Type: PatchPetSetdruid-cluster-coordinators
+ Last Transition Time: 2024-10-21T11:33:57Z
+ Message: node in cluster; ConditionStatus:True
+ Observed Generation: 1
+ Status: True
+ Type: NodeInCluster
+ Last Transition Time: 2024-10-21T11:34:02Z
+ Message: Successfully Scaled Up Broker
+ Observed Generation: 1
+ Reason: ScaleUpHistoricals
+ Status: True
+ Type: ScaleUpHistoricals
+ Last Transition Time: 2024-10-21T11:33:22Z
+ Message: patch pet setdruid-cluster-historicals; ConditionStatus:True
+ Observed Generation: 1
+ Status: True
+ Type: PatchPetSetdruid-cluster-historicals
+ Last Transition Time: 2024-10-21T11:34:02Z
+ Message: Successfully completed horizontally scale druid cluster
+ Observed Generation: 1
+ Reason: Successful
+ Status: True
+ Type: Successful
+ Observed Generation: 1
+ Phase: Successful
+Events:
+ Type Reason Age From Message
+ ---- ------ ---- ---- -------
+ Normal Starting 95s KubeDB Ops-manager Operator Start processing for DruidOpsRequest: demo/druid-hscale-up
+ Normal Starting 95s KubeDB Ops-manager Operator Pausing Druid databse: demo/druid-cluster
+ Normal Successful 95s KubeDB Ops-manager Operator Successfully paused Druid database: demo/druid-cluster for DruidOpsRequest: druid-hscale-up
+ Warning patch pet setdruid-cluster-coordinators; ConditionStatus:True 84s KubeDB Ops-manager Operator patch pet setdruid-cluster-coordinators; ConditionStatus:True
+ Warning node in cluster; ConditionStatus:False 76s KubeDB Ops-manager Operator node in cluster; ConditionStatus:False
+ Warning node in cluster; ConditionStatus:True 74s KubeDB Ops-manager Operator node in cluster; ConditionStatus:True
+ Normal ScaleUpCoordinators 69s KubeDB Ops-manager Operator Successfully Scaled Up Broker
+ Warning patch pet setdruid-cluster-historicals; ConditionStatus:True 64s KubeDB Ops-manager Operator patch pet setdruid-cluster-historicals; ConditionStatus:True
+ Warning node in cluster; ConditionStatus:False 56s KubeDB Ops-manager Operator node in cluster; ConditionStatus:False
+ Warning node in cluster; ConditionStatus:True 29s KubeDB Ops-manager Operator node in cluster; ConditionStatus:True
+ Normal ScaleUpHistoricals 24s KubeDB Ops-manager Operator Successfully Scaled Up Broker
+ Normal Starting 24s KubeDB Ops-manager Operator Resuming Druid database: demo/druid-cluster
+ Normal Successful 24s KubeDB Ops-manager Operator Successfully resumed Druid database: demo/druid-cluster for DruidOpsRequest: druid-hscale-up
+```
+
+
+Now, we are going to verify the number of replicas of this cluster from the Druid object and the number of pods the PetSets have,
+
+**Coordinators Replicas**
+
+```bash
+$ kubectl get druid -n demo druid-cluster -o json | jq '.spec.topology.coordinators.replicas'
+2
+
+$ kubectl get petset -n demo druid-cluster-coordinators -o json | jq '.spec.replicas'
+2
+```
+
+**Historicals Replicas**
+
+```bash
+$ kubectl get druid -n demo druid-cluster -o json | jq '.spec.topology.historicals.replicas'
+2
+
+$ kubectl get petset -n demo druid-cluster-historicals -o json | jq '.spec.replicas'
+2
+```
+
+Now, we are going to verify the number of replicas this cluster has from the Druid UI.
+
+### Verify Replica Count from Druid UI
+
+Verify the scaled replica count of the nodes from the Druid UI. To access the UI, follow the steps described in the first part of this guide: [(Check Replica Count from Druid UI)](/docs/guides/druid/scaling/horizontal-scaling/#Check-Replica-Count-from-Druid-UI)
+
+If you follow the steps properly, you should be able to see that the replica count of both `coordinators` and `historicals` has become 2. Also, since the `coordinators` are serving as the `overlords`, the `overlords` count has also become 2.
+
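+You can also cross-check the pod counts directly with `kubectl`; the pod names follow the `<druid-name>-<node-type>-<ordinal>` pattern seen earlier in this guide:
+
+```bash
+# list only the coordinators and historicals pods of druid-cluster
+$ kubectl get pods -n demo | grep -e druid-cluster-coordinators -e druid-cluster-historicals
+```
+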
+
+
+
+
+## Scale Down Replicas
+
+Here, we are going to scale down the replicas of the druid topology cluster to meet the desired number of replicas after scaling.
+
+### Create DruidOpsRequest
+
+In order to scale down the replicas of the druid topology cluster, we have to create a `DruidOpsRequest` CR with our desired replicas. Below is the YAML of the `DruidOpsRequest` CR that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: DruidOpsRequest
+metadata:
+ name: druid-hscale-down
+ namespace: demo
+spec:
+ type: HorizontalScaling
+ databaseRef:
+ name: druid-cluster
+ horizontalScaling:
+ topology:
+ coordinators: 1
+ historicals: 1
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing a horizontal scale-down operation on the `druid-cluster` cluster.
+- `spec.type` specifies that we are performing `HorizontalScaling` on druid.
+- `spec.horizontalScaling.topology.coordinators` specifies the desired replicas after scaling for the coordinators nodes.
+- `spec.horizontalScaling.topology.historicals` specifies the desired replicas after scaling for the historicals nodes.
+
+> **Note:** Similarly you can scale other druid nodes by specifying the following fields:
+> - For `overlords` use `spec.horizontalScaling.topology.overlords`.
+> - For `brokers` use `spec.horizontalScaling.topology.brokers`.
+> - For `middleManagers` use `spec.horizontalScaling.topology.middleManagers`.
+> - For `routers` use `spec.horizontalScaling.topology.routers`.
+
+Let's create the `DruidOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/druid/scaling/horizontal-scaling/yamls/druid-hscale-down.yaml
+druidopsrequest.ops.kubedb.com/druid-hscale-down created
+```
+
+### Verify Topology cluster replicas scaled down successfully
+
+If everything goes well, `KubeDB` Ops-manager operator will update the replicas of `Druid` object and related `PetSets` and `Pods`.
+
+Let's wait for `DruidOpsRequest` to be `Successful`. Run the following command to watch `DruidOpsRequest` CR,
+
+```bash
+$ watch kubectl get druidopsrequest -n demo
+NAME TYPE STATUS AGE
+druid-hscale-down HorizontalScaling Successful 2m32s
+```
+
+We can see from the above output that the `DruidOpsRequest` has succeeded. If we describe the `DruidOpsRequest` we will get an overview of the steps that were followed to scale the cluster.
+
+```bash
+$ kubectl get druidopsrequest -n demo druid-hscale-down -oyaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: DruidOpsRequest
+metadata:
+ annotations:
+ kubectl.kubernetes.io/last-applied-configuration: |
+ {"apiVersion":"ops.kubedb.com/v1alpha1","kind":"DruidOpsRequest","metadata":{"annotations":{},"name":"druid-hscale-down","namespace":"demo"},"spec":{"databaseRef":{"name":"druid-cluster"},"horizontalScaling":{"topology":{"coordinators":1,"historicals":1}},"type":"HorizontalScaling"}}
+ creationTimestamp: "2024-10-21T12:42:09Z"
+ generation: 1
+ name: druid-hscale-down
+ namespace: demo
+ resourceVersion: "99500"
+ uid: b3a81d07-be44-4adf-a8a7-36bb825f26a8
+spec:
+ apply: IfReady
+ databaseRef:
+ name: druid-cluster
+ horizontalScaling:
+ topology:
+ coordinators: 1
+ historicals: 1
+ type: HorizontalScaling
+status:
+ conditions:
+ - lastTransitionTime: "2024-10-21T12:42:09Z"
+ message: Druid ops-request has started to horizontally scaling the nodes
+ observedGeneration: 1
+ reason: HorizontalScaling
+ status: "True"
+ type: HorizontalScaling
+ - lastTransitionTime: "2024-10-21T12:42:33Z"
+ message: Successfully Scaled Down Broker
+ observedGeneration: 1
+ reason: ScaleDownCoordinators
+ status: "True"
+ type: ScaleDownCoordinators
+ - lastTransitionTime: "2024-10-21T12:42:23Z"
+ message: reassign partitions; ConditionStatus:True
+ observedGeneration: 1
+ status: "True"
+ type: ReassignPartitions
+ - lastTransitionTime: "2024-10-21T12:42:23Z"
+ message: is pet set patched; ConditionStatus:True
+ observedGeneration: 1
+ status: "True"
+ type: IsPetSetPatched
+ - lastTransitionTime: "2024-10-21T12:42:28Z"
+ message: get pod; ConditionStatus:True
+ observedGeneration: 1
+ status: "True"
+ type: GetPod
+ - lastTransitionTime: "2024-10-21T12:42:53Z"
+ message: Successfully Scaled Down Broker
+ observedGeneration: 1
+ reason: ScaleDownHistoricals
+ status: "True"
+ type: ScaleDownHistoricals
+ - lastTransitionTime: "2024-10-21T12:42:43Z"
+ message: delete pvc; ConditionStatus:True
+ observedGeneration: 1
+ status: "True"
+ type: DeletePvc
+ - lastTransitionTime: "2024-10-21T12:42:43Z"
+ message: get pvc; ConditionStatus:False
+ observedGeneration: 1
+ status: "False"
+ type: GetPvc
+ - lastTransitionTime: "2024-10-21T12:42:53Z"
+ message: Successfully completed horizontally scale druid cluster
+ observedGeneration: 1
+ reason: Successful
+ status: "True"
+ type: Successful
+ observedGeneration: 1
+ phase: Successful
+```
+
+Now, we are going to verify the number of replicas of this cluster from the Druid object and the number of pods the PetSets have,
+
+**Coordinators Replicas**
+
+```bash
+$ kubectl get druid -n demo druid-cluster -o json | jq '.spec.topology.coordinators.replicas'
+1
+
+$ kubectl get petset -n demo druid-cluster-coordinators -o json | jq '.spec.replicas'
+1
+```
+
+**Historicals Replicas**
+
+```bash
+$ kubectl get druid -n demo druid-cluster -o json | jq '.spec.topology.historicals.replicas'
+1
+
+$ kubectl get petset -n demo druid-cluster-historicals -o json | jq '.spec.replicas'
+1
+```
+
+Now, we are going to verify the number of replicas this cluster has from the Druid UI.
+
+### Verify Replica Count from Druid UI
+
+Verify the scaled replica count of the nodes from the Druid UI. To access the UI, follow the steps described in the first part of this guide: [(Check Replica Count from Druid UI)](/docs/guides/druid/scaling/horizontal-scaling/#Check-Replica-Count-from-Druid-UI)
+
+If you follow the steps properly, you should be able to see that the replica count of both `coordinators` and `historicals` has become 1. Also, since the `coordinators` are serving as the `overlords`, the `overlords` count has also become 1.
+
+
+
+
+
+
+## Cleaning Up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete dr -n demo druid-cluster
+kubectl delete druidopsrequest -n demo druid-hscale-up druid-hscale-down
+kubectl delete ns demo
+```
+
+## Next Steps
+
+- Detail concepts of [Druid object](/docs/guides/druid/concepts/druid.md).
+- Different Druid topology clustering modes [here](/docs/guides/druid/clustering/_index.md).
+- Monitor your Druid with KubeDB using [out-of-the-box Prometheus operator](/docs/guides/druid/monitoring/using-prometheus-operator.md).
+
+[//]: # (- Monitor your Druid with KubeDB using [out-of-the-box builtin-Prometheus](/docs/guides/druid/monitoring/using-builtin-prometheus.md).)
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).
diff --git a/docs/guides/druid/scaling/horizontal-scaling/images/dr-horizontal-scaling.png b/docs/guides/druid/scaling/horizontal-scaling/images/dr-horizontal-scaling.png
new file mode 100644
index 0000000000..83615ee58f
Binary files /dev/null and b/docs/guides/druid/scaling/horizontal-scaling/images/dr-horizontal-scaling.png differ
diff --git a/docs/guides/druid/scaling/horizontal-scaling/images/druid-ui-scaled-up.png b/docs/guides/druid/scaling/horizontal-scaling/images/druid-ui-scaled-up.png
new file mode 100644
index 0000000000..f9369cdc0d
Binary files /dev/null and b/docs/guides/druid/scaling/horizontal-scaling/images/druid-ui-scaled-up.png differ
diff --git a/docs/guides/druid/scaling/horizontal-scaling/images/druid-ui.png b/docs/guides/druid/scaling/horizontal-scaling/images/druid-ui.png
new file mode 100644
index 0000000000..f81925c59c
Binary files /dev/null and b/docs/guides/druid/scaling/horizontal-scaling/images/druid-ui.png differ
diff --git a/docs/guides/druid/scaling/horizontal-scaling/overview.md b/docs/guides/druid/scaling/horizontal-scaling/overview.md
new file mode 100644
index 0000000000..7158e2432a
--- /dev/null
+++ b/docs/guides/druid/scaling/horizontal-scaling/overview.md
@@ -0,0 +1,54 @@
+---
+title: Druid Horizontal Scaling Overview
+menu:
+ docs_{{ .version }}:
+ identifier: guides-druid-scaling-horizontal-scaling-overview
+ name: Overview
+ parent: guides-druid-scaling-horizontal-scaling
+ weight: 10
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Druid Horizontal Scaling
+
+This guide will give an overview of how the KubeDB Ops-manager operator scales up or down the replicas of various `Druid` cluster components such as Coordinators, Overlords, Historicals, MiddleManagers, Brokers, and Routers.
+
+## Before You Begin
+
+- You should be familiar with the following `KubeDB` concepts:
+ - [Druid](/docs/guides/druid/concepts/druid.md)
+ - [DruidOpsRequest](/docs/guides/druid/concepts/druidopsrequest.md)
+
+## How Horizontal Scaling Process Works
+
+The following diagram shows how KubeDB Ops-manager operator scales up or down `Druid` database components. Open the image in a new tab to see the enlarged version.
+
+
+
+The Horizontal scaling process consists of the following steps:
+
+1. At first, a user creates a `Druid` Custom Resource (CR).
+
+2. `KubeDB` Provisioner operator watches the `Druid` CR.
+
+3. When the operator finds a `Druid` CR, it creates the required number of `PetSets` and other necessary resources like Secrets, Services, etc.
+
+4. Then, in order to scale the various components (i.e. Coordinators, Overlords, Historicals, MiddleManagers, Brokers, Routers) of the `Druid` cluster, the user creates a `DruidOpsRequest` CR with the desired information (a minimal example is shown after this list).
+
+5. `KubeDB` Ops-manager operator watches the `DruidOpsRequest` CR.
+
+6. When it finds a `DruidOpsRequest` CR, it halts the `Druid` object which is referred from the `DruidOpsRequest`. So, the `KubeDB` Provisioner operator doesn't perform any operations on the `Druid` object during the horizontal scaling process.
+
+7. Then the `KubeDB` Ops-manager operator will scale the related PetSet Pods to reach the expected number of replicas defined in the `DruidOpsRequest` CR.
+
+8. After successfully scaling the replicas of the related PetSet Pods, the `KubeDB` Ops-manager operator updates the number of replicas in the `Druid` object to reflect the updated state.
+
+9. After the successful scaling of the `Druid` replicas, the `KubeDB` Ops-manager operator resumes the `Druid` object so that the `KubeDB` Provisioner operator resumes its usual operations.
+
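+As a quick reference, a minimal `HorizontalScaling` type `DruidOpsRequest` looks like the following sketch (this mirrors the example used in the horizontal scaling guide of this section):
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: DruidOpsRequest
+metadata:
+  name: druid-hscale-up
+  namespace: demo
+spec:
+  type: HorizontalScaling
+  databaseRef:
+    name: druid-cluster
+  horizontalScaling:
+    topology:
+      coordinators: 2
+      historicals: 2
+```
+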
+In the next docs, we are going to show a step-by-step guide on horizontal scaling of Druid cluster using `DruidOpsRequest` CRD.
\ No newline at end of file
diff --git a/docs/guides/druid/scaling/horizontal-scaling/yamls/deep-storage-config.yaml b/docs/guides/druid/scaling/horizontal-scaling/yamls/deep-storage-config.yaml
new file mode 100644
index 0000000000..3612595828
--- /dev/null
+++ b/docs/guides/druid/scaling/horizontal-scaling/yamls/deep-storage-config.yaml
@@ -0,0 +1,16 @@
+apiVersion: v1
+kind: Secret
+metadata:
+ name: deep-storage-config
+ namespace: demo
+stringData:
+ druid.storage.type: "s3"
+ druid.storage.bucket: "druid"
+ druid.storage.baseKey: "druid/segments"
+ druid.s3.accessKey: "minio"
+ druid.s3.secretKey: "minio123"
+ druid.s3.protocol: "http"
+ druid.s3.enablePathStyleAccess: "true"
+ druid.s3.endpoint.signingRegion: "us-east-1"
+ druid.s3.endpoint.url: "http://myminio-hl.demo.svc.cluster.local:9000/"
+
diff --git a/docs/guides/druid/scaling/horizontal-scaling/yamls/druid-cluster.yaml b/docs/guides/druid/scaling/horizontal-scaling/yamls/druid-cluster.yaml
new file mode 100644
index 0000000000..6351c2ddda
--- /dev/null
+++ b/docs/guides/druid/scaling/horizontal-scaling/yamls/druid-cluster.yaml
@@ -0,0 +1,16 @@
+apiVersion: kubedb.com/v1alpha2
+kind: Druid
+metadata:
+ name: druid-cluster
+ namespace: demo
+spec:
+ version: 28.0.1
+ deepStorage:
+ type: s3
+ configSecret:
+ name: deep-storage-config
+ topology:
+ routers:
+ replicas: 1
+ deletionPolicy: Delete
+
diff --git a/docs/guides/druid/scaling/horizontal-scaling/yamls/druid-hscale-down.yaml b/docs/guides/druid/scaling/horizontal-scaling/yamls/druid-hscale-down.yaml
new file mode 100644
index 0000000000..4cfa3f715d
--- /dev/null
+++ b/docs/guides/druid/scaling/horizontal-scaling/yamls/druid-hscale-down.yaml
@@ -0,0 +1,13 @@
+apiVersion: ops.kubedb.com/v1alpha1
+kind: DruidOpsRequest
+metadata:
+ name: druid-hscale-down
+ namespace: demo
+spec:
+ type: HorizontalScaling
+ databaseRef:
+ name: druid-cluster
+ horizontalScaling:
+ topology:
+ coordinators: 1
+ historicals: 1
\ No newline at end of file
diff --git a/docs/guides/druid/scaling/horizontal-scaling/yamls/druid-hscale-up.yaml b/docs/guides/druid/scaling/horizontal-scaling/yamls/druid-hscale-up.yaml
new file mode 100644
index 0000000000..8063b37fb1
--- /dev/null
+++ b/docs/guides/druid/scaling/horizontal-scaling/yamls/druid-hscale-up.yaml
@@ -0,0 +1,13 @@
+apiVersion: ops.kubedb.com/v1alpha1
+kind: DruidOpsRequest
+metadata:
+ name: druid-hscale-up
+ namespace: demo
+spec:
+ type: HorizontalScaling
+ databaseRef:
+ name: druid-cluster
+ horizontalScaling:
+ topology:
+ coordinators: 2
+ historicals: 2
\ No newline at end of file
diff --git a/docs/guides/druid/scaling/vertical-scaling/_index.md b/docs/guides/druid/scaling/vertical-scaling/_index.md
new file mode 100644
index 0000000000..8a9a5727c0
--- /dev/null
+++ b/docs/guides/druid/scaling/vertical-scaling/_index.md
@@ -0,0 +1,10 @@
+---
+title: Vertical Scaling
+menu:
+ docs_{{ .version }}:
+ identifier: guides-druid-scaling-vertical-scaling
+ name: Vertical Scaling
+ parent: guides-druid-scaling
+ weight: 20
+menu_name: docs_{{ .version }}
+---
diff --git a/docs/guides/druid/scaling/vertical-scaling/guide.md b/docs/guides/druid/scaling/vertical-scaling/guide.md
new file mode 100644
index 0000000000..cedf35b076
--- /dev/null
+++ b/docs/guides/druid/scaling/vertical-scaling/guide.md
@@ -0,0 +1,454 @@
+---
+title: Vertical Scaling Druid Cluster
+menu:
+ docs_{{ .version }}:
+ identifier: guides-druid-scaling-vertical-scaling-guide
+ name: Druid Vertical Scaling
+ parent: guides-druid-scaling-vertical-scaling
+ weight: 30
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Vertical Scale Druid Topology Cluster
+
+This guide will show you how to use `KubeDB` Ops-manager operator to update the resources of a Druid topology cluster.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Install `KubeDB` Provisioner and Ops-manager operator in your cluster following the steps [here](/docs/setup/README.md).
+
+- You should be familiar with the following `KubeDB` concepts:
+ - [Druid](/docs/guides/druid/concepts/druid.md)
+ - [Topology](/docs/guides/druid/clustering/overview/index.md)
+ - [DruidOpsRequest](/docs/guides/druid/concepts/druidopsrequest.md)
+ - [Vertical Scaling Overview](/docs/guides/druid/scaling/vertical-scaling/overview.md)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+> **Note:** YAML files used in this tutorial are stored in [docs/examples/druid](/docs/examples/druid) directory of [kubedb/docs](https://github.com/kubedb/docs) repository.
+
+## Apply Vertical Scaling on Topology Cluster
+
+Here, we are going to deploy a `Druid` topology cluster using a supported version by `KubeDB` operator. Then we are going to apply vertical scaling on it.
+
+### Prepare Druid Topology Cluster
+
+Now, we are going to deploy a `Druid` topology cluster with version `28.0.1`.
+
+### Create External Dependency (Deep Storage)
+
+Before proceeding further, we need to prepare deep storage, which is one of the external dependencies of Druid and is used for storing segments. Deep storage is a storage mechanism that Apache Druid itself does not provide. **Amazon S3**, **Google Cloud Storage**, **Azure Blob Storage**, **S3-compatible storage** (like **MinIO**), or **HDFS** are generally convenient options for deep storage.
+
+In this tutorial, we will run a `minio-server` as deep storage in our local `kind` cluster using `minio-operator` and create a bucket named `druid` in it, which the deployed druid database will use.
+
+```bash
+
+$ helm repo add minio https://operator.min.io/
+$ helm repo update minio
+$ helm upgrade --install --namespace "minio-operator" --create-namespace "minio-operator" minio/operator --set operator.replicaCount=1
+
+$ helm upgrade --install --namespace "demo" --create-namespace druid-minio minio/tenant \
+--set tenant.pools[0].servers=1 \
+--set tenant.pools[0].volumesPerServer=1 \
+--set tenant.pools[0].size=1Gi \
+--set tenant.certificate.requestAutoCert=false \
+--set tenant.buckets[0].name="druid" \
+--set tenant.pools[0].name="default"
+
+```
+
+Now we need to create a `Secret` named `deep-storage-config`. It contains the connection information that the Druid database will use to connect to the deep storage.
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+ name: deep-storage-config
+ namespace: demo
+stringData:
+ druid.storage.type: "s3"
+ druid.storage.bucket: "druid"
+ druid.storage.baseKey: "druid/segments"
+ druid.s3.accessKey: "minio"
+ druid.s3.secretKey: "minio123"
+ druid.s3.protocol: "http"
+ druid.s3.enablePathStyleAccess: "true"
+ druid.s3.endpoint.signingRegion: "us-east-1"
+ druid.s3.endpoint.url: "http://myminio-hl.demo.svc.cluster.local:9000/"
+```
+
+Let’s create the `deep-storage-config` Secret shown above:
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/druid/scaling/vertical-scaling/yamls/deep-storage-config.yaml
+secret/deep-storage-config created
+```
+
+### Deploy Druid Cluster
+
+In this section, we are going to deploy a Druid topology cluster. Then, in the next section we will update the resources of the database using `DruidOpsRequest` CRD. Below is the YAML of the `Druid` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Druid
+metadata:
+ name: druid-cluster
+ namespace: demo
+spec:
+ version: 28.0.1
+ deepStorage:
+ type: s3
+ configSecret:
+ name: deep-storage-config
+ topology:
+ routers:
+ replicas: 1
+ deletionPolicy: Delete
+```
+
+Let's create the `Druid` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/druid/scaling/vertical-scaling/yamls/druid-cluster.yaml
+druid.kubedb.com/druid-cluster created
+```
+
+Now, wait until `druid-cluster` has status `Ready`, i.e.,
+
+```bash
+$ kubectl get dr -n demo -w
+NAME TYPE VERSION STATUS AGE
+druid-cluster   kubedb.com/v1alpha2   28.0.1    Provisioning   0s
+druid-cluster   kubedb.com/v1alpha2   28.0.1    Provisioning   24s
+.
+.
+druid-cluster   kubedb.com/v1alpha2   28.0.1    Ready          92s
+```
+
+Let's check the container resources of both the `coordinators` and `historicals` Pods of the Druid topology cluster. Run the following commands to get the resources of the `coordinators` and `historicals` containers:
+
+```bash
+$ kubectl get pod -n demo druid-cluster-coordinators-0 -o json | jq '.spec.containers[].resources'
+{
+ "limits": {
+ "memory": "1Gi"
+ },
+ "requests": {
+ "cpu": "500m",
+ "memory": "1Gi"
+ }
+}
+```
+
+```bash
+$ kubectl get pod -n demo druid-cluster-historicals-0 -o json | jq '.spec.containers[].resources'
+{
+ "limits": {
+ "memory": "1Gi"
+ },
+ "requests": {
+ "cpu": "500m",
+ "memory": "1Gi"
+ }
+}
+```
+These are the default resources of the Druid topology cluster set by the `KubeDB` operator.
+
+We are now ready to apply the `DruidOpsRequest` CR to update the resources of this database.
+
+### Vertical Scaling
+
+Here, we are going to update the resources of the topology cluster to meet the desired resources after scaling.
+
+#### Create DruidOpsRequest
+
+In order to update the resources of the database, we have to create a `DruidOpsRequest` CR with our desired resources. Below is the YAML of the `DruidOpsRequest` CR that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: DruidOpsRequest
+metadata:
+ name: druid-vscale
+ namespace: demo
+spec:
+ type: VerticalScaling
+ databaseRef:
+ name: druid-cluster
+ verticalScaling:
+ coordinators:
+ resources:
+ requests:
+ memory: "1.2Gi"
+ cpu: "0.6"
+ limits:
+ memory: "1.2Gi"
+ cpu: "0.6"
+ historicals:
+ resources:
+ requests:
+ memory: "1.1Gi"
+ cpu: "0.6"
+ limits:
+ memory: "1.1Gi"
+ cpu: "0.6"
+ timeout: 5m
+ apply: IfReady
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing the vertical scaling operation on the `druid-cluster` cluster.
+- `spec.type` specifies that we are performing `VerticalScaling` on Druid.
+- `spec.verticalScaling.coordinators` specifies the desired resources of the `coordinators` nodes after scaling.
+- `spec.verticalScaling.historicals` specifies the desired resources of the `historicals` nodes after scaling.
+
+> **Note:** Similarly, you can scale other Druid nodes vertically by specifying the following fields (a sketch is given after this note):
+> - For `overlords` use `spec.verticalScaling.overlords`.
+> - For `brokers` use `spec.verticalScaling.brokers`.
+> - For `middleManagers` use `spec.verticalScaling.middleManagers`.
+> - For `routers` use `spec.verticalScaling.routers`.
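+
+For example, a minimal sketch of a `DruidOpsRequest` that vertically scales the `brokers` nodes could look like the following. This is an illustrative assumption based on the fields listed above, not a YAML file shipped with this tutorial:
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: DruidOpsRequest
+metadata:
+  name: druid-vscale-brokers   # hypothetical name, for illustration only
+  namespace: demo
+spec:
+  type: VerticalScaling
+  databaseRef:
+    name: druid-cluster
+  verticalScaling:
+    brokers:
+      resources:
+        requests:
+          memory: "1.2Gi"
+          cpu: "0.6"
+        limits:
+          memory: "1.2Gi"
+          cpu: "0.6"
+```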
+
+Let's create the `druid-vscale` `DruidOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/druid/scaling/vertical-scaling/yamls/druid-vscale.yaml
+druidopsrequest.ops.kubedb.com/druid-vscale created
+```
+
+#### Verify Druid cluster resources have been updated successfully
+
+If everything goes well, `KubeDB` Ops-manager operator will update the resources of `Druid` object and related `PetSets` and `Pods`.
+
+Let's wait for `DruidOpsRequest` to be `Successful`. Run the following command to watch `DruidOpsRequest` CR,
+
+```bash
+$ kubectl get druidopsrequest -n demo
+NAME TYPE STATUS AGE
+druid-vscale VerticalScaling Successful 3m56s
+```
+
+We can see from the above output that the `DruidOpsRequest` has succeeded. If we describe the `DruidOpsRequest` we will get an overview of the steps that were followed to scale the cluster.
+
+```bash
+$ kubectl describe druidopsrequest -n demo druid-vscale
+Name: druid-vscale
+Namespace: demo
+Labels:
+Annotations:
+API Version: ops.kubedb.com/v1alpha1
+Kind: DruidOpsRequest
+Metadata:
+ Creation Timestamp: 2024-10-21T12:53:55Z
+ Generation: 1
+ Managed Fields:
+ API Version: ops.kubedb.com/v1alpha1
+ Fields Type: FieldsV1
+ fieldsV1:
+ f:metadata:
+ f:annotations:
+ .:
+ f:kubectl.kubernetes.io/last-applied-configuration:
+ f:spec:
+ .:
+ f:apply:
+ f:databaseRef:
+ f:timeout:
+ f:type:
+ f:verticalScaling:
+ .:
+ f:coordinators:
+ .:
+ f:resources:
+ .:
+ f:limits:
+ .:
+ f:cpu:
+ f:memory:
+ f:requests:
+ .:
+ f:cpu:
+ f:memory:
+ f:historicals:
+ .:
+ f:resources:
+ .:
+ f:limits:
+ .:
+ f:cpu:
+ f:memory:
+ f:requests:
+ .:
+ f:cpu:
+ f:memory:
+ Manager: kubectl-client-side-apply
+ Operation: Update
+ Time: 2024-10-21T12:53:55Z
+ API Version: ops.kubedb.com/v1alpha1
+ Fields Type: FieldsV1
+ fieldsV1:
+ f:status:
+ .:
+ f:conditions:
+ f:observedGeneration:
+ f:phase:
+ Manager: kubedb-ops-manager
+ Operation: Update
+ Subresource: status
+ Time: 2024-10-21T12:54:23Z
+ Resource Version: 102002
+ UID: fe8bb22f-02e8-4a10-9a78-fc211371d581
+Spec:
+ Apply: IfReady
+ Database Ref:
+ Name: druid-cluster
+ Timeout: 5m
+ Type: VerticalScaling
+ Vertical Scaling:
+ Coordinators:
+ Resources:
+ Limits:
+ Cpu: 0.6
+ Memory: 1.2Gi
+ Requests:
+ Cpu: 0.6
+ Memory: 1.2Gi
+ Historicals:
+ Resources:
+ Limits:
+ Cpu: 0.6
+ Memory: 1.1Gi
+ Requests:
+ Cpu: 0.6
+ Memory: 1.1Gi
+Status:
+ Conditions:
+ Last Transition Time: 2024-10-21T12:53:55Z
+ Message: Druid ops-request has started to vertically scale the Druid nodes
+ Observed Generation: 1
+ Reason: VerticalScaling
+ Status: True
+ Type: VerticalScaling
+ Last Transition Time: 2024-10-21T12:53:58Z
+ Message: Successfully updated PetSets Resources
+ Observed Generation: 1
+ Reason: UpdatePetSets
+ Status: True
+ Type: UpdatePetSets
+ Last Transition Time: 2024-10-21T12:54:23Z
+ Message: Successfully Restarted Pods With Resources
+ Observed Generation: 1
+ Reason: RestartPods
+ Status: True
+ Type: RestartPods
+ Last Transition Time: 2024-10-21T12:54:03Z
+ Message: get pod; ConditionStatus:True; PodName:druid-cluster-coordinators-0
+ Observed Generation: 1
+ Status: True
+ Type: GetPod--druid-cluster-coordinators-0
+ Last Transition Time: 2024-10-21T12:54:03Z
+ Message: evict pod; ConditionStatus:True; PodName:druid-cluster-coordinators-0
+ Observed Generation: 1
+ Status: True
+ Type: EvictPod--druid-cluster-coordinators-0
+ Last Transition Time: 2024-10-21T12:54:08Z
+ Message: check pod running; ConditionStatus:True; PodName:druid-cluster-coordinators-0
+ Observed Generation: 1
+ Status: True
+ Type: CheckPodRunning--druid-cluster-coordinators-0
+ Last Transition Time: 2024-10-21T12:54:13Z
+ Message: get pod; ConditionStatus:True; PodName:druid-cluster-historicals-0
+ Observed Generation: 1
+ Status: True
+ Type: GetPod--druid-cluster-historicals-0
+ Last Transition Time: 2024-10-21T12:54:13Z
+ Message: evict pod; ConditionStatus:True; PodName:druid-cluster-historicals-0
+ Observed Generation: 1
+ Status: True
+ Type: EvictPod--druid-cluster-historicals-0
+ Last Transition Time: 2024-10-21T12:54:18Z
+ Message: check pod running; ConditionStatus:True; PodName:druid-cluster-historicals-0
+ Observed Generation: 1
+ Status: True
+ Type: CheckPodRunning--druid-cluster-historicals-0
+ Last Transition Time: 2024-10-21T12:54:23Z
+    Message:               Successfully completed the vertical scaling for Druid
+ Observed Generation: 1
+ Reason: Successful
+ Status: True
+ Type: Successful
+ Observed Generation: 1
+ Phase: Successful
+Events:
+ Type Reason Age From Message
+ ---- ------ ---- ---- -------
+ Normal Starting 67s KubeDB Ops-manager Operator Start processing for DruidOpsRequest: demo/druid-vscale
+ Normal Starting 67s KubeDB Ops-manager Operator Pausing Druid databse: demo/druid-cluster
+ Normal Successful 67s KubeDB Ops-manager Operator Successfully paused Druid database: demo/druid-cluster for DruidOpsRequest: druid-vscale
+ Normal UpdatePetSets 64s KubeDB Ops-manager Operator Successfully updated PetSets Resources
+ Warning get pod; ConditionStatus:True; PodName:druid-cluster-coordinators-0 59s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:druid-cluster-coordinators-0
+ Warning evict pod; ConditionStatus:True; PodName:druid-cluster-coordinators-0 59s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:druid-cluster-coordinators-0
+ Warning check pod running; ConditionStatus:True; PodName:druid-cluster-coordinators-0 54s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:druid-cluster-coordinators-0
+ Warning get pod; ConditionStatus:True; PodName:druid-cluster-historicals-0 49s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:druid-cluster-historicals-0
+ Warning evict pod; ConditionStatus:True; PodName:druid-cluster-historicals-0 49s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:druid-cluster-historicals-0
+ Warning check pod running; ConditionStatus:True; PodName:druid-cluster-historicals-0 44s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:druid-cluster-historicals-0
+ Normal RestartPods 39s KubeDB Ops-manager Operator Successfully Restarted Pods With Resources
+ Normal Starting 39s KubeDB Ops-manager Operator Resuming Druid database: demo/druid-cluster
+ Normal Successful 39s KubeDB Ops-manager Operator Successfully resumed Druid database: demo/druid-cluster for DruidOpsRequest: druid-vscale
+```
+Now, we are going to verify from one of the Pod YAMLs whether the resources of the topology cluster have been updated to meet the desired state. Let's check:
+
+```bash
+$ kubectl get pod -n demo druid-cluster-coordinators-0 -o json | jq '.spec.containers[].resources'
+{
+ "limits": {
+ "cpu": "600m",
+ "memory": "1288490188800m"
+ },
+ "requests": {
+ "cpu": "600m",
+ "memory": "1288490188800m"
+ }
+}
+$ kubectl get pod -n demo druid-cluster-historicals-0 -o json | jq '.spec.containers[].resources'
+{
+ "limits": {
+ "cpu": "600m",
+ "memory": "1181116006400m"
+ },
+ "requests": {
+ "cpu": "600m",
+ "memory": "1181116006400m"
+ }
+}
+```
+
+The above output verifies that we have successfully scaled up the resources of the Druid topology cluster. Note that Kubernetes prints fractional memory quantities in their canonical milli-byte form, so `1288490188800m` is simply `1.2Gi` and `1181116006400m` is `1.1Gi`.
+
+## Cleaning Up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete dr -n demo druid-cluster
+kubectl delete druidopsrequest -n demo druid-vscale
+kubectl delete ns demo
+```
+
+## Next Steps
+
+- Detail concepts of [Druid object](/docs/guides/druid/concepts/druid.md).
+- Different Druid topology clustering modes [here](/docs/guides/druid/clustering/_index.md).
+- Monitor your Druid database with KubeDB using [out-of-the-box Prometheus operator](/docs/guides/druid/monitoring/using-prometheus-operator.md).
+
+[//]: # (- Monitor your Druid database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/guides/druid/monitoring/using-builtin-prometheus.md).)
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).
diff --git a/docs/guides/druid/scaling/vertical-scaling/images/dr-vertical-scaling.png b/docs/guides/druid/scaling/vertical-scaling/images/dr-vertical-scaling.png
new file mode 100644
index 0000000000..552bb0fb30
Binary files /dev/null and b/docs/guides/druid/scaling/vertical-scaling/images/dr-vertical-scaling.png differ
diff --git a/docs/guides/druid/scaling/vertical-scaling/overview.md b/docs/guides/druid/scaling/vertical-scaling/overview.md
new file mode 100644
index 0000000000..2ddd690601
--- /dev/null
+++ b/docs/guides/druid/scaling/vertical-scaling/overview.md
@@ -0,0 +1,54 @@
+---
+title: Druid Vertical Scaling Overview
+menu:
+ docs_{{ .version }}:
+ identifier: guides-druid-scaling-vertical-scaling-overview
+ name: Overview
+ parent: guides-druid-scaling-vertical-scaling
+ weight: 10
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Druid Vertical Scaling
+
+This guide will give an overview of how the KubeDB Ops-manager operator updates the resources (for example, CPU and memory) of a `Druid` cluster.
+
+## Before You Begin
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [Druid](/docs/guides/druid/concepts/druid.md)
+  - [DruidOpsRequest](/docs/guides/druid/concepts/druidopsrequest.md)
+
+## How Vertical Scaling Process Works
+
+The following diagram shows how KubeDB Ops-manager operator updates the resources of the `Druid`. Open the image in a new tab to see the enlarged version.
+
+
+
+The vertical scaling process consists of the following steps:
+
+1. At first, a user creates a `Druid` Custom Resource (CR).
+
+2. `KubeDB` Provisioner operator watches the `Druid` CR.
+
+3. When the operator finds a `Druid` CR, it creates the required number of `PetSets` and other necessary resources like secrets, services, etc.
+
+4. Then, in order to update the resources (for example `CPU`, `Memory`, etc.) of the `Druid` cluster, the user creates a `DruidOpsRequest` CR with the desired information.
+
+5. `KubeDB` Ops-manager operator watches the `DruidOpsRequest` CR.
+
+6. When it finds a `DruidOpsRequest` CR, it halts the `Druid` object which is referred from the `DruidOpsRequest`. So, the `KubeDB` Provisioner operator doesn't perform any operations on the `Druid` object during the vertical scaling process.
+
+7. Then the `KubeDB` Ops-manager operator will update the resources of the PetSet Pods to reach the desired state.
+
+8. After the successful update of the resources of the PetSet's replica, the `KubeDB` Ops-manager operator updates the `Druid` object to reflect the updated state.
+
+9. After the successful update of the `Druid` resources, the `KubeDB` Ops-manager operator resumes the `Druid` object so that the `KubeDB` Provisioner operator resumes its usual operations.
+
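+For reference, a minimal `DruidOpsRequest` of type `VerticalScaling` looks like the sketch below. The resource values here are illustrative assumptions; the full, working example is covered in the next doc:
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: DruidOpsRequest
+metadata:
+  name: druid-vscale
+  namespace: demo
+spec:
+  type: VerticalScaling
+  databaseRef:
+    name: druid-cluster
+  verticalScaling:
+    coordinators:
+      resources:
+        requests:
+          cpu: "0.6"
+          memory: "1.2Gi"
+        limits:
+          cpu: "0.6"
+          memory: "1.2Gi"
+```
+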
+In the next docs, we are going to show a step-by-step guide on updating the resources of a Druid database using the `DruidOpsRequest` CRD.
\ No newline at end of file
diff --git a/docs/guides/druid/scaling/vertical-scaling/yamls/deep-storage-config.yaml b/docs/guides/druid/scaling/vertical-scaling/yamls/deep-storage-config.yaml
new file mode 100644
index 0000000000..3612595828
--- /dev/null
+++ b/docs/guides/druid/scaling/vertical-scaling/yamls/deep-storage-config.yaml
@@ -0,0 +1,16 @@
+apiVersion: v1
+kind: Secret
+metadata:
+ name: deep-storage-config
+ namespace: demo
+stringData:
+ druid.storage.type: "s3"
+ druid.storage.bucket: "druid"
+ druid.storage.baseKey: "druid/segments"
+ druid.s3.accessKey: "minio"
+ druid.s3.secretKey: "minio123"
+ druid.s3.protocol: "http"
+ druid.s3.enablePathStyleAccess: "true"
+ druid.s3.endpoint.signingRegion: "us-east-1"
+ druid.s3.endpoint.url: "http://myminio-hl.demo.svc.cluster.local:9000/"
+
diff --git a/docs/guides/druid/scaling/vertical-scaling/yamls/druid-cluster.yaml b/docs/guides/druid/scaling/vertical-scaling/yamls/druid-cluster.yaml
new file mode 100644
index 0000000000..7a89d0dc91
--- /dev/null
+++ b/docs/guides/druid/scaling/vertical-scaling/yamls/druid-cluster.yaml
@@ -0,0 +1,15 @@
+apiVersion: kubedb.com/v1alpha2
+kind: Druid
+metadata:
+ name: druid-cluster
+ namespace: demo
+spec:
+ version: 28.0.1
+ deepStorage:
+ type: s3
+ configSecret:
+ name: deep-storage-config
+ topology:
+ routers:
+ replicas: 1
+ deletionPolicy: Delete
diff --git a/docs/guides/druid/scaling/vertical-scaling/yamls/druid-vscale.yaml b/docs/guides/druid/scaling/vertical-scaling/yamls/druid-vscale.yaml
new file mode 100644
index 0000000000..38cf25d3ca
--- /dev/null
+++ b/docs/guides/druid/scaling/vertical-scaling/yamls/druid-vscale.yaml
@@ -0,0 +1,28 @@
+apiVersion: ops.kubedb.com/v1alpha1
+kind: DruidOpsRequest
+metadata:
+ name: druid-vscale
+ namespace: demo
+spec:
+ type: VerticalScaling
+ databaseRef:
+ name: druid-cluster
+ verticalScaling:
+ coordinators:
+ resources:
+ requests:
+ memory: "1.2Gi"
+ cpu: "0.6"
+ limits:
+ memory: "1.2Gi"
+ cpu: "0.6"
+ historicals:
+ resources:
+ requests:
+ memory: "1.1Gi"
+ cpu: "0.6"
+ limits:
+ memory: "1.1Gi"
+ cpu: "0.6"
+ timeout: 5m
+ apply: IfReady
diff --git a/docs/guides/druid/tls/_index.md b/docs/guides/druid/tls/_index.md
new file mode 100755
index 0000000000..2bf445ceea
--- /dev/null
+++ b/docs/guides/druid/tls/_index.md
@@ -0,0 +1,10 @@
+---
+title: Run Druid with TLS
+menu:
+ docs_{{ .version }}:
+ identifier: guides-druid-tls
+ name: TLS/SSL Encryption
+ parent: guides-druid
+ weight: 90
+menu_name: docs_{{ .version }}
+---
diff --git a/docs/guides/druid/tls/guide.md b/docs/guides/druid/tls/guide.md
new file mode 100644
index 0000000000..ead99ecbd5
--- /dev/null
+++ b/docs/guides/druid/tls/guide.md
@@ -0,0 +1,307 @@
+---
+title: Druid Combined TLS/SSL Encryption
+menu:
+ docs_{{ .version }}:
+ identifier: guides-druid-tls-guide
+ name: Druid TLS/SSL
+ parent: guides-druid-tls
+ weight: 30
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Run Druid with TLS/SSL (Transport Encryption)
+
+KubeDB supports providing TLS/SSL encryption for Druid. This tutorial will show you how to use KubeDB to run a Druid cluster with TLS/SSL encryption.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Install [`cert-manager`](https://cert-manager.io/docs/installation/) v1.0.0 or later in your cluster to manage your SSL/TLS certificates.
+
+- Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/setup/README.md).
+
+- To keep things isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+ ```bash
+ $ kubectl create ns demo
+ namespace/demo created
+ ```
+
+> Note: YAML files used in this tutorial are stored in [docs/examples/druid](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/druid) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Overview
+
+KubeDB uses the following CRD fields to enable SSL/TLS encryption in Druid.
+
+- `spec:`
+ - `enableSSL`
+ - `tls:`
+ - `issuerRef`
+    - `certificates`
+
+Read about the fields in detail in the [Druid concept](/docs/guides/druid/concepts/druid.md).
+
+`tls` is applicable for all types of Druid (i.e., `combined` and `topology`).
+
+Users must specify the `tls.issuerRef` field. KubeDB uses the `issuer` or `clusterIssuer` referenced in the `tls.issuerRef` field, and the certificate specs provided in `tls.certificates`, to generate certificate secrets. These certificate secrets are then used to generate the required certificates, including `ca.crt`, `tls.crt`, `tls.key`, `keystore.jks`, and `truststore.jks`.
+
+## Create Issuer/ ClusterIssuer
+
+We are going to create an example `Issuer` that will be used throughout the duration of this tutorial to enable SSL/TLS in Druid. Alternatively, you can follow this [cert-manager tutorial](https://cert-manager.io/docs/configuration/ca/) to create your own `Issuer`.
+
+- Start off by generating your CA certificate using openssl.
+
+```bash
+openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ./ca.key -out ./ca.crt -subj "/CN=druid/O=kubedb"
+```
+
+- Now create a ca-secret using the certificate files you have just generated.
+
+```bash
+kubectl create secret tls druid-ca \
+ --cert=ca.crt \
+ --key=ca.key \
+ --namespace=demo
+```
+
+Now, create an `Issuer` using the `ca-secret` you have just created. The `YAML` file looks like this:
+
+```yaml
+apiVersion: cert-manager.io/v1
+kind: Issuer
+metadata:
+ name: druid-ca-issuer
+ namespace: demo
+spec:
+ ca:
+ secretName: druid-ca
+```
+
+Apply the `YAML` file:
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/druid/tls/yamls/druid-ca-issuer.yaml
+issuer.cert-manager.io/druid-ca-issuer created
+```
+
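+Before moving on, you can optionally confirm that the `Issuer` is ready to sign certificates. This is a quick sanity check using cert-manager's standard `READY` column:
+
+```bash
+# The Issuer should report Ready=True before it can issue certificates
+$ kubectl get issuer -n demo druid-ca-issuer
+```
+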
+## TLS/SSL encryption in Druid Cluster
+
+### Create External Dependency (Deep Storage)
+
+Before proceeding further, we need to prepare deep storage, which is one of the external dependencies of Druid, used for storing the segments. It is a storage mechanism that Apache Druid does not provide itself. **Amazon S3**, **Google Cloud Storage**, **Azure Blob Storage**, **S3-compatible storage** (like **Minio**), or **HDFS** are generally convenient options for deep storage.
+
+In this tutorial, we will run a `minio-server` as deep storage in our local `kind` cluster using `minio-operator` and create a bucket named `druid` in it, which the deployed druid database will use.
+
+```bash
+
+$ helm repo add minio https://operator.min.io/
+$ helm repo update minio
+$ helm upgrade --install --namespace "minio-operator" --create-namespace "minio-operator" minio/operator --set operator.replicaCount=1
+
+$ helm upgrade --install --namespace "demo" --create-namespace druid-minio minio/tenant \
+--set tenant.pools[0].servers=1 \
+--set tenant.pools[0].volumesPerServer=1 \
+--set tenant.pools[0].size=1Gi \
+--set tenant.certificate.requestAutoCert=false \
+--set tenant.buckets[0].name="druid" \
+--set tenant.pools[0].name="default"
+
+```
+
+Now we need to create a `Secret` named `deep-storage-config`. It contains the necessary connection information using which the druid database will connect to the deep storage.
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+ name: deep-storage-config
+ namespace: demo
+stringData:
+ druid.storage.type: "s3"
+ druid.storage.bucket: "druid"
+ druid.storage.baseKey: "druid/segments"
+ druid.s3.accessKey: "minio"
+ druid.s3.secretKey: "minio123"
+ druid.s3.protocol: "http"
+ druid.s3.enablePathStyleAccess: "true"
+ druid.s3.endpoint.signingRegion: "us-east-1"
+ druid.s3.endpoint.url: "http://myminio-hl.demo.svc.cluster.local:9000/"
+```
+
+Let’s create the `deep-storage-config` Secret shown above:
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/druid/tls/yamls/deep-storage-config.yaml
+secret/deep-storage-config created
+```
+
+Now, let's go ahead and create a Druid cluster with TLS/SSL enabled.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Druid
+metadata:
+ name: druid-cluster-tls
+ namespace: demo
+spec:
+ version: 28.0.1
+ enableSSL: true
+ tls:
+ issuerRef:
+ apiGroup: "cert-manager.io"
+ kind: Issuer
+ name: druid-ca-issuer
+ deepStorage:
+ type: s3
+ configSecret:
+ name: deep-storage-config
+ topology:
+ routers:
+ replicas: 1
+ deletionPolicy: Delete
+```
+
+### Deploy Druid Topology Cluster with TLS/SSL
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/druid/tls/yamls/druid-cluster-tls.yaml
+druid.kubedb.com/druid-cluster-tls created
+```
+
+Now, wait until `druid-cluster-tls` has status `Ready`, i.e.,
+
+```bash
+$ kubectl get druid -n demo -w
+NAME TYPE VERSION STATUS AGE
+druid-cluster-tls kubedb.com/v1alpha2 28.0.1 Ready 20s
+druid-cluster-tls kubedb.com/v1alpha2 28.0.1 Provisioning 1m
+...
+...
+druid-cluster-tls kubedb.com/v1alpha2 28.0.1 Ready 38m
+```
+
+### Verify TLS/SSL in Druid Cluster
+
+```bash
+$ kubectl describe secret druid-cluster-tls-client-cert -n demo
+Name: druid-cluster-tls-client-cert
+Namespace: demo
+Labels: app.kubernetes.io/component=database
+ app.kubernetes.io/instance=druid-cluster-tls
+ app.kubernetes.io/managed-by=kubedb.com
+ app.kubernetes.io/name=druids.kubedb.com
+ controller.cert-manager.io/fao=true
+Annotations: cert-manager.io/alt-names:
+ *.druid-cluster-tls-pods.demo.svc.cluster.local,druid-cluster-tls-brokers-0.druid-cluster-tls-pods.demo.svc.cluster.local:8282,druid-clust...
+ cert-manager.io/certificate-name: druid-cluster-tls-client-cert
+ cert-manager.io/common-name: druid-cluster-tls-pods.demo.svc
+ cert-manager.io/ip-sans: 127.0.0.1
+ cert-manager.io/issuer-group: cert-manager.io
+ cert-manager.io/issuer-kind: Issuer
+ cert-manager.io/issuer-name: druid-ca-issuer
+ cert-manager.io/uri-sans:
+
+Type: kubernetes.io/tls
+
+Data
+====
+ca.crt: 1147 bytes
+keystore.jks: 3720 bytes
+tls-combined.pem: 3835 bytes
+tls.crt: 2126 bytes
+tls.key: 1708 bytes
+truststore.jks: 865 bytes
+```
+
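+If you later want to import the CA into your browser or inspect it, you can extract `ca.crt` from this secret. This is an optional, hedged example (note the escaped dot in the jsonpath key):
+
+```bash
+# Extract the CA certificate from the client-cert secret to a local file
+$ kubectl get secret -n demo druid-cluster-tls-client-cert -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt
+```
+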
+Now, let's exec into a Druid coordinators pod and verify from the configuration that TLS is enabled.
+
+```bash
+$ kubectl exec -it -n demo druid-cluster-tls-coordinators-0 -- bash
+Defaulted container "druid" out of: druid, init-druid (init)
+bash-5.1$ cat conf/druid/cluster/_common/common.runtime.properties
+druid.client.https.trustStorePassword={"type": "environment", "variable": "DRUID_KEY_STORE_PASSWORD"}
+druid.client.https.trustStorePath=/opt/druid/ssl/truststore.jks
+druid.client.https.trustStoreType=jks
+druid.emitter=noop
+druid.enablePlaintextPort=false
+druid.enableTlsPort=true
+druid.metadata.mysql.ssl.clientCertificateKeyStorePassword=password
+druid.metadata.mysql.ssl.clientCertificateKeyStoreType=JKS
+druid.metadata.mysql.ssl.clientCertificateKeyStoreUrl=/opt/druid/ssl/metadata/keystore.jks
+druid.metadata.mysql.ssl.useSSL=true
+druid.server.https.certAlias=druid
+druid.server.https.keyStorePassword={"type": "environment", "variable": "DRUID_KEY_STORE_PASSWORD"}
+druid.server.https.keyStorePath=/opt/druid/ssl/keystore.jks
+druid.server.https.keyStoreType=jks
+```
+
+We can see from the above output that all the TLS-related configuration has been added. The `MySQL` and `ZooKeeper` instances deployed with this Druid cluster are also TLS-secured, and their connection configs are added as well.
+
+#### Verify TLS/SSL using Druid UI
+
+To check, follow the steps below.
+
+Druid uses separate ports for TLS/SSL. While the plaintext port of the `routers` node is `8888`, the TLS port is `9088`. Hence, we will use that port to access the UI.
+
+First port-forward the port `9088` to local machine:
+
+```bash
+$ kubectl port-forward -n demo svc/druid-cluster-tls-routers 9088
+Forwarding from 127.0.0.1:9088 -> 9088
+Forwarding from [::1]:9088 -> 9088
+```
+
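+Before opening the browser, you can optionally verify that the router is actually serving a certificate signed by our CA. This is a hedged sanity check and assumes `openssl` is available on your workstation:
+
+```bash
+# Print the subject and issuer of the certificate served on the TLS port
+$ openssl s_client -connect localhost:9088 -servername localhost </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer
+```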
+
+Now hit `https://localhost:9088/` from any browser. Here you may select `Advanced` and then `Proceed to localhost (unsafe)`, or you can add the `ca.crt` from the secret `druid-cluster-tls-client-cert` to your browser's trusted authorities.
+
+After that, you will be prompted to provide the credentials of the Druid database. By following the steps discussed below, you can get the credentials generated by the KubeDB operator for your Druid database.
+
+**Connection information:**
+
+- Username:
+
+ ```bash
+ $ kubectl get secret -n demo druid-cluster-tls-admin-cred -o jsonpath='{.data.username}' | base64 -d
+ admin
+ ```
+
+- Password:
+
+ ```bash
+ $ kubectl get secret -n demo druid-cluster-tls-admin-cred -o jsonpath='{.data.password}' | base64 -d
+ LzJtVRX5E8MorFaf
+ ```
+
+After providing the credentials correctly, you should be able to access the web console like shown below.
+
+
+
+
+
+From the above output, we can see that the connection is secure.
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete druid -n demo druid-cluster-tls
+kubectl delete issuer -n demo druid-ca-issuer
+kubectl delete ns demo
+```
+
+## Next Steps
+
+- Detail concepts of [Druid object](/docs/guides/druid/concepts/druid.md).
+- Monitor your Druid cluster with KubeDB using [out-of-the-box Prometheus operator](/docs/guides/druid/monitoring/using-prometheus-operator.md).
+- Monitor your Druid cluster with KubeDB using [out-of-the-box builtin-Prometheus](/docs/guides/druid/monitoring/using-builtin-prometheus.md).
+
+[//]: # (- Use [kubedb cli](/docs/guides/druid/cli/cli.md) to manage databases like kubectl for Kubernetes.)
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).
diff --git a/docs/guides/druid/tls/images/druid-ui.png b/docs/guides/druid/tls/images/druid-ui.png
new file mode 100644
index 0000000000..9f173c38c2
Binary files /dev/null and b/docs/guides/druid/tls/images/druid-ui.png differ
diff --git a/docs/guides/druid/tls/images/tls.png b/docs/guides/druid/tls/images/tls.png
new file mode 100644
index 0000000000..7f21589742
Binary files /dev/null and b/docs/guides/druid/tls/images/tls.png differ
diff --git a/docs/guides/druid/tls/overview.md b/docs/guides/druid/tls/overview.md
new file mode 100644
index 0000000000..2022e51955
--- /dev/null
+++ b/docs/guides/druid/tls/overview.md
@@ -0,0 +1,70 @@
+---
+title: Druid TLS/SSL Encryption Overview
+menu:
+ docs_{{ .version }}:
+ identifier: guides-druid-tls-overview
+ name: Overview
+ parent: guides-druid-tls
+ weight: 10
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Druid TLS/SSL Encryption
+
+**Prerequisite:** To configure TLS/SSL in `Druid`, `KubeDB` uses `cert-manager` to issue certificates. So first you have to make sure that the cluster has `cert-manager` installed. To install `cert-manager` in your cluster, follow the steps [here](https://cert-manager.io/docs/installation/kubernetes/).
+
+To issue a certificate, the following CRDs of `cert-manager` are used:
+
+- `Issuer/ClusterIssuer`: Issuers and ClusterIssuers represent certificate authorities (CAs) that are able to generate signed certificates by honoring certificate signing requests. All cert-manager certificates require a referenced issuer that is in a ready condition to attempt to honor the request. You can learn more details [here](https://cert-manager.io/docs/concepts/issuer/).
+
+- `Certificate`: `cert-manager` has the concept of Certificates that define a desired x509 certificate which will be renewed and kept up to date. You can learn more details [here](https://cert-manager.io/docs/concepts/certificate/).
+
+**Druid CRD Specification :**
+
+KubeDB uses the following CRD fields to enable SSL/TLS encryption in `Druid`.
+
+- `spec:`
+ - `enableSSL`
+ - `tls:`
+ - `issuerRef`
+ - `certificates`
+
+Read about the fields in detail in the [Druid concept](/docs/guides/druid/concepts/druid.md).
+
+When `enableSSL` is set to `true`, the user must specify the `tls.issuerRef` field. `KubeDB` uses the `issuer` or `clusterIssuer` referenced in the `tls.issuerRef` field, and the certificate specs provided in `tls.certificates`, to generate certificate secrets using the `Issuer/ClusterIssuer` specification. These certificate secrets, including `ca.crt`, `tls.crt`, and `tls.key`, are used to configure the `druid` server and clients.
+
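+As a quick reference, the relevant part of a `Druid` spec with TLS enabled looks like the following minimal sketch (it matches the full example used in the TLS guide of this section):
+
+```yaml
+spec:
+  enableSSL: true
+  tls:
+    issuerRef:
+      apiGroup: cert-manager.io
+      kind: Issuer
+      name: druid-ca-issuer
+```
+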
+## How TLS/SSL is configured in Druid
+
+The following figure shows how the `KubeDB` Ops-manager operator configures TLS/SSL in Druid. Open the image in a new tab to see the enlarged version.
+
+
+
+The process of deploying Druid with TLS/SSL configuration consists of the following steps:
+
+1. At first, a user creates an `Issuer/ClusterIssuer` CR.
+
+2. Then the user creates a `Druid` CR which refers to the `Issuer/ClusterIssuer` CR that the user created in the previous step.
+
+3. `KubeDB` Provisioner operator watches for the `Druid` CR.
+
+4. When it finds one, it creates `Secret`, `Service`, etc. for the `Druid` cluster.
+
+5. `KubeDB` Ops-manager operator watches for `Druid`(5c), `Issuer/ClusterIssuer`(5b), `Secret` and `Service`(5a).
+
+6. When it finds all the resources (`Druid`, `Issuer/ClusterIssuer`, `Secret`, `Service`), it creates `Certificates` by using the `tls.issuerRef` and `tls.certificates` field specifications from the `Druid` CR.
+
+7. `cert-manager` watches for certificates.
+
+8. When it finds one, it creates certificate secrets `tls-secrets`(server, client, exporter secrets etc.) that holds the actual certificate signed by the CA.
+
+9. `KubeDB` Provisioner operator watches for the Certificate secrets `tls-secrets`.
+
+10. When it finds all the tls-secrets, it creates the related `PetSets` so that the Druid database can be configured with TLS/SSL.
+
+In the next doc, we are going to show a step-by-step guide on how to configure a `Druid` cluster with TLS/SSL.
\ No newline at end of file
diff --git a/docs/guides/druid/tls/yamls/deep-storage-config.yaml b/docs/guides/druid/tls/yamls/deep-storage-config.yaml
new file mode 100644
index 0000000000..3612595828
--- /dev/null
+++ b/docs/guides/druid/tls/yamls/deep-storage-config.yaml
@@ -0,0 +1,16 @@
+apiVersion: v1
+kind: Secret
+metadata:
+ name: deep-storage-config
+ namespace: demo
+stringData:
+ druid.storage.type: "s3"
+ druid.storage.bucket: "druid"
+ druid.storage.baseKey: "druid/segments"
+ druid.s3.accessKey: "minio"
+ druid.s3.secretKey: "minio123"
+ druid.s3.protocol: "http"
+ druid.s3.enablePathStyleAccess: "true"
+ druid.s3.endpoint.signingRegion: "us-east-1"
+ druid.s3.endpoint.url: "http://myminio-hl.demo.svc.cluster.local:9000/"
+
diff --git a/docs/guides/druid/tls/yamls/druid-ca-issuer.yaml b/docs/guides/druid/tls/yamls/druid-ca-issuer.yaml
new file mode 100644
index 0000000000..d6298c972c
--- /dev/null
+++ b/docs/guides/druid/tls/yamls/druid-ca-issuer.yaml
@@ -0,0 +1,8 @@
+apiVersion: cert-manager.io/v1
+kind: Issuer
+metadata:
+ name: druid-ca-issuer
+ namespace: demo
+spec:
+ ca:
+ secretName: druid-ca
diff --git a/docs/guides/druid/tls/yamls/druid-cluster-tls.yaml b/docs/guides/druid/tls/yamls/druid-cluster-tls.yaml
new file mode 100644
index 0000000000..902b5b36d4
--- /dev/null
+++ b/docs/guides/druid/tls/yamls/druid-cluster-tls.yaml
@@ -0,0 +1,21 @@
+apiVersion: kubedb.com/v1alpha2
+kind: Druid
+metadata:
+ name: druid-cluster-tls
+ namespace: demo
+spec:
+ version: 28.0.1
+ enableSSL: true
+ tls:
+ issuerRef:
+ apiGroup: "cert-manager.io"
+ kind: Issuer
+ name: druid-ca-issuer
+ deepStorage:
+ type: s3
+ configSecret:
+ name: deep-storage-config
+ topology:
+ routers:
+ replicas: 1
+ deletionPolicy: Delete
diff --git a/docs/guides/druid/update-version/_index.md b/docs/guides/druid/update-version/_index.md
new file mode 100644
index 0000000000..26c6ab4da1
--- /dev/null
+++ b/docs/guides/druid/update-version/_index.md
@@ -0,0 +1,10 @@
+---
+title: Update Version
+menu:
+ docs_{{ .version }}:
+ identifier: guides-druid-update-version
+ name: Update Version
+ parent: guides-druid
+ weight: 60
+menu_name: docs_{{ .version }}
+---
\ No newline at end of file
diff --git a/docs/guides/druid/update-version/guide.md b/docs/guides/druid/update-version/guide.md
new file mode 100644
index 0000000000..f5c31ca64d
--- /dev/null
+++ b/docs/guides/druid/update-version/guide.md
@@ -0,0 +1,448 @@
+---
+title: Update Version of Druid
+menu:
+ docs_{{ .version }}:
+ identifier: guides-druid-update-version-guide
+ name: Update Druid Version
+ parent: guides-druid-update-version
+ weight: 20
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Update version of Druid
+
+This guide will show you how to use the `KubeDB` Ops-manager operator to update the version of a `Druid` Combined or Topology cluster.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Install `KubeDB` Provisioner and Ops-manager operator in your cluster following the steps [here](/docs/setup/README.md).
+
+- You should be familiar with the following `KubeDB` concepts:
+ - [Druid](/docs/guides/druid/concepts/druid.md)
+ - [DruidOpsRequest](/docs/guides/druid/concepts/druidopsrequest.md)
+ - [Updating Overview](/docs/guides/druid/update-version/overview.md)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+> **Note:** YAML files used in this tutorial are stored in [docs/examples/druid](/docs/examples/druid) directory of [kubedb/docs](https://github.com/kubedb/docs) repository.
+
+## Prepare Druid
+
+Now, we are going to deploy a `Druid` cluster with version `28.0.1`.
+
+### Create External Dependency (Deep Storage)
+
+Before proceeding further, we need to prepare deep storage, which is one of the external dependencies of Druid, used for storing the segments. It is a storage mechanism that Apache Druid does not provide itself. **Amazon S3**, **Google Cloud Storage**, **Azure Blob Storage**, **S3-compatible storage** (like **Minio**), or **HDFS** are generally convenient options for deep storage.
+
+In this tutorial, we will run a `minio-server` as deep storage in our local `kind` cluster using `minio-operator` and create a bucket named `druid` in it, which the deployed druid database will use.
+
+```bash
+
+$ helm repo add minio https://operator.min.io/
+$ helm repo update minio
+$ helm upgrade --install --namespace "minio-operator" --create-namespace "minio-operator" minio/operator --set operator.replicaCount=1
+
+$ helm upgrade --install --namespace "demo" --create-namespace druid-minio minio/tenant \
+--set tenant.pools[0].servers=1 \
+--set tenant.pools[0].volumesPerServer=1 \
+--set tenant.pools[0].size=1Gi \
+--set tenant.certificate.requestAutoCert=false \
+--set tenant.buckets[0].name="druid" \
+--set tenant.pools[0].name="default"
+
+```
+
+Now we need to create a `Secret` named `deep-storage-config`. It contains the necessary connection information using which the druid database will connect to the deep storage.
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+ name: deep-storage-config
+ namespace: demo
+stringData:
+ druid.storage.type: "s3"
+ druid.storage.bucket: "druid"
+ druid.storage.baseKey: "druid/segments"
+ druid.s3.accessKey: "minio"
+ druid.s3.secretKey: "minio123"
+ druid.s3.protocol: "http"
+ druid.s3.enablePathStyleAccess: "true"
+ druid.s3.endpoint.signingRegion: "us-east-1"
+ druid.s3.endpoint.url: "http://myminio-hl.demo.svc.cluster.local:9000/"
+```
+
+Let’s create the `deep-storage-config` Secret shown above:
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/druid/update-version/yamls/deep-storage-config.yaml
+secret/deep-storage-config created
+```
+
+### Deploy Druid
+
+In this section, we are going to deploy a Druid topology cluster. Then, in the next section we will update the version using `DruidOpsRequest` CRD. Below is the YAML of the `Druid` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Druid
+metadata:
+  name: druid-cluster
+ namespace: demo
+spec:
+ version: 28.0.1
+ deepStorage:
+ type: s3
+ configSecret:
+ name: deep-storage-config
+ topology:
+ routers:
+ replicas: 1
+ deletionPolicy: Delete
+```
+
+Let's create the `Druid` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/druid/update-version/yamls/druid-cluster.yaml
+druid.kubedb.com/druid-cluster created
+```
+
+Now, wait until `druid-cluster` has status `Ready`, i.e.,
+
+```bash
+$ kubectl get dr -n demo -w
+NAME TYPE VERSION STATUS AGE
+druid-cluster   kubedb.com/v1alpha2   28.0.1    Provisioning   0s
+druid-cluster   kubedb.com/v1alpha2   28.0.1    Provisioning   55s
+.
+.
+druid-cluster   kubedb.com/v1alpha2   28.0.1    Ready          119s
+```
+
+We are now ready to apply the `DruidOpsRequest` CR to update.
+
+#### Check Druid Version from UI:
+
+You can also see the version of the Druid cluster from the Druid UI. For that, follow the steps below:
+
+First, port-forward the port `8888` to local machine:
+
+```bash
+$ kubectl port-forward -n demo svc/druid-cluster-routers 8888
+Forwarding from 127.0.0.1:8888 -> 8888
+Forwarding from [::1]:8888 -> 8888
+```
+
+Now hit `http://localhost:8888` from any browser, and you will be prompted to provide the credentials of the Druid database. By following the steps discussed below, you can get the credentials generated by the KubeDB operator for your Druid database.
+
+**Connection information:**
+
+- Username:
+
+ ```bash
+ $ kubectl get secret -n demo druid-cluster-admin-cred -o jsonpath='{.data.username}' | base64 -d
+ admin
+ ```
+
+- Password:
+
+ ```bash
+ $ kubectl get secret -n demo druid-cluster-admin-cred -o jsonpath='{.data.password}' | base64 -d
+ LzJtVRX5E8MorFaf
+ ```
+
+After providing the credentials correctly, you should be able to access the web console like shown below.
+
+
+
+
+
+
+Here, we can see that the version of the Druid cluster is `28.0.1`.
+
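+Alternatively, if you prefer the command line, the router exposes Druid's `/status` API, which includes the running version in its JSON response. This is a hedged example that assumes the port-forward above is still running and uses the admin credentials retrieved earlier (replace `<password>` with your actual password):
+
+```bash
+# Query the Druid /status endpoint through the router and print the version field
+$ curl -s -u admin:'<password>' http://localhost:8888/status | jq '.version'
+```
+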
+### Update Druid Version
+
+Here, we are going to update `Druid` from `28.0.1` to `30.0.0`.
+
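+Before creating the ops request, you can list the Druid versions supported by your KubeDB installation to confirm that the target version is available. The exact list depends on the version catalog installed in your cluster:
+
+```bash
+# List the DruidVersion objects known to KubeDB
+$ kubectl get druidversion
+```
+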
+#### Create DruidOpsRequest:
+
+In order to update the version, we have to create a `DruidOpsRequest` CR with our desired version that is supported by `KubeDB`. Below is the YAML of the `DruidOpsRequest` CR that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: DruidOpsRequest
+metadata:
+ name: druid-update-version
+ namespace: demo
+spec:
+ type: UpdateVersion
+ databaseRef:
+ name: druid-cluster
+ updateVersion:
+ targetVersion: 30.0.0
+ timeout: 5m
+ apply: IfReady
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing the operation on the `druid-cluster` Druid cluster.
+- `spec.type` specifies that we are going to perform `UpdateVersion` on our database.
+- `spec.updateVersion.targetVersion` specifies `30.0.0` as the expected version of the database.
+
+Let's create the `DruidOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/druid/update-version/yamls/update-version-ops.yaml
+druidopsrequest.ops.kubedb.com/druid-update-version created
+```
+
+#### Verify Druid version updated successfully
+
+If everything goes well, `KubeDB` Ops-manager operator will update the image of `Druid` object and related `PetSets` and `Pods`.
+
+Let's wait for `DruidOpsRequest` to be `Successful`. Run the following command to watch `DruidOpsRequest` CR,
+
+```bash
+$ watch kubectl get druidopsrequest -n demo
+NAME TYPE STATUS AGE
+druid-update-version UpdateVersion Successful 2m6s
+```
+
+We can see from the above output that the `DruidOpsRequest` has succeeded. If we describe the `DruidOpsRequest` we will get an overview of the steps that were followed to update the database version.
+
+```bash
+$ kubectl describe druidopsrequest -n demo druid-update-version
+Name: druid-update-version
+Namespace: demo
+Labels:
+Annotations:
+API Version: ops.kubedb.com/v1alpha1
+Kind: DruidOpsRequest
+Metadata:
+ Creation Timestamp: 2024-10-21T13:04:51Z
+ Generation: 1
+ Managed Fields:
+ API Version: ops.kubedb.com/v1alpha1
+ Fields Type: FieldsV1
+ fieldsV1:
+ f:metadata:
+ f:annotations:
+ .:
+ f:kubectl.kubernetes.io/last-applied-configuration:
+ f:spec:
+ .:
+ f:apply:
+ f:databaseRef:
+ f:timeout:
+ f:type:
+ f:updateVersion:
+ .:
+ f:targetVersion:
+ Manager: kubectl-client-side-apply
+ Operation: Update
+ Time: 2024-10-21T13:04:51Z
+ API Version: ops.kubedb.com/v1alpha1
+ Fields Type: FieldsV1
+ fieldsV1:
+ f:status:
+ .:
+ f:conditions:
+ f:observedGeneration:
+ f:phase:
+ Manager: kubedb-ops-manager
+ Operation: Update
+ Subresource: status
+ Time: 2024-10-21T13:08:46Z
+ Resource Version: 103855
+ UID: 5d470e24-37fd-4e16-b7a3-33040dcefe3d
+Spec:
+ Apply: IfReady
+ Database Ref:
+ Name: druid-cluster
+ Timeout: 5m
+ Type: UpdateVersion
+ Update Version:
+ Target Version: 30.0.0
+Status:
+ Conditions:
+ Last Transition Time: 2024-10-21T13:04:51Z
+ Message: Druid ops-request has started to update version
+ Observed Generation: 1
+ Reason: UpdateVersion
+ Status: True
+ Type: UpdateVersion
+ Last Transition Time: 2024-10-21T13:04:56Z
+ Message: successfully reconciled the Druid with updated version
+ Observed Generation: 1
+ Reason: UpdatePetSets
+ Status: True
+ Type: UpdatePetSets
+ Last Transition Time: 2024-10-21T13:08:46Z
+ Message: Successfully Restarted Druid nodes
+ Observed Generation: 1
+ Reason: RestartPods
+ Status: True
+ Type: RestartPods
+ Last Transition Time: 2024-10-21T13:05:01Z
+ Message: get pod; ConditionStatus:True; PodName:druid-cluster-historicals-0
+ Observed Generation: 1
+ Status: True
+ Type: GetPod--druid-cluster-historicals-0
+ Last Transition Time: 2024-10-21T13:05:01Z
+ Message: evict pod; ConditionStatus:True; PodName:druid-cluster-historicals-0
+ Observed Generation: 1
+ Status: True
+ Type: EvictPod--druid-cluster-historicals-0
+ Last Transition Time: 2024-10-21T13:08:01Z
+ Message: check pod running; ConditionStatus:True; PodName:druid-cluster-historicals-0
+ Observed Generation: 1
+ Status: True
+ Type: CheckPodRunning--druid-cluster-historicals-0
+ Last Transition Time: 2024-10-21T13:08:06Z
+ Message: get pod; ConditionStatus:True; PodName:druid-cluster-middlemanagers-0
+ Observed Generation: 1
+ Status: True
+ Type: GetPod--druid-cluster-middlemanagers-0
+ Last Transition Time: 2024-10-21T13:08:06Z
+ Message: evict pod; ConditionStatus:True; PodName:druid-cluster-middlemanagers-0
+ Observed Generation: 1
+ Status: True
+ Type: EvictPod--druid-cluster-middlemanagers-0
+ Last Transition Time: 2024-10-21T13:08:11Z
+ Message: check pod running; ConditionStatus:True; PodName:druid-cluster-middlemanagers-0
+ Observed Generation: 1
+ Status: True
+ Type: CheckPodRunning--druid-cluster-middlemanagers-0
+ Last Transition Time: 2024-10-21T13:08:16Z
+ Message: get pod; ConditionStatus:True; PodName:druid-cluster-brokers-0
+ Observed Generation: 1
+ Status: True
+ Type: GetPod--druid-cluster-brokers-0
+ Last Transition Time: 2024-10-21T13:08:16Z
+ Message: evict pod; ConditionStatus:True; PodName:druid-cluster-brokers-0
+ Observed Generation: 1
+ Status: True
+ Type: EvictPod--druid-cluster-brokers-0
+ Last Transition Time: 2024-10-21T13:08:21Z
+ Message: check pod running; ConditionStatus:True; PodName:druid-cluster-brokers-0
+ Observed Generation: 1
+ Status: True
+ Type: CheckPodRunning--druid-cluster-brokers-0
+ Last Transition Time: 2024-10-21T13:08:26Z
+ Message: get pod; ConditionStatus:True; PodName:druid-cluster-routers-0
+ Observed Generation: 1
+ Status: True
+ Type: GetPod--druid-cluster-routers-0
+ Last Transition Time: 2024-10-21T13:08:26Z
+ Message: evict pod; ConditionStatus:True; PodName:druid-cluster-routers-0
+ Observed Generation: 1
+ Status: True
+ Type: EvictPod--druid-cluster-routers-0
+ Last Transition Time: 2024-10-21T13:08:31Z
+ Message: check pod running; ConditionStatus:True; PodName:druid-cluster-routers-0
+ Observed Generation: 1
+ Status: True
+ Type: CheckPodRunning--druid-cluster-routers-0
+ Last Transition Time: 2024-10-21T13:08:36Z
+ Message: get pod; ConditionStatus:True; PodName:druid-cluster-coordinators-0
+ Observed Generation: 1
+ Status: True
+ Type: GetPod--druid-cluster-coordinators-0
+ Last Transition Time: 2024-10-21T13:08:36Z
+ Message: evict pod; ConditionStatus:True; PodName:druid-cluster-coordinators-0
+ Observed Generation: 1
+ Status: True
+ Type: EvictPod--druid-cluster-coordinators-0
+ Last Transition Time: 2024-10-21T13:08:41Z
+ Message: check pod running; ConditionStatus:True; PodName:druid-cluster-coordinators-0
+ Observed Generation: 1
+ Status: True
+ Type: CheckPodRunning--druid-cluster-coordinators-0
+ Last Transition Time: 2024-10-21T13:08:46Z
+ Message: Successfully completed update druid version
+ Observed Generation: 1
+ Reason: Successful
+ Status: True
+ Type: Successful
+ Observed Generation: 1
+ Phase: Successful
+Events:
+ Type Reason Age From Message
+ ---- ------ ---- ---- -------
+ Normal Starting 21m KubeDB Ops-manager Operator Start processing for DruidOpsRequest: demo/druid-update-version
+ Normal UpdatePetSets 21m KubeDB Ops-manager Operator successfully reconciled the Druid with updated version
+ Warning get pod; ConditionStatus:True; PodName:druid-cluster-historicals-0 21m KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:druid-cluster-historicals-0
+ Warning evict pod; ConditionStatus:True; PodName:druid-cluster-historicals-0 21m KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:druid-cluster-historicals-0
+ Warning check pod running; ConditionStatus:False; PodName:druid-cluster-historicals-0 21m KubeDB Ops-manager Operator check pod running; ConditionStatus:False; PodName:druid-cluster-historicals-0
+ Warning check pod running; ConditionStatus:True; PodName:druid-cluster-historicals-0 18m KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:druid-cluster-historicals-0
+ Warning get pod; ConditionStatus:True; PodName:druid-cluster-middlemanagers-0 18m KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:druid-cluster-middlemanagers-0
+ Warning evict pod; ConditionStatus:True; PodName:druid-cluster-middlemanagers-0 18m KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:druid-cluster-middlemanagers-0
+ Warning check pod running; ConditionStatus:True; PodName:druid-cluster-middlemanagers-0 18m KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:druid-cluster-middlemanagers-0
+ Warning get pod; ConditionStatus:True; PodName:druid-cluster-brokers-0 17m KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:druid-cluster-brokers-0
+ Warning evict pod; ConditionStatus:True; PodName:druid-cluster-brokers-0 17m KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:druid-cluster-brokers-0
+ Warning check pod running; ConditionStatus:True; PodName:druid-cluster-brokers-0 17m KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:druid-cluster-brokers-0
+ Warning get pod; ConditionStatus:True; PodName:druid-cluster-routers-0 17m KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:druid-cluster-routers-0
+ Warning evict pod; ConditionStatus:True; PodName:druid-cluster-routers-0 17m KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:druid-cluster-routers-0
+ Warning check pod running; ConditionStatus:True; PodName:druid-cluster-routers-0 17m KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:druid-cluster-routers-0
+ Warning get pod; ConditionStatus:True; PodName:druid-cluster-coordinators-0 17m KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:druid-cluster-coordinators-0
+ Warning evict pod; ConditionStatus:True; PodName:druid-cluster-coordinators-0 17m KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:druid-cluster-coordinators-0
+ Warning check pod running; ConditionStatus:True; PodName:druid-cluster-coordinators-0 17m KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:druid-cluster-coordinators-0
+ Normal RestartPods 17m KubeDB Ops-manager Operator Successfully Restarted Druid nodes
+ Normal Starting 17m KubeDB Ops-manager Operator Resuming Druid database: demo/druid-cluster
+ Normal Successful 17m KubeDB Ops-manager Operator Successfully resumed Druid database: demo/druid-cluster for DruidOpsRequest: druid-update-version
+```
+
+Now, we are going to verify whether the `Druid` and the related `PetSets` and their `Pods` have the new version image. Let's check,
+
+```bash
+$ kubectl get dr -n demo druid-cluster -o=jsonpath='{.spec.version}{"\n"}'
+30.0.0
+
+$ kubectl get petset -n demo druid-cluster-brokers -o=jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
+ghcr.io/appscode-images/druid:30.0.0@sha256:4cd60a1dc6a124e27e91ec52ca39e2b9ca6809df915ae2dd712a2dd7462626d7
+
+$ kubectl get pods -n demo druid-cluster-brokers-0 -o=jsonpath='{.spec.containers[0].image}{"\n"}'
+ghcr.io/appscode-images/druid:30.0.0
+```
+
+You can see from the above output that our `Druid` cluster has been updated with the new version. So, the updateVersion process has completed successfully.
+
+#### Verify updated Druid Version from UI:
+
+You can also see the version of the Druid cluster from the Druid UI by following the steps described previously in this tutorial: [Check Druid Version from UI](/docs/guides/druid/update-version/guide.md#check-druid-version-from-ui).
+
+If you follow the steps properly, you should be able to see that the version has been upgraded to `30.0.0` from the Druid console, as shown below.
+
+
+
+
+
+## Cleaning Up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete druidopsrequest -n demo druid-update-version
+kubectl delete dr -n demo druid-cluster
+kubectl delete ns demo
+```
+
+## Next Steps
+
+- Detail concepts of [Druid object](/docs/guides/druid/concepts/druid.md).
+- Different Druid topology clustering modes [here](/docs/guides/druid/clustering/_index.md).
+- Monitor your Druid database with KubeDB using [out-of-the-box Prometheus operator](/docs/guides/druid/monitoring/using-prometheus-operator.md).
+
+[//]: # (- Monitor your Druid database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/guides/druid/monitoring/using-builtin-prometheus.md).)
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).
diff --git a/docs/guides/druid/update-version/images/dr-update-version.png b/docs/guides/druid/update-version/images/dr-update-version.png
new file mode 100644
index 0000000000..b61db35bc0
Binary files /dev/null and b/docs/guides/druid/update-version/images/dr-update-version.png differ
diff --git a/docs/guides/druid/update-version/images/druid-ui-28.png b/docs/guides/druid/update-version/images/druid-ui-28.png
new file mode 100644
index 0000000000..d5c74a4ad2
Binary files /dev/null and b/docs/guides/druid/update-version/images/druid-ui-28.png differ
diff --git a/docs/guides/druid/update-version/images/druid-ui-30.png b/docs/guides/druid/update-version/images/druid-ui-30.png
new file mode 100644
index 0000000000..f1da2d8ef1
Binary files /dev/null and b/docs/guides/druid/update-version/images/druid-ui-30.png differ
diff --git a/docs/guides/druid/update-version/overview.md b/docs/guides/druid/update-version/overview.md
new file mode 100644
index 0000000000..b1c2f21ef5
--- /dev/null
+++ b/docs/guides/druid/update-version/overview.md
@@ -0,0 +1,53 @@
+---
+title: Update Version Overview
+menu:
+ docs_{{ .version }}:
+ identifier: guides-druid-update-version-overview
+ name: Overview
+ parent: guides-druid-update-version
+ weight: 10
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Druid Update Version Overview
+
+This guide will give you an overview of how the KubeDB Ops-manager operator updates the version of `Druid`.
+
+## Before You Begin
+
+- You should be familiar with the following `KubeDB` concepts:
+ - [Druid](/docs/guides/druid/concepts/druid.md)
+ - [DruidOpsRequest](/docs/guides/druid/concepts/druidopsrequest.md)
+
+## How update version Process Works
+
+The following diagram shows how the KubeDB Ops-manager operator updates the version of `Druid`. Open the image in a new tab to see the enlarged version.
+
+
+
+The updating process consists of the following steps:
+
+1. At first, a user creates a `Druid` Custom Resource (CR).
+
+2. `KubeDB` Provisioner operator watches the `Druid` CR.
+
+3. When the operator finds a `Druid` CR, it creates the required number of `PetSets` and other necessary resources like secrets, services, etc.
+
+4. Then, in order to update the version of the `Druid` database, the user creates a `DruidOpsRequest` CR with the desired version.
+
+5. `KubeDB` Ops-manager operator watches the `DruidOpsRequest` CR.
+
+6. When it finds a `DruidOpsRequest` CR, it halts the `Druid` object which is referred from the `DruidOpsRequest`. So, the `KubeDB` Provisioner operator doesn't perform any operations on the `Druid` object during the updating process.
+
+7. Looking at the target version from the `DruidOpsRequest` CR, the `KubeDB` Ops-manager operator updates the images of all the `PetSets`.
+
+8. After successfully updating the `PetSets` and their Pods' images, the `KubeDB` Ops-manager operator updates the image of the `Druid` object to reflect the updated state of the database (a quick verification sketch follows this list).
+
+9. After the `Druid` object is successfully updated, the `KubeDB` Ops-manager operator resumes the `Druid` object so that the `KubeDB` Provisioner operator can resume its usual operations.
+
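+Once the ops request completes, you can spot-check the rollout yourself. A minimal sketch, assuming the `druid-cluster` object from the accompanying guide and that the Druid container is the first container in the PetSet template:
+
+```bash
+# Image currently set on the historicals PetSet
+$ kubectl get petset -n demo druid-cluster-historicals -o jsonpath='{.spec.template.spec.containers[0].image}'
+
+# Version recorded on the Druid object itself
+$ kubectl get dr -n demo druid-cluster -o jsonpath='{.spec.version}'
+```
+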
+In the next doc, we are going to show a step-by-step guide on updating a Druid database using the updateVersion operation.
\ No newline at end of file
diff --git a/docs/guides/druid/update-version/yamls/deep-storage-config.yaml b/docs/guides/druid/update-version/yamls/deep-storage-config.yaml
new file mode 100644
index 0000000000..3612595828
--- /dev/null
+++ b/docs/guides/druid/update-version/yamls/deep-storage-config.yaml
@@ -0,0 +1,16 @@
+apiVersion: v1
+kind: Secret
+metadata:
+ name: deep-storage-config
+ namespace: demo
+stringData:
+ druid.storage.type: "s3"
+ druid.storage.bucket: "druid"
+ druid.storage.baseKey: "druid/segments"
+ druid.s3.accessKey: "minio"
+ druid.s3.secretKey: "minio123"
+ druid.s3.protocol: "http"
+ druid.s3.enablePathStyleAccess: "true"
+ druid.s3.endpoint.signingRegion: "us-east-1"
+ druid.s3.endpoint.url: "http://myminio-hl.demo.svc.cluster.local:9000/"
+
diff --git a/docs/guides/druid/update-version/yamls/druid-cluster.yaml b/docs/guides/druid/update-version/yamls/druid-cluster.yaml
new file mode 100644
index 0000000000..7a89d0dc91
--- /dev/null
+++ b/docs/guides/druid/update-version/yamls/druid-cluster.yaml
@@ -0,0 +1,15 @@
+apiVersion: kubedb.com/v1alpha2
+kind: Druid
+metadata:
+ name: druid-cluster
+ namespace: demo
+spec:
+ version: 28.0.1
+ deepStorage:
+ type: s3
+ configSecret:
+ name: deep-storage-config
+ topology:
+ routers:
+ replicas: 1
+ deletionPolicy: Delete
diff --git a/docs/guides/druid/update-version/yamls/update-version-ops.yaml b/docs/guides/druid/update-version/yamls/update-version-ops.yaml
new file mode 100644
index 0000000000..a6aaa91063
--- /dev/null
+++ b/docs/guides/druid/update-version/yamls/update-version-ops.yaml
@@ -0,0 +1,13 @@
+apiVersion: ops.kubedb.com/v1alpha1
+kind: DruidOpsRequest
+metadata:
+ name: druid-update-version
+ namespace: demo
+spec:
+ type: UpdateVersion
+ databaseRef:
+ name: druid-cluster
+ updateVersion:
+ targetVersion: 30.0.0
+ timeout: 5m
+ apply: IfReady
\ No newline at end of file
diff --git a/docs/guides/druid/volume-expansion/_index.md b/docs/guides/druid/volume-expansion/_index.md
new file mode 100644
index 0000000000..50632cd875
--- /dev/null
+++ b/docs/guides/druid/volume-expansion/_index.md
@@ -0,0 +1,10 @@
+---
+title: Volume Expansion
+menu:
+ docs_{{ .version }}:
+ identifier: guides-druid-volume-expansion
+ name: Volume Expansion
+ parent: guides-druid
+ weight: 80
+menu_name: docs_{{ .version }}
+---
\ No newline at end of file
diff --git a/docs/guides/druid/volume-expansion/guide.md b/docs/guides/druid/volume-expansion/guide.md
new file mode 100644
index 0000000000..d9a110aa18
--- /dev/null
+++ b/docs/guides/druid/volume-expansion/guide.md
@@ -0,0 +1,498 @@
+---
+title: Druid Topology Volume Expansion
+menu:
+ docs_{{ .version }}:
+ identifier: guides-druid-volume-expansion-guide
+ name: Druid Volume Expansion
+ parent: guides-druid-volume-expansion
+ weight: 30
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Druid Topology Volume Expansion
+
+This guide will show you how to use `KubeDB` Ops-manager operator to expand the volume of a Druid Topology Cluster.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster.
+
+- You must have a `StorageClass` that supports volume expansion.
+
+- Install `KubeDB` Provisioner and Ops-manager operator in your cluster following the steps [here](/docs/setup/README.md).
+
+- You should be familiar with the following `KubeDB` concepts:
+ - [Druid](/docs/guides/druid/concepts/druid.md)
+ - [Topology](/docs/guides/druid/clustering/overview/index.md)
+ - [DruidOpsRequest](/docs/guides/druid/concepts/druidopsrequest.md)
+ - [Volume Expansion Overview](/docs/guides/druid/volume-expansion/overview.md)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+> Note: The yaml files used in this tutorial are stored in [docs/examples/druid](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/druid) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Expand Volume of Topology Druid Cluster
+
+Here, we are going to deploy a `Druid` topology cluster using a version supported by the `KubeDB` operator. Then we are going to apply a `DruidOpsRequest` to expand its volume.
+
+### Prepare Druid Topology Cluster
+
+First, verify that your cluster has a storage class that supports volume expansion. Let's check,
+
+```bash
+$ kubectl get storageclass
+NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
+local-path (default) rancher.io/local-path Delete WaitForFirstConsumer false 28h
+longhorn (default) driver.longhorn.io Delete Immediate true 27h
+longhorn-static driver.longhorn.io Delete Immediate true 27h
+```
+
+We can see from the output that the `longhorn` storage class has the `ALLOWVOLUMEEXPANSION` field set to `true`, so it supports volume expansion and we can use it.
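+
+If the storage class you want to use does not allow expansion and its CSI driver supports resizing, you can usually enable it by patching the `allowVolumeExpansion` field. A minimal sketch, shown here for the `longhorn` class used in this guide:
+
+```bash
+$ kubectl patch storageclass longhorn -p '{"allowVolumeExpansion": true}'
+storageclass.storage.k8s.io/longhorn patched
+```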
+
+### Create External Dependency (Deep Storage)
+
+Before proceeding further, we need to prepare deep storage, one of the external dependencies of Druid, used for storing segments. It is a storage mechanism that Apache Druid does not provide itself. **Amazon S3**, **Google Cloud Storage**, **Azure Blob Storage**, **S3-compatible storage** (like **Minio**), or **HDFS** are generally convenient options for deep storage.
+
+In this tutorial, we will run a `minio-server` as deep storage in our local `kind` cluster using `minio-operator` and create a bucket named `druid` in it, which the deployed druid database will use.
+
+```bash
+$ helm repo add minio https://operator.min.io/
+$ helm repo update minio
+$ helm upgrade --install --namespace "minio-operator" --create-namespace "minio-operator" minio/operator --set operator.replicaCount=1
+
+$ helm upgrade --install --namespace "demo" --create-namespace druid-minio minio/tenant \
+--set tenant.pools[0].servers=1 \
+--set tenant.pools[0].volumesPerServer=1 \
+--set tenant.pools[0].size=1Gi \
+--set tenant.certificate.requestAutoCert=false \
+--set tenant.buckets[0].name="druid" \
+--set tenant.pools[0].name="default"
+
+```
+
+Now we need to create a `Secret` named `deep-storage-config`. It contains the connection information that the Druid database will use to connect to the deep storage.
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+ name: deep-storage-config
+ namespace: demo
+stringData:
+ druid.storage.type: "s3"
+ druid.storage.bucket: "druid"
+ druid.storage.baseKey: "druid/segments"
+ druid.s3.accessKey: "minio"
+ druid.s3.secretKey: "minio123"
+ druid.s3.protocol: "http"
+ druid.s3.enablePathStyleAccess: "true"
+ druid.s3.endpoint.signingRegion: "us-east-1"
+ druid.s3.endpoint.url: "http://myminio-hl.demo.svc.cluster.local:9000/"
+```
+
+Let’s create the `deep-storage-config` Secret shown above:
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/druid/volume-expansion/yamls/deep-storage-config.yaml
+secret/deep-storage-config created
+```
+
+Now, we are going to deploy a `Druid` topology cluster with version `28.0.1`.
+
+### Deploy Druid
+
+In this section, we are going to deploy a Druid topology cluster with 1Gi of storage for the historicals and middleManagers nodes. Then, in the next section, we will expand its volume to 2Gi using the `DruidOpsRequest` CRD. Below is the YAML of the `Druid` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Druid
+metadata:
+ name: druid-cluster
+ namespace: demo
+spec:
+ version: 28.0.1
+ deepStorage:
+ type: s3
+ configSecret:
+ name: deep-storage-config
+ topology:
+ historicals:
+ replicas: 1
+ storage:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
+ storageType: Durable
+ middleManagers:
+ replicas: 1
+ storage:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
+ storageType: Durable
+ routers:
+ replicas: 1
+ deletionPolicy: Delete
+```
+
+Let's create the `Druid` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/druid/volume-expansion/yamls/druid-cluster.yaml
+druid.kubedb.com/druid-cluster created
+```
+
+Now, wait until `druid-cluster` has status `Ready`. i.e,
+
+```bash
+$ kubectl get dr -n demo -w
+NAME TYPE VERSION STATUS AGE
+druid-cluster kubedb.com/v1alpha2 28.0.1 Provisioning 0s
+druid-cluster kubedb.com/v1alpha2 28.0.1 Provisioning 9s
+.
+.
+druid-cluster kubedb.com/v1alpha2 28.0.1 Ready 3m26s
+```
+
+Let's check volume size from petset, and from the persistent volume,
+
+```bash
+$ kubectl get petset -n demo druid-cluster-historicals -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
+"1Gi"
+
+$ kubectl get petset -n demo druid-cluster-middleManagers -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
+"1Gi"
+
+$ kubectl get pv -n demo
+NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS VOLUMEATTRIBUTESCLASS REASON AGE
+pvc-0bf49077-1c7a-4943-bb17-1dffd1626dcd 1Gi RWO Delete Bound demo/druid-cluster-segment-cache-druid-cluster-historicals-0 longhorn 10m
+pvc-59ed4914-53b3-4f18-a6aa-7699c2b738e2 1Gi RWO Delete Bound demo/druid-cluster-base-task-dir-druid-cluster-middlemanagers-0 longhorn 10m
+```
+
+You can see the PetSets have 1Gi storage, and the capacity of each of the persistent volumes is also 1Gi.
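+
+You can also list the PVCs directly to note their names and requested sizes, which is handy when verifying the expansion later:
+
+```bash
+$ kubectl get pvc -n demo
+```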
+
+We are now ready to apply the `DruidOpsRequest` CR to expand the volume of this database.
+
+### Volume Expansion
+
+Here, we are going to expand the volume of the druid topology cluster.
+
+#### Create DruidOpsRequest
+
+In order to expand the volume of the database, we have to create a `DruidOpsRequest` CR with our desired volume size. Below is the YAML of the `DruidOpsRequest` CR that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: DruidOpsRequest
+metadata:
+ name: dr-volume-exp
+ namespace: demo
+spec:
+ type: VolumeExpansion
+ databaseRef:
+ name: druid-cluster
+ volumeExpansion:
+ historicals: 2Gi
+ middleManagers: 2Gi
+ mode: Offline
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing volume expansion operation on `druid-cluster`.
+- `spec.type` specifies that we are performing `VolumeExpansion` on our database.
+- `spec.volumeExpansion.historicals` specifies the desired volume size for historicals node.
+- `spec.volumeExpansion.middleManagers` specifies the desired volume size for middleManagers node.
+- `spec.volumeExpansion.mode` specifies the desired volume expansion mode(`Online` or `Offline`).
+
+During an `Online` volume expansion, KubeDB expands the volume without pausing the database object; it directly updates the underlying PVCs. For an `Offline` volume expansion, the database is paused, the Pods are deleted, the PVCs are updated, and then the database Pods are recreated with the updated PVCs.
+
+> If you want to expand the volume of only one node, you can specify the desired volume size for that node only.
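+
+For instance, a minimal sketch of an ops request that expands only the historicals volume (the name `dr-volume-exp-historicals` is illustrative):
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: DruidOpsRequest
+metadata:
+  name: dr-volume-exp-historicals
+  namespace: demo
+spec:
+  type: VolumeExpansion
+  databaseRef:
+    name: druid-cluster
+  volumeExpansion:
+    historicals: 2Gi
+    mode: Offline
+```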
+
+Let's create the `DruidOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/druid/volume-expansion/yamls/volume-expansion-ops.yaml
+druidopsrequest.ops.kubedb.com/dr-volume-exp created
+```
+
+#### Verify Druid Topology volume expanded successfully
+
+If everything goes well, the `KubeDB` Ops-manager operator will update the volume size of the `Druid` object and the related `PetSets` and `Persistent Volumes`.
+
+Let's wait for `DruidOpsRequest` to be `Successful`. Run the following command to watch `DruidOpsRequest` CR,
+
+```bash
+$ kubectl get druidopsrequest -n demo
+NAME TYPE STATUS AGE
+dr-volume-exp VolumeExpansion Successful 3m1s
+```
+
+We can see from the above output that the `DruidOpsRequest` has succeeded. If we describe the `DruidOpsRequest`, we will get an overview of the steps that were followed to expand the volume of the Druid cluster.
+
+```bash
+$ kubectl describe druidopsrequest -n demo dr-volume-exp
+Name: dr-volume-exp
+Namespace: demo
+Labels:
+Annotations:
+API Version: ops.kubedb.com/v1alpha1
+Kind: DruidOpsRequest
+Metadata:
+ Creation Timestamp: 2024-10-25T09:22:02Z
+ Generation: 1
+ Managed Fields:
+ API Version: ops.kubedb.com/v1alpha1
+ Fields Type: FieldsV1
+ fieldsV1:
+ f:metadata:
+ f:annotations:
+ .:
+ f:kubectl.kubernetes.io/last-applied-configuration:
+ f:spec:
+ .:
+ f:apply:
+ f:databaseRef:
+ f:type:
+ f:volumeExpansion:
+ .:
+ f:historicals:
+ f:middleManagers:
+ f:mode:
+ Manager: kubectl-client-side-apply
+ Operation: Update
+ Time: 2024-10-25T09:22:02Z
+ API Version: ops.kubedb.com/v1alpha1
+ Fields Type: FieldsV1
+ fieldsV1:
+ f:status:
+ .:
+ f:conditions:
+ f:observedGeneration:
+ f:phase:
+ Manager: kubedb-ops-manager
+ Operation: Update
+ Subresource: status
+ Time: 2024-10-25T09:24:35Z
+ Resource Version: 221378
+ UID: 2407cfa7-8d3b-463e-abf7-1910249009bd
+Spec:
+ Apply: IfReady
+ Database Ref:
+ Name: druid-cluster
+ Type: VolumeExpansion
+ Volume Expansion:
+ Historicals: 2Gi
+ Middle Managers: 2Gi
+ Mode: Offline
+Status:
+ Conditions:
+ Last Transition Time: 2024-10-25T09:22:02Z
+ Message: Druid ops-request has started to expand volume of druid nodes.
+ Observed Generation: 1
+ Reason: VolumeExpansion
+ Status: True
+ Type: VolumeExpansion
+ Last Transition Time: 2024-10-25T09:22:10Z
+ Message: get pet set; ConditionStatus:True
+ Observed Generation: 1
+ Status: True
+ Type: GetPetSet
+ Last Transition Time: 2024-10-25T09:22:10Z
+ Message: is pet set deleted; ConditionStatus:True
+ Observed Generation: 1
+ Status: True
+ Type: IsPetSetDeleted
+ Last Transition Time: 2024-10-25T09:22:30Z
+ Message: successfully deleted the petSets with orphan propagation policy
+ Observed Generation: 1
+ Reason: OrphanPetSetPods
+ Status: True
+ Type: OrphanPetSetPods
+ Last Transition Time: 2024-10-25T09:22:35Z
+ Message: get pod; ConditionStatus:True
+ Observed Generation: 1
+ Status: True
+ Type: GetPod
+ Last Transition Time: 2024-10-25T09:22:35Z
+ Message: is ops req patched; ConditionStatus:True
+ Observed Generation: 1
+ Status: True
+ Type: IsOpsReqPatched
+ Last Transition Time: 2024-10-25T09:22:35Z
+ Message: create pod; ConditionStatus:True
+ Observed Generation: 1
+ Status: True
+ Type: CreatePod
+ Last Transition Time: 2024-10-25T09:22:40Z
+ Message: get pvc; ConditionStatus:True
+ Observed Generation: 1
+ Status: True
+ Type: GetPvc
+ Last Transition Time: 2024-10-25T09:22:40Z
+ Message: is pvc patched; ConditionStatus:True
+ Observed Generation: 1
+ Status: True
+ Type: IsPvcPatched
+ Last Transition Time: 2024-10-25T09:23:50Z
+ Message: compare storage; ConditionStatus:True
+ Observed Generation: 1
+ Status: True
+ Type: CompareStorage
+ Last Transition Time: 2024-10-25T09:23:00Z
+ Message: create; ConditionStatus:True
+ Observed Generation: 1
+ Status: True
+ Type: Create
+ Last Transition Time: 2024-10-25T09:23:08Z
+ Message: is druid running; ConditionStatus:False
+ Observed Generation: 1
+ Status: False
+ Type: IsDruidRunning
+ Last Transition Time: 2024-10-25T09:23:20Z
+ Message: successfully updated middleManagers node PVC sizes
+ Observed Generation: 1
+ Reason: UpdateMiddleManagersNodePVCs
+ Status: True
+ Type: UpdateMiddleManagersNodePVCs
+ Last Transition Time: 2024-10-25T09:24:15Z
+ Message: successfully updated historicals node PVC sizes
+ Observed Generation: 1
+ Reason: UpdateHistoricalsNodePVCs
+ Status: True
+ Type: UpdateHistoricalsNodePVCs
+ Last Transition Time: 2024-10-25T09:24:30Z
+ Message: successfully reconciled the Druid resources
+ Observed Generation: 1
+ Reason: UpdatePetSets
+ Status: True
+ Type: UpdatePetSets
+ Last Transition Time: 2024-10-25T09:24:35Z
+ Message: PetSet is recreated
+ Observed Generation: 1
+ Reason: ReadyPetSets
+ Status: True
+ Type: ReadyPetSets
+ Last Transition Time: 2024-10-25T09:24:35Z
+ Message: Successfully completed volumeExpansion for Druid
+ Observed Generation: 1
+ Reason: Successful
+ Status: True
+ Type: Successful
+ Observed Generation: 1
+ Phase: Successful
+Events:
+ Type Reason Age From Message
+ ---- ------ ---- ---- -------
+ Normal Starting 10m KubeDB Ops-manager Operator Start processing for DruidOpsRequest: demo/dr-volume-exp
+ Normal Starting 10m KubeDB Ops-manager Operator Pausing Druid databse: demo/druid-cluster
+ Normal Successful 10m KubeDB Ops-manager Operator Successfully paused Druid database: demo/druid-cluster for DruidOpsRequest: dr-volume-exp
+ Warning get pet set; ConditionStatus:True 10m KubeDB Ops-manager Operator get pet set; ConditionStatus:True
+ Warning is pet set deleted; ConditionStatus:True 10m KubeDB Ops-manager Operator is pet set deleted; ConditionStatus:True
+ Warning get pet set; ConditionStatus:True 10m KubeDB Ops-manager Operator get pet set; ConditionStatus:True
+ Warning get pet set; ConditionStatus:True 10m KubeDB Ops-manager Operator get pet set; ConditionStatus:True
+ Warning is pet set deleted; ConditionStatus:True 10m KubeDB Ops-manager Operator is pet set deleted; ConditionStatus:True
+ Warning get pet set; ConditionStatus:True 10m KubeDB Ops-manager Operator get pet set; ConditionStatus:True
+ Normal OrphanPetSetPods 9m59s KubeDB Ops-manager Operator successfully deleted the petSets with orphan propagation policy
+ Warning get pod; ConditionStatus:True 9m54s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning is ops req patched; ConditionStatus:True 9m54s KubeDB Ops-manager Operator is ops req patched; ConditionStatus:True
+ Warning create pod; ConditionStatus:True 9m54s KubeDB Ops-manager Operator create pod; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 9m49s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 9m49s KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning is pvc patched; ConditionStatus:True 9m49s KubeDB Ops-manager Operator is pvc patched; ConditionStatus:True
+ Warning compare storage; ConditionStatus:False 9m49s KubeDB Ops-manager Operator compare storage; ConditionStatus:False
+ Warning get pod; ConditionStatus:True 9m44s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 9m44s KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 9m39s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 9m39s KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 9m34s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 9m34s KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 9m29s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 9m29s KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning compare storage; ConditionStatus:True 9m29s KubeDB Ops-manager Operator compare storage; ConditionStatus:True
+ Warning create; ConditionStatus:True 9m29s KubeDB Ops-manager Operator create; ConditionStatus:True
+ Warning is ops req patched; ConditionStatus:True 9m29s KubeDB Ops-manager Operator is ops req patched; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 9m24s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning is druid running; ConditionStatus:False 9m21s KubeDB Ops-manager Operator is druid running; ConditionStatus:False
+ Warning get pod; ConditionStatus:True 9m19s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 9m14s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Normal UpdateMiddleManagersNodePVCs 9m9s KubeDB Ops-manager Operator successfully updated middleManagers node PVC sizes
+ Warning get pod; ConditionStatus:True 9m4s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning is ops req patched; ConditionStatus:True 9m4s KubeDB Ops-manager Operator is ops req patched; ConditionStatus:True
+ Warning create pod; ConditionStatus:True 9m4s KubeDB Ops-manager Operator create pod; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 8m59s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 8m59s KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning is pvc patched; ConditionStatus:True 8m59s KubeDB Ops-manager Operator is pvc patched; ConditionStatus:True
+ Warning compare storage; ConditionStatus:False 8m59s KubeDB Ops-manager Operator compare storage; ConditionStatus:False
+ Warning get pod; ConditionStatus:True 8m54s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 8m54s KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 8m49s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 8m49s KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 8m44s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 8m44s KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 8m39s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 8m39s KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning compare storage; ConditionStatus:True 8m39s KubeDB Ops-manager Operator compare storage; ConditionStatus:True
+ Warning create; ConditionStatus:True 8m39s KubeDB Ops-manager Operator create; ConditionStatus:True
+ Warning is ops req patched; ConditionStatus:True 8m39s KubeDB Ops-manager Operator is ops req patched; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 8m34s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning is druid running; ConditionStatus:False 8m31s KubeDB Ops-manager Operator is druid running; ConditionStatus:False
+ Warning get pod; ConditionStatus:True 8m29s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 8m24s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 8m19s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Normal UpdateHistoricalsNodePVCs 8m14s KubeDB Ops-manager Operator successfully updated historicals node PVC sizes
+ Normal UpdatePetSets 7m59s KubeDB Ops-manager Operator successfully reconciled the Druid resources
+ Warning get pet set; ConditionStatus:True 7m54s KubeDB Ops-manager Operator get pet set; ConditionStatus:True
+ Warning get pet set; ConditionStatus:True 7m54s KubeDB Ops-manager Operator get pet set; ConditionStatus:True
+ Normal ReadyPetSets 7m54s KubeDB Ops-manager Operator PetSet is recreated
+ Normal Starting 7m54s KubeDB Ops-manager Operator Resuming Druid database: demo/druid-cluster
+ Normal Successful 7m54s KubeDB Ops-manager Operator Successfully resumed Druid database: demo/druid-cluster for DruidOpsRequest: dr-volume-exp
+```
+
+Now, we are going to verify from the `PetSet` and the `Persistent Volumes` whether the volume of the database has expanded to meet the desired state. Let's check,
+
+```bash
+$ kubectl get petset -n demo druid-cluster-historicals -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
+"2Gi"
+
+$ kubectl get petset -n demo druid-cluster-middleManagers -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
+"2Gi"
+
+$ kubectl get pv -n demo
+NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS VOLUMEATTRIBUTESCLASS REASON AGE
+pvc-0bf49077-1c7a-4943-bb17-1dffd1626dcd 2Gi RWO Delete Bound demo/druid-cluster-segment-cache-druid-cluster-historicals-0 longhorn 23m
+pvc-59ed4914-53b3-4f18-a6aa-7699c2b738e2 2Gi RWO Delete Bound demo/druid-cluster-base-task-dir-druid-cluster-middlemanagers-0 longhorn 23m
+```
+
+The above output verifies that we have successfully expanded the volume of the Druid cluster.
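+
+As a final sanity check, you can confirm that the operator has written the new size back to the `Druid` object itself. The field path below is taken from the `Druid` CR shown earlier in this guide and should print `"2Gi"`:
+
+```bash
+$ kubectl get dr -n demo druid-cluster -o json | jq '.spec.topology.historicals.storage.resources.requests.storage'
+```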
+
+## Cleaning Up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete druidopsrequest -n demo dr-volume-exp
+kubectl delete dr -n demo druid-cluster
+kubectl delete ns demo
+```
+
+## Next Steps
+
+- Detail concepts of [Druid object](/docs/guides/druid/concepts/druid.md).
+- Different Druid topology clustering modes [here](/docs/guides/druid/clustering/_index.md).
+- Monitor your Druid database with KubeDB using [out-of-the-box Prometheus operator](/docs/guides/druid/monitoring/using-prometheus-operator.md).
+
+[//]: # (- Monitor your Druid database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/guides/druid/monitoring/using-builtin-prometheus.md).)
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).
diff --git a/docs/guides/druid/volume-expansion/images/druid-volume-expansion.png b/docs/guides/druid/volume-expansion/images/druid-volume-expansion.png
new file mode 100644
index 0000000000..9e2b77d6cd
Binary files /dev/null and b/docs/guides/druid/volume-expansion/images/druid-volume-expansion.png differ
diff --git a/docs/guides/druid/volume-expansion/overview.md b/docs/guides/druid/volume-expansion/overview.md
new file mode 100644
index 0000000000..a9290612ff
--- /dev/null
+++ b/docs/guides/druid/volume-expansion/overview.md
@@ -0,0 +1,56 @@
+---
+title: Druid Volume Expansion Overview
+menu:
+ docs_{{ .version }}:
+ identifier: guides-druid-volume-expansion-overview
+ name: Overview
+ parent: guides-druid-volume-expansion
+ weight: 10
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Druid Volume Expansion
+
+This guide will give an overview of how the KubeDB Ops-manager operator expands the volume of the various components of a `Druid` cluster (Combined and Topology).
+
+## Before You Begin
+
+- You should be familiar with the following `KubeDB` concepts:
+ - [Druid](/docs/guides/druid/concepts/druid.md)
+ - [DruidOpsRequest](/docs/guides/druid/concepts/druidopsrequest.md)
+
+## How Volume Expansion Process Works
+
+The following diagram shows how the KubeDB Ops-manager operator expands the volumes of `Druid` database components. Open the image in a new tab to see the enlarged version.
+
+
+
+The Volume Expansion process consists of the following steps:
+
+1. At first, a user creates a `Druid` Custom Resource (CR).
+
+2. `KubeDB` Provisioner operator watches the `Druid` CR.
+
+3. When the operator finds a `Druid` CR, it creates the required number of `PetSets` and related resources such as secrets, services, etc.
+
+4. Each `PetSet` creates a Persistent Volume according to the Volume Claim Template provided in the `PetSet` configuration. This Persistent Volume will be expanded by the `KubeDB` Ops-manager operator.
+
+5. Then, in order to expand the volume of the Druid data components (i.e. historicals and middleManagers) of the `Druid` cluster, the user creates a `DruidOpsRequest` CR with the desired information.
+
+6. `KubeDB` Ops-manager operator watches the `DruidOpsRequest` CR.
+
+7. When it finds a `DruidOpsRequest` CR, it halts the `Druid` object which is referred to in the `DruidOpsRequest`, so the `KubeDB` Provisioner operator doesn't perform any operations on the `Druid` object during the volume expansion process.
+
+8. Then the `KubeDB` Ops-manager operator will expand the persistent volume to reach the expected size defined in the `DruidOpsRequest` CR.
+
+9. After the successful Volume Expansion of the related PetSet Pods, the `KubeDB` Ops-manager operator updates the new volume size in the `Druid` object to reflect the updated state.
+
+10. After the successful Volume Expansion of the `Druid` components, the `KubeDB` Ops-manager operator resumes the `Druid` object so that the `KubeDB` Provisioner operator resumes its usual operations.
+
+In the next docs, we are going to show a step-by-step guide on Volume Expansion of various Druid database components using `DruidOpsRequest` CRD.
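+
+While step 8 is in progress, a simple way to follow the resize from the Kubernetes side is to watch the claims; a minimal sketch (the specific claim name comes from the topology guide's example cluster):
+
+```bash
+$ kubectl get pvc -n demo -w
+
+# Inspect resize progress/conditions for a specific claim
+$ kubectl describe pvc -n demo druid-cluster-segment-cache-druid-cluster-historicals-0
+```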
diff --git a/docs/guides/druid/volume-expansion/yamls/deep-storage-config.yaml b/docs/guides/druid/volume-expansion/yamls/deep-storage-config.yaml
new file mode 100644
index 0000000000..3612595828
--- /dev/null
+++ b/docs/guides/druid/volume-expansion/yamls/deep-storage-config.yaml
@@ -0,0 +1,16 @@
+apiVersion: v1
+kind: Secret
+metadata:
+ name: deep-storage-config
+ namespace: demo
+stringData:
+ druid.storage.type: "s3"
+ druid.storage.bucket: "druid"
+ druid.storage.baseKey: "druid/segments"
+ druid.s3.accessKey: "minio"
+ druid.s3.secretKey: "minio123"
+ druid.s3.protocol: "http"
+ druid.s3.enablePathStyleAccess: "true"
+ druid.s3.endpoint.signingRegion: "us-east-1"
+ druid.s3.endpoint.url: "http://myminio-hl.demo.svc.cluster.local:9000/"
+
diff --git a/docs/guides/druid/volume-expansion/yamls/druid-cluster.yaml b/docs/guides/druid/volume-expansion/yamls/druid-cluster.yaml
new file mode 100644
index 0000000000..cb8e321237
--- /dev/null
+++ b/docs/guides/druid/volume-expansion/yamls/druid-cluster.yaml
@@ -0,0 +1,34 @@
+apiVersion: kubedb.com/v1alpha2
+kind: Druid
+metadata:
+ name: druid-cluster
+ namespace: demo
+spec:
+ version: 28.0.1
+ deepStorage:
+ type: s3
+ configSecret:
+ name: deep-storage-config
+ topology:
+ historicals:
+ replicas: 1
+ storage:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
+ storageType: Durable
+ middleManagers:
+ replicas: 1
+ storage:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
+ storageType: Durable
+ routers:
+ replicas: 1
+ deletionPolicy: Delete
+
diff --git a/docs/guides/druid/volume-expansion/yamls/volume-expansion-ops.yaml b/docs/guides/druid/volume-expansion/yamls/volume-expansion-ops.yaml
new file mode 100644
index 0000000000..b5ad80546b
--- /dev/null
+++ b/docs/guides/druid/volume-expansion/yamls/volume-expansion-ops.yaml
@@ -0,0 +1,13 @@
+apiVersion: ops.kubedb.com/v1alpha1
+kind: DruidOpsRequest
+metadata:
+ name: dr-volume-exp
+ namespace: demo
+spec:
+ type: VolumeExpansion
+ databaseRef:
+ name: druid-cluster
+ volumeExpansion:
+ historicals: 2Gi
+ middleManagers: 2Gi
+ mode: Offline
\ No newline at end of file
diff --git a/docs/guides/elasticsearch/README.md b/docs/guides/elasticsearch/README.md
index 4dcbfd21d5..d71a8d7d5f 100644
--- a/docs/guides/elasticsearch/README.md
+++ b/docs/guides/elasticsearch/README.md
@@ -17,28 +17,27 @@ aliases:
## Elasticsearch Features
-| Features | Community | Enterprise |
-|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| :----------: |:----------:|
-| Combined Cluster (n nodes with master,data,ingest: ture; n >= 1 ) | ✓ | ✓ |
-| Topology Cluster (n master, m data, x ingest nodes; n,m,x >= 1 ) | ✓ | ✓ |
-| Hot-Warm-Cold Topology Cluster (a hot, b warm, c cold nodes; a,b,c >= 1 ) | ✓ | ✓ |
-| TLS: Add, Remove, Update, Rotate ( [Cert Manager](https://cert-manager.io/docs/) ) | ✗ | ✓ |
-| Automated Version Update | ✗ | ✓ |
-| Automatic Vertical Scaling | ✗ | ✓ |
-| Automated Horizontal Scaling | ✗ | ✓ |
-| Automated Volume Expansion | ✗ | ✓ |
-| Backup/Recovery: Instant, Scheduled ( [Stash](https://stash.run/) ) | ✓ | ✓ |
-| Dashboard ( Kibana , Opensearch-Dashboards ) | ✓ | ✓ |
-| Grafana Dashboards | ✗ | ✓ |
-| Initialization from Snapshot ( [Stash](https://stash.run/) ) | ✓ | ✓ |
-| Authentication ( [OpensSearch](https://opensearch.org/) / [X-Pack](https://www.elastic.co/guide/en/elasticsearch/reference/7.9/setup-xpack.html) / [OpenDistro](https://opendistro.github.io/for-elasticsearch-docs/) / [Search Guard](https://docs.search-guard.com/latest/) ) | ✓ | ✓ |
-| Authorization ( [OpensSearch](https://opensearch.org/) / [X-Pack](https://www.elastic.co/guide/en/elasticsearch/reference/7.9/setup-xpack.html) / [OpenDistro](https://opendistro.github.io/for-elasticsearch-docs/) / [Search Guard](https://docs.search-guard.com/latest/) ) | ✓ | ✓ |
-| Persistent Volume | ✓ | ✓ |
-| Exports Prometheus Matrices | ✓ | ✓ |
-| Custom Configuration | ✓ | ✓ |
-| Using Custom Docker Image | ✓ | ✓ |
-| Initialization From Script | ✗ | ✗ |
-| Autoscaling (vertically) | ✗ | ✓ |
+| Features | Availability |
+|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------:|
+| Combined Cluster (n nodes with master,data,ingest: ture; n >= 1 ) | ✓ |
+| Topology Cluster (n master, m data, x ingest nodes; n,m,x >= 1 ) | ✓ |
+| Hot-Warm-Cold Topology Cluster (a hot, b warm, c cold nodes; a,b,c >= 1 ) | ✓ |
+| TLS: Add, Remove, Update, Rotate ( [Cert Manager](https://cert-manager.io/docs/) ) | ✓ |
+| Automated Version Update | ✓ |
+| Automatic Vertical Scaling | ✓ |
+| Automated Horizontal Scaling | ✓ |
+| Automated Volume Expansion | ✓ |
+| Backup/Recovery: Instant, Scheduled ( [Stash](https://stash.run/) ) | ✓ |
+| Dashboard ( Kibana , Opensearch-Dashboards ) | ✓ |
+| Grafana Dashboards | ✓ |
+| Initialization from Snapshot ( [Stash](https://stash.run/) ) | ✓ |
+| Authentication ( [OpensSearch](https://opensearch.org/) / [X-Pack](https://www.elastic.co/guide/en/elasticsearch/reference/7.9/setup-xpack.html) / [OpenDistro](https://opendistro.github.io/for-elasticsearch-docs/) / [Search Guard](https://docs.search-guard.com/latest/) ) | ✓ |
+| Authorization ( [OpensSearch](https://opensearch.org/) / [X-Pack](https://www.elastic.co/guide/en/elasticsearch/reference/7.9/setup-xpack.html) / [OpenDistro](https://opendistro.github.io/for-elasticsearch-docs/) / [Search Guard](https://docs.search-guard.com/latest/) ) | ✓ |
+| Persistent Volume | ✓ |
+| Exports Prometheus Matrices | ✓ |
+| Custom Configuration | ✓ |
+| Using Custom Docker Image | ✓ |
+| Autoscaling (vertically) | ✓ |
## Lifecycle of Elasticsearch Object
@@ -60,31 +59,36 @@ KubeDB supports `Elasticsearch` provided by Elastic with `xpack` auth plugin. `O
diff --git a/docs/guides/mongodb/README.md b/docs/guides/mongodb/README.md
index 1d6e0d1f2f..7577452979 100644
--- a/docs/guides/mongodb/README.md
+++ b/docs/guides/mongodb/README.md
@@ -18,29 +18,29 @@ aliases:
## Supported MongoDB Features
-| Features | Community | Enterprise |
-|------------------------------------------------------------------------------------|:---------:|:----------:|
-| Clustering - Sharding | ✓ | ✓ |
-| Clustering - Replication | ✓ | ✓ |
-| Custom Configuration | ✓ | ✓ |
-| Using Custom Docker Image | ✓ | ✓ |
-| Initialization From Script (\*.js and/or \*.sh) | ✓ | ✓ |
-| Initializing from Snapshot ( [Stash](https://stash.run/) ) | ✓ | ✓ |
-| Authentication & Autorization | ✓ | ✓ |
-| Arbiter support | ✓ | ✓ |
-| Persistent Volume | ✓ | ✓ |
-| Instant Backup | ✓ | ✓ |
-| Scheduled Backup | ✓ | ✓ |
-| Builtin Prometheus Discovery | ✓ | ✓ |
-| Using Prometheus operator | ✓ | ✓ |
-| Automated Version Update | ✗ | ✓ |
-| Automatic Vertical Scaling | ✗ | ✓ |
-| Automated Horizontal Scaling | ✗ | ✓ |
-| Automated db-configure Reconfiguration | ✗ | ✓ |
-| TLS: Add, Remove, Update, Rotate ( [Cert Manager](https://cert-manager.io/docs/) ) | ✗ | ✓ |
-| Automated Reprovision | ✗ | ✓ |
-| Automated Volume Expansion | ✗ | ✓ |
-| Autoscaling (vertically) | ✗ | ✓ |
+| Features | Availability |
+|------------------------------------------------------------------------------------|:------------:|
+| Clustering - Sharding | ✓ |
+| Clustering - Replication | ✓ |
+| Custom Configuration | ✓ |
+| Using Custom Docker Image | ✓ |
+| Initialization From Script (\*.js and/or \*.sh) | ✓ |
+| Initializing from Snapshot ( [Stash](https://stash.run/) ) | ✓ |
+| Authentication & Autorization | ✓ |
+| Arbiter support | ✓ |
+| Persistent Volume | ✓ |
+| Instant Backup | ✓ |
+| Scheduled Backup | ✓ |
+| Builtin Prometheus Discovery | ✓ |
+| Using Prometheus operator | ✓ |
+| Automated Version Update | ✓ |
+| Automatic Vertical Scaling | ✓ |
+| Automated Horizontal Scaling | ✓ |
+| Automated db-configure Reconfiguration | ✓ |
+| TLS: Add, Remove, Update, Rotate ( [Cert Manager](https://cert-manager.io/docs/) ) | ✓ |
+| Automated Reprovision | ✓ |
+| Automated Volume Expansion | ✓ |
+| Autoscaling (vertically) | ✓ |
## Life Cycle of a MongoDB Object
diff --git a/docs/guides/mongodb/backup/kubestash/auto-backup/examples/backupblueprint.yaml b/docs/guides/mongodb/backup/kubestash/auto-backup/examples/backupblueprint.yaml
index dc02d18415..22bf8d23d2 100644
--- a/docs/guides/mongodb/backup/kubestash/auto-backup/examples/backupblueprint.yaml
+++ b/docs/guides/mongodb/backup/kubestash/auto-backup/examples/backupblueprint.yaml
@@ -33,4 +33,4 @@ spec:
addon:
name: mongodb-addon
tasks:
- - name: LogicalBackup
\ No newline at end of file
+ - name: logical-backup
\ No newline at end of file
diff --git a/docs/guides/mongodb/backup/kubestash/auto-backup/index.md b/docs/guides/mongodb/backup/kubestash/auto-backup/index.md
index 519fb1e456..2a5fede487 100644
--- a/docs/guides/mongodb/backup/kubestash/auto-backup/index.md
+++ b/docs/guides/mongodb/backup/kubestash/auto-backup/index.md
@@ -169,7 +169,7 @@ spec:
addon:
name: mongodb-addon
tasks:
- - name: LogicalBackup
+ - name: logical-backup
```
Here, we define a template for `BackupConfiguration`. Notice the `backends` and `sessions` fields of `backupConfigurationTemplate` section. We have used some variables in form of `${VARIABLE_NAME}`. KubeStash will automatically resolve those variables from the database annotations information to make `BackupConfiguration` according to that databases need.
@@ -269,7 +269,7 @@ spec:
- addon:
name: mongodb-addon
tasks:
- - name: LogicalBackup
+ - name: logical-backup
failurePolicy: Fail
name: frequent
repositories:
diff --git a/docs/guides/mongodb/backup/kubestash/customization/examples/backup/passing-args.yaml b/docs/guides/mongodb/backup/kubestash/customization/examples/backup/passing-args.yaml
index c947f7428d..32271cbb29 100644
--- a/docs/guides/mongodb/backup/kubestash/customization/examples/backup/passing-args.yaml
+++ b/docs/guides/mongodb/backup/kubestash/customization/examples/backup/passing-args.yaml
@@ -33,6 +33,6 @@ spec:
addon:
name: mongodb-addon
tasks:
- - name: LogicalBackup
+ - name: logical-backup
params:
args: "--db=testdb"
\ No newline at end of file
diff --git a/docs/guides/mongodb/backup/kubestash/customization/examples/backup/resource-limit.yaml b/docs/guides/mongodb/backup/kubestash/customization/examples/backup/resource-limit.yaml
index 9cb2794841..35f4a18b0d 100644
--- a/docs/guides/mongodb/backup/kubestash/customization/examples/backup/resource-limit.yaml
+++ b/docs/guides/mongodb/backup/kubestash/customization/examples/backup/resource-limit.yaml
@@ -33,7 +33,7 @@ spec:
addon:
name: mongodb-addon
tasks:
- - name: LogicalBackup
+ - name: logical-backup
containerRuntimeSettings:
resources:
requests:
diff --git a/docs/guides/mongodb/backup/kubestash/customization/examples/backup/specific-user.yaml b/docs/guides/mongodb/backup/kubestash/customization/examples/backup/specific-user.yaml
index d3dcc848c5..f3359271ba 100644
--- a/docs/guides/mongodb/backup/kubestash/customization/examples/backup/specific-user.yaml
+++ b/docs/guides/mongodb/backup/kubestash/customization/examples/backup/specific-user.yaml
@@ -33,7 +33,7 @@ spec:
addon:
name: mongodb-addon
tasks:
- - name: LogicalBackup
+ - name: logical-backup
jobTemplate:
spec:
securityContext:
diff --git a/docs/guides/mongodb/backup/kubestash/customization/examples/restore/restore.yaml b/docs/guides/mongodb/backup/kubestash/customization/examples/restore/restore.yaml
index 786b98395c..041bf890ad 100644
--- a/docs/guides/mongodb/backup/kubestash/customization/examples/restore/restore.yaml
+++ b/docs/guides/mongodb/backup/kubestash/customization/examples/restore/restore.yaml
@@ -18,7 +18,7 @@ spec:
addon:
name: mongodb-addon
tasks:
- - name: LogicalBackupRestoress
+ - name: logical-backup-restore
jobTemplate:
spec:
securityContext:
diff --git a/docs/guides/mongodb/backup/kubestash/customization/index.md b/docs/guides/mongodb/backup/kubestash/customization/index.md
index fecda2249c..15179de9e8 100644
--- a/docs/guides/mongodb/backup/kubestash/customization/index.md
+++ b/docs/guides/mongodb/backup/kubestash/customization/index.md
@@ -61,7 +61,7 @@ spec:
addon:
name: mongodb-addon
tasks:
- - name: LogicalBackup
+ - name: logical-backup
params:
args: "--db=testdb"
```
@@ -108,7 +108,7 @@ spec:
addon:
name: mongodb-addon
tasks:
- - name: LogicalBackup
+ - name: logical-backup
jobTemplate:
spec:
securityContext:
@@ -156,7 +156,7 @@ spec:
addon:
name: mongodb-addon
tasks:
- - name: LogicalBackup
+ - name: logical-backup
containerRuntimeSettings:
resources:
requests:
@@ -196,7 +196,7 @@ spec:
addon:
name: mongodb-addon
tasks:
- - name: LogicalBackupRestoress
+ - name: logical-backup-restore
params:
args: "--db=testdb"
```
@@ -235,7 +235,7 @@ spec:
addon:
name: mongodb-addon
tasks:
- - name: LogicalBackupRestoress
+ - name: logical-backup-restore
```
>Please, do not specify multiple snapshots here. Each snapshot represents a complete backup of your database. Multiple snapshots are only usable during file/directory restore.
@@ -265,7 +265,7 @@ spec:
addon:
name: mongodb-addon
tasks:
- - name: LogicalBackupRestoress
+ - name: logical-backup-restore
jobTemplate:
spec:
securityContext:
@@ -298,7 +298,7 @@ spec:
addon:
name: mongodb-addon
tasks:
- - name: LogicalBackupRestoress
+ - name: logical-backup-restore
containerRuntimeSettings:
resources:
requests:
diff --git a/docs/guides/mongodb/backup/kubestash/logical/replicaset/examples/backupconfiguration-replicaset.yaml b/docs/guides/mongodb/backup/kubestash/logical/replicaset/examples/backupconfiguration-replicaset.yaml
index aa9ada7a11..86a7bb5089 100644
--- a/docs/guides/mongodb/backup/kubestash/logical/replicaset/examples/backupconfiguration-replicaset.yaml
+++ b/docs/guides/mongodb/backup/kubestash/logical/replicaset/examples/backupconfiguration-replicaset.yaml
@@ -33,4 +33,4 @@ spec:
addon:
name: mongodb-addon
tasks:
- - name: LogicalBackup
\ No newline at end of file
+ - name: logical-backup
\ No newline at end of file
diff --git a/docs/guides/mongodb/backup/kubestash/logical/replicaset/examples/restoresession-replicaset.yaml b/docs/guides/mongodb/backup/kubestash/logical/replicaset/examples/restoresession-replicaset.yaml
index 56920b806c..d814bde882 100644
--- a/docs/guides/mongodb/backup/kubestash/logical/replicaset/examples/restoresession-replicaset.yaml
+++ b/docs/guides/mongodb/backup/kubestash/logical/replicaset/examples/restoresession-replicaset.yaml
@@ -18,4 +18,4 @@ spec:
addon:
name: mongodb-addon
tasks:
- - name: LogicalBackupRestore
\ No newline at end of file
+ - name: logical-backup-restore
\ No newline at end of file
diff --git a/docs/guides/mongodb/backup/kubestash/logical/replicaset/index.md b/docs/guides/mongodb/backup/kubestash/logical/replicaset/index.md
index 48546f1116..babd8cfa06 100644
--- a/docs/guides/mongodb/backup/kubestash/logical/replicaset/index.md
+++ b/docs/guides/mongodb/backup/kubestash/logical/replicaset/index.md
@@ -300,7 +300,7 @@ spec:
addon:
name: mongodb-addon
tasks:
- - name: LogicalBackup
+ - name: logical-backup
```
Here,
@@ -508,7 +508,7 @@ spec:
addon:
name: mongodb-addon
tasks:
- - name: LogicalBackupRestore
+ - name: logical-backup-restore
```
Here,
diff --git a/docs/guides/mongodb/backup/kubestash/logical/sharding/examples/backupconfiguration-sharding.yaml b/docs/guides/mongodb/backup/kubestash/logical/sharding/examples/backupconfiguration-sharding.yaml
index 27389a4bcc..4c65fa04fb 100644
--- a/docs/guides/mongodb/backup/kubestash/logical/sharding/examples/backupconfiguration-sharding.yaml
+++ b/docs/guides/mongodb/backup/kubestash/logical/sharding/examples/backupconfiguration-sharding.yaml
@@ -33,4 +33,4 @@ spec:
addon:
name: mongodb-addon
tasks:
- - name: LogicalBackup
\ No newline at end of file
+ - name: logical-backup
\ No newline at end of file
diff --git a/docs/guides/mongodb/backup/kubestash/logical/sharding/examples/restoresession-sharding.yaml b/docs/guides/mongodb/backup/kubestash/logical/sharding/examples/restoresession-sharding.yaml
index d0d8614d7d..5d01e36384 100644
--- a/docs/guides/mongodb/backup/kubestash/logical/sharding/examples/restoresession-sharding.yaml
+++ b/docs/guides/mongodb/backup/kubestash/logical/sharding/examples/restoresession-sharding.yaml
@@ -18,4 +18,4 @@ spec:
addon:
name: mongodb-addon
tasks:
- - name: LogicalBackupRestore
\ No newline at end of file
+ - name: logical-backup-restore
\ No newline at end of file
diff --git a/docs/guides/mongodb/backup/kubestash/logical/sharding/index.md b/docs/guides/mongodb/backup/kubestash/logical/sharding/index.md
index 3c6c931cee..06477237d3 100644
--- a/docs/guides/mongodb/backup/kubestash/logical/sharding/index.md
+++ b/docs/guides/mongodb/backup/kubestash/logical/sharding/index.md
@@ -308,7 +308,7 @@ spec:
addon:
name: mongodb-addon
tasks:
- - name: LogicalBackup
+ - name: logical-backup
```
Here,
@@ -523,7 +523,7 @@ spec:
addon:
name: mongodb-addon
tasks:
- - name: LogicalBackupRestore
+ - name: logical-backup-restore
```
Here,
diff --git a/docs/guides/mongodb/backup/kubestash/logical/standalone/examples/backupconfiguration.yaml b/docs/guides/mongodb/backup/kubestash/logical/standalone/examples/backupconfiguration.yaml
index f4b7178ccb..395a8b05ba 100644
--- a/docs/guides/mongodb/backup/kubestash/logical/standalone/examples/backupconfiguration.yaml
+++ b/docs/guides/mongodb/backup/kubestash/logical/standalone/examples/backupconfiguration.yaml
@@ -33,4 +33,4 @@ spec:
addon:
name: mongodb-addon
tasks:
- - name: LogicalBackup
\ No newline at end of file
+ - name: logical-backup
\ No newline at end of file
diff --git a/docs/guides/mongodb/backup/kubestash/logical/standalone/examples/restoresession.yaml b/docs/guides/mongodb/backup/kubestash/logical/standalone/examples/restoresession.yaml
index 1020635be4..decae59096 100644
--- a/docs/guides/mongodb/backup/kubestash/logical/standalone/examples/restoresession.yaml
+++ b/docs/guides/mongodb/backup/kubestash/logical/standalone/examples/restoresession.yaml
@@ -18,4 +18,4 @@ spec:
addon:
name: mongodb-addon
tasks:
- - name: LogicalBackupRestore
\ No newline at end of file
+ - name: logical-backup-restore
\ No newline at end of file
diff --git a/docs/guides/mongodb/backup/kubestash/logical/standalone/index.md b/docs/guides/mongodb/backup/kubestash/logical/standalone/index.md
index a0390db610..7f055465b3 100644
--- a/docs/guides/mongodb/backup/kubestash/logical/standalone/index.md
+++ b/docs/guides/mongodb/backup/kubestash/logical/standalone/index.md
@@ -294,7 +294,7 @@ spec:
addon:
name: mongodb-addon
tasks:
- - name: LogicalBackup
+ - name: logical-backup
```
Here,
@@ -501,7 +501,7 @@ spec:
addon:
name: mongodb-addon
tasks:
- - name: LogicalBackupRestore
+ - name: logical-backup-restore
```
Here,
diff --git a/docs/guides/mssqlserver/autoscaler/_index.md b/docs/guides/mssqlserver/autoscaler/_index.md
new file mode 100644
index 0000000000..b6fdc08188
--- /dev/null
+++ b/docs/guides/mssqlserver/autoscaler/_index.md
@@ -0,0 +1,10 @@
+---
+title: Autoscaling
+menu:
+ docs_{{ .version }}:
+ identifier: ms-autoscaling
+ name: Autoscaling
+ parent: guides-mssqlserver
+ weight: 46
+menu_name: docs_{{ .version }}
+---
diff --git a/docs/guides/mssqlserver/autoscaler/compute/_index.md b/docs/guides/mssqlserver/autoscaler/compute/_index.md
new file mode 100644
index 0000000000..cea75aa0c6
--- /dev/null
+++ b/docs/guides/mssqlserver/autoscaler/compute/_index.md
@@ -0,0 +1,10 @@
+---
+title: Compute Autoscaling
+menu:
+ docs_{{ .version }}:
+ identifier: ms-compute-autoscaling
+ name: Compute Autoscaling
+ parent: ms-autoscaling
+ weight: 10
+menu_name: docs_{{ .version }}
+---
diff --git a/docs/guides/mssqlserver/autoscaler/compute/cluster.md b/docs/guides/mssqlserver/autoscaler/compute/cluster.md
new file mode 100644
index 0000000000..bfc19f8c4f
--- /dev/null
+++ b/docs/guides/mssqlserver/autoscaler/compute/cluster.md
@@ -0,0 +1,602 @@
+---
+title: MSSQLServer Availability Group Cluster Autoscaling
+menu:
+ docs_{{ .version }}:
+ identifier: ms-autoscaling-cluster
+ name: Cluster
+ parent: ms-compute-autoscaling
+ weight: 20
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Autoscaling the Compute Resource of a MSSQLServer Availability Group Cluster Database
+
+This guide will show you how to use `KubeDB` to autoscale compute resources, i.e. CPU and memory, of a MSSQLServer cluster database.
+
+## Before You Begin
+
+
+- You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Now, install the KubeDB cli on your workstation and the KubeDB operator in your cluster following the steps [here](/docs/setup/README.md). Make sure to install with the helm command including `--set global.featureGates.MSSQLServer=true` to ensure MSSQLServer CRD installation.
+
+- To configure TLS/SSL in `MSSQLServer`, `KubeDB` uses `cert-manager` to issue certificates. So first you have to make sure that the cluster has `cert-manager` installed. To install `cert-manager` in your cluster, follow the steps [here](https://cert-manager.io/docs/installation/kubernetes/).
+
+- Install `Metrics Server` from [here](https://github.com/kubernetes-sigs/metrics-server#installation)
+
+- You should be familiar with the following `KubeDB` concepts:
+ - [MSSQLServer](/docs/guides/mssqlserver/concepts/mssqlserver.md)
+ - [MSSQLServerOpsRequest](/docs/guides/mssqlserver/concepts/opsrequest.md)
+ - [Compute Resource Autoscaling Overview](/docs/guides/mssqlserver/autoscaler/compute/overview.md)
+ - [MSSQLServerAutoscaler](/docs/guides/mssqlserver/concepts/autoscaler.md)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+## Autoscaling of MSSQLServer Availability Group Cluster
+
+Here, we are going to deploy a `MSSQLServer` Availability Group Cluster using a supported version by `KubeDB` operator. Then we are going to apply `MSSQLServerAutoscaler` to set up autoscaling.
+
+#### Deploy MSSQLServer Availability Group Cluster
+
+First, an issuer needs to be created, even if TLS is not enabled for SQL Server. The issuer will be used to configure the TLS-enabled Wal-G proxy server, which is required for the SQL Server backup and restore operations.
+
+### Create Issuer/ClusterIssuer
+
+Now, we are going to create an example `Issuer` that will be used throughout the duration of this tutorial. Alternatively, you can follow this [cert-manager tutorial](https://cert-manager.io/docs/configuration/ca/) to create your own `Issuer`. By following the below steps, we are going to create our desired issuer,
+
+- Start off by generating our ca-certificates using openssl,
+```bash
+openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ./ca.key -out ./ca.crt -subj "/CN=MSSQLServer/O=kubedb"
+```
+- Create a secret using the certificate files we have just generated,
+```bash
+$ kubectl create secret tls mssqlserver-ca --cert=ca.crt --key=ca.key --namespace=demo
+secret/mssqlserver-ca created
+```
+Now, we are going to create an `Issuer` using the `mssqlserver-ca` secret that contains the ca-certificate we have just created. Below is the YAML of the `Issuer` CR that we are going to create,
+
+```yaml
+apiVersion: cert-manager.io/v1
+kind: Issuer
+metadata:
+ name: mssqlserver-ca-issuer
+ namespace: demo
+spec:
+ ca:
+ secretName: mssqlserver-ca
+```
+
+Let’s create the `Issuer` CR we have shown above,
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mssqlserver/ag-cluster/mssqlserver-ca-issuer.yaml
+issuer.cert-manager.io/mssqlserver-ca-issuer created
+```
+
+In this section, we are going to deploy a MSSQLServer Availability Group Cluster with version `2022-cu12`. Then, in the next section we will set up autoscaling for this database using `MSSQLServerAutoscaler` CRD. Below is the YAML of the `MSSQLServer` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MSSQLServer
+metadata:
+ name: mssqlserver-ag-cluster
+ namespace: demo
+spec:
+ version: "2022-cu12"
+ replicas: 3
+ topology:
+ mode: AvailabilityGroup
+ availabilityGroup:
+ databases:
+ - agdb1
+ - agdb2
+ tls:
+ issuerRef:
+ name: mssqlserver-ca-issuer
+ kind: Issuer
+ apiGroup: "cert-manager.io"
+ clientTLS: false
+ podTemplate:
+ spec:
+ containers:
+ - name: mssql
+ env:
+ - name: ACCEPT_EULA
+ value: "Y"
+ - name: MSSQL_PID
+ value: Evaluation # Change it
+ resources:
+ requests:
+ cpu: "500m"
+ memory: "1.5Gi"
+ limits:
+ cpu: "600m"
+ memory: "1.6Gi"
+ storageType: Durable
+ storage:
+ storageClassName: "longhorn"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
+ deletionPolicy: WipeOut
+```
+
+Let's create the `MSSQLServer` CRO we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mssqlserver/autoscaler/compute/mssqlserver-ag-cluster.yaml
+mssqlserver.kubedb.com/mssqlserver-ag-cluster created
+```
+
+Now, wait until `mssqlserver-ag-cluster` has status `Ready`. i.e,
+
+```bash
+$ kubectl get mssqlserver -n demo
+NAME VERSION STATUS AGE
+mssqlserver-ag-cluster 2022-cu12 Ready 8m27s
+```
+
+Let's check the MSSQLServer resources,
+```bash
+$ kubectl get ms -n demo mssqlserver-ag-cluster -o json | jq '.spec.podTemplate.spec.containers[] | select(.name == "mssql") | .resources'
+{
+ "limits": {
+ "cpu": "600m",
+ "memory": "1717986918400m"
+ },
+ "requests": {
+ "cpu": "500m",
+ "memory": "1536Mi"
+ }
+}
+```
+
+
+Let's check the Pod's container resources. There are two containers here; the first one (index 0), named `mssql`, is the main container of the MSSQLServer.
+
+```bash
+$ kubectl get pod -n demo mssqlserver-ag-cluster-0 -o json | jq '.spec.containers[0].resources'
+{
+ "limits": {
+ "cpu": "600m",
+ "memory": "1717986918400m"
+ },
+ "requests": {
+ "cpu": "500m",
+ "memory": "1536Mi"
+ }
+}
+$ kubectl get pod -n demo mssqlserver-ag-cluster-1 -o json | jq '.spec.containers[0].resources'
+{
+ "limits": {
+ "cpu": "600m",
+ "memory": "1717986918400m"
+ },
+ "requests": {
+ "cpu": "500m",
+ "memory": "1536Mi"
+ }
+}
+$ kubectl get pod -n demo mssqlserver-ag-cluster-2 -o json | jq '.spec.containers[0].resources'
+{
+ "limits": {
+ "cpu": "600m",
+ "memory": "1717986918400m"
+ },
+ "requests": {
+ "cpu": "500m",
+ "memory": "1536Mi"
+ }
+}
+```
+
+
+You can see from the above outputs that the resources are the same as the ones we assigned while deploying the MSSQLServer.
+
+We are now ready to apply the `MSSQLServerAutoscaler` CRO to set up autoscaling for this database.
+
+### Compute Resource Autoscaling
+
+Here, we are going to set up compute resource autoscaling using a `MSSQLServerAutoscaler` Object.
+
+#### Create MSSQLServerAutoscaler Object
+
+In order to set up compute resource autoscaling for this database cluster, we have to create a `MSSQLServerAutoscaler` CRO with our desired configuration. Below is the YAML of the `MSSQLServerAutoscaler` object that we are going to create,
+
+```yaml
+apiVersion: autoscaling.kubedb.com/v1alpha1
+kind: MSSQLServerAutoscaler
+metadata:
+ name: ms-as-compute
+ namespace: demo
+spec:
+ databaseRef:
+ name: mssqlserver-ag-cluster
+ opsRequestOptions:
+ timeout: 5m
+ apply: IfReady
+ compute:
+ mssqlserver:
+ trigger: "On"
+ podLifeTimeThreshold: 5m
+ resourceDiffPercentage: 10
+ minAllowed:
+ cpu: 800m
+ memory: 2Gi
+ maxAllowed:
+ cpu: 1
+ memory: 3Gi
+ containerControlledValues: "RequestsAndLimits"
+ controlledResources: ["cpu", "memory"]
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing the compute resource scaling operation on the `mssqlserver-ag-cluster` database.
+- `spec.compute.mssqlserver.trigger` specifies that compute autoscaling is enabled for this database.
+- `spec.compute.mssqlserver.podLifeTimeThreshold` specifies the minimum lifetime of at least one of the pods before vertical scaling can be initiated.
+- `spec.compute.mssqlserver.resourceDiffPercentage` specifies the minimum resource difference in percentage. The default is 10%.
+  If the difference between the current and the recommended resources is less than `resourceDiffPercentage`, the autoscaler operator will skip the update (see the example below this list).
+- `spec.compute.mssqlserver.minAllowed` specifies the minimum allowed resources for the database.
+- `spec.compute.mssqlserver.maxAllowed` specifies the maximum allowed resources for the database.
+- `spec.compute.mssqlserver.controlledResources` specifies the resources that are controlled by the autoscaler.
+- `spec.compute.mssqlserver.containerControlledValues` specifies which resource values should be controlled. The default is "RequestsAndLimits".
+- `spec.opsRequestOptions.apply` has two supported values: `IfReady` & `Always`.
+  Use `IfReady` if you want to process the opsReq only when the database is Ready. Use `Always` if you want the opsReq to be executed irrespective of the database state.
+- `spec.opsRequestOptions.timeout` specifies the maximum time for each step of the opsRequest (given as a duration, e.g. `5m`).
+  If a step doesn't finish within the specified timeout, the ops request will result in failure.
+
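+For example (illustrative numbers only, roughly following the rule above): if the current CPU request of the `mssql` container is `500m` and the recommended value is `530m`, the difference is 6%, which is below the 10% `resourceDiffPercentage`, so no ops request is created. A recommendation of `800m` (a 60% difference) would trigger one.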
+
+Let's create the `MSSQLServerAutoscaler` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mssqlserver/autoscaler/compute/ms-as-compute.yaml
+mssqlserverautoscaler.autoscaling.kubedb.com/ms-as-compute created
+```
+
+#### Verify Autoscaling is set up successfully
+
+Let's check that the `mssqlserverautoscaler` resource is created successfully,
+
+```bash
+$ kubectl get mssqlserverautoscaler -n demo
+NAME AGE
+ms-as-compute 16s
+
+$ kubectl describe mssqlserverautoscaler ms-as-compute -n demo
+Name: ms-as-compute
+Namespace: demo
+Labels:                       <none>
+Annotations:                  <none>
+API Version: autoscaling.kubedb.com/v1alpha1
+Kind: MSSQLServerAutoscaler
+Metadata:
+ Creation Timestamp: 2024-10-25T15:02:58Z
+ Generation: 1
+ Resource Version: 106200
+ UID: cc34737b-2e42-4b94-bcc4-cfcac98eb6a6
+Spec:
+ Compute:
+ Mssqlserver:
+ Container Controlled Values: RequestsAndLimits
+ Controlled Resources:
+ cpu
+ memory
+ Max Allowed:
+ Cpu: 1
+ Memory: 3Gi
+ Min Allowed:
+ Cpu: 800m
+ Memory: 2Gi
+ Pod Life Time Threshold: 5m
+ Resource Diff Percentage: 10
+ Trigger: On
+ Database Ref:
+ Name: mssqlserver-ag-cluster
+ Ops Request Options:
+ Apply: IfReady
+ Timeout: 5m
+Status:
+ Checkpoints:
+ Cpu Histogram:
+ Bucket Weights:
+ Index: 0
+ Weight: 524
+ Index: 20
+ Weight: 456
+ Index: 28
+ Weight: 2635
+ Index: 34
+ Weight: 455
+ Index: 35
+ Weight: 10000
+ Index: 36
+ Weight: 6980
+ Reference Timestamp: 2024-10-25T15:10:00Z
+ Total Weight: 2.465794209092962
+ First Sample Start: 2024-10-25T15:03:11Z
+ Last Sample Start: 2024-10-25T15:13:21Z
+ Last Update Time: 2024-10-25T15:13:34Z
+ Memory Histogram:
+ Bucket Weights:
+ Index: 36
+ Weight: 10000
+ Index: 37
+ Weight: 5023
+ Index: 39
+ Weight: 5710
+ Index: 40
+ Weight: 2918
+ Reference Timestamp: 2024-10-25T15:15:00Z
+ Total Weight: 2.8324869288693995
+ Ref:
+ Container Name: mssql
+ Vpa Object Name: mssqlserver-ag-cluster
+ Total Samples Count: 30
+ Version: v3
+ Cpu Histogram:
+ Bucket Weights:
+ Index: 0
+ Weight: 10000
+ Index: 1
+ Weight: 3741
+ Index: 2
+ Weight: 1924
+ Reference Timestamp: 2024-10-25T15:10:00Z
+ Total Weight: 2.033798492571757
+ First Sample Start: 2024-10-25T15:03:11Z
+ Last Sample Start: 2024-10-25T15:12:22Z
+ Last Update Time: 2024-10-25T15:12:34Z
+ Memory Histogram:
+ Bucket Weights:
+ Index: 3
+ Weight: 1357
+ Index: 4
+ Weight: 10000
+ Reference Timestamp: 2024-10-25T15:15:00Z
+ Total Weight: 2.8324869288693995
+ Ref:
+ Container Name: mssql-coordinator
+ Vpa Object Name: mssqlserver-ag-cluster
+ Total Samples Count: 26
+ Version: v3
+ Conditions:
+ Last Transition Time: 2024-10-25T15:10:27Z
+ Message: Successfully created MSSQLServerOpsRequest demo/msops-mssqlserver-ag-cluster-v5xep9
+ Observed Generation: 1
+ Reason: CreateOpsRequest
+ Status: True
+ Type: CreateOpsRequest
+ Vpas:
+ Conditions:
+ Last Transition Time: 2024-10-25T15:03:34Z
+ Status: True
+ Type: RecommendationProvided
+ Recommendation:
+ Container Recommendations:
+ Container Name: mssql
+ Lower Bound:
+ Cpu: 844m
+ Memory: 2Gi
+ Target:
+ Cpu: 1
+ Memory: 2Gi
+ Uncapped Target:
+ Cpu: 1168m
+ Memory: 1389197403
+ Upper Bound:
+ Cpu: 1
+ Memory: 3Gi
+ Container Name: mssql-coordinator
+ Lower Bound:
+ Cpu: 50m
+ Memory: 131072k
+ Target:
+ Cpu: 50m
+ Memory: 131072k
+ Uncapped Target:
+ Cpu: 50m
+ Memory: 131072k
+ Upper Bound:
+ Cpu: 4992m
+ Memory: 9063982612
+ Vpa Name: mssqlserver-ag-cluster
+Events:
+```
+
+So, the `mssqlserverautoscaler` resource is created successfully.
+
+We can verify from the above output that `status.vpas` contains the `RecommendationProvided` condition set to `True`. At the same time, `status.vpas.recommendation.containerRecommendations` contains the actual generated recommendations.
+
+Our autoscaler operator continuously watches the generated recommendation and creates an `mssqlserveropsrequest` based on it if the database pod resources need to be scaled up or down.
+
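+If you want to inspect the generated recommendation directly before an ops request shows up, you can query the autoscaler status with `jq` (a minimal sketch; the field names follow the describe output above):
+
+```bash
+$ kubectl get mssqlserverautoscaler -n demo ms-as-compute -o json \
+    | jq '.status.vpas[].recommendation.containerRecommendations[] | {containerName, target}'
+```
+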
+Let's watch the `mssqlserveropsrequest` objects in the demo namespace to see if any are created. After some time, you'll see that a `mssqlserveropsrequest` is created based on the recommendation.
+
+```bash
+$ kubectl get mssqlserveropsrequest -n demo
+NAME TYPE STATUS AGE
+msops-mssqlserver-ag-cluster-6xc1kc VerticalScaling Progressing 7s
+```
+
+Let's wait for the ops request to become successful.
+
+```bash
+$ kubectl get mssqlserveropsrequest -n demo
+NAME TYPE STATUS AGE
+msops-mssqlserver-ag-cluster-8li26q VerticalScaling Successful 11m
+```
+
+We can see from the above output that the `MSSQLServerOpsRequest` has succeeded. If we describe the `MSSQLServerOpsRequest` we will get an overview of the steps that were followed to scale the database.
+
+```bash
+$ kubectl describe msops -n demo msops-mssqlserver-ag-cluster-8li26q
+Name: msops-mssqlserver-ag-cluster-8li26q
+Namespace: demo
+Labels: app.kubernetes.io/component=database
+ app.kubernetes.io/instance=mssqlserver-ag-cluster
+ app.kubernetes.io/managed-by=kubedb.com
+ app.kubernetes.io/name=mssqlservers.kubedb.com
+Annotations:                  <none>
+API Version: ops.kubedb.com/v1alpha1
+Kind: MSSQLServerOpsRequest
+Metadata:
+ Creation Timestamp: 2024-10-25T15:04:27Z
+ Generation: 1
+ Owner References:
+ API Version: autoscaling.kubedb.com/v1alpha1
+ Block Owner Deletion: true
+ Controller: true
+ Kind: MSSQLServerAutoscaler
+ Name: ms-as-compute
+ UID: cc34737b-2e42-4b94-bcc4-cfcac98eb6a6
+ Resource Version: 105300
+ UID: b2f29a6a-f4cf-4c97-871c-f203e08af320
+Spec:
+ Apply: IfReady
+ Database Ref:
+ Name: mssqlserver-ag-cluster
+ Timeout: 5m0s
+ Type: VerticalScaling
+ Vertical Scaling:
+ Mssqlserver:
+ Resources:
+ Limits:
+ Cpu: 960m
+ Memory: 2290649225
+ Requests:
+ Cpu: 800m
+ Memory: 2Gi
+Status:
+ Conditions:
+ Last Transition Time: 2024-10-25T15:04:27Z
+ Message: MSSQLServer ops-request has started to vertically scaling the MSSQLServer nodes
+ Observed Generation: 1
+ Reason: VerticalScaling
+ Status: True
+ Type: VerticalScaling
+ Last Transition Time: 2024-10-25T15:04:30Z
+ Message: Successfully paused database
+ Observed Generation: 1
+ Reason: DatabasePauseSucceeded
+ Status: True
+ Type: DatabasePauseSucceeded
+ Last Transition Time: 2024-10-25T15:04:30Z
+ Message: Successfully updated PetSets Resources
+ Observed Generation: 1
+ Reason: UpdatePetSets
+ Status: True
+ Type: UpdatePetSets
+ Last Transition Time: 2024-10-25T15:04:35Z
+ Message: get pod; ConditionStatus:True; PodName:mssqlserver-ag-cluster-0
+ Observed Generation: 1
+ Status: True
+ Type: GetPod--mssqlserver-ag-cluster-0
+ Last Transition Time: 2024-10-25T15:04:35Z
+ Message: evict pod; ConditionStatus:True; PodName:mssqlserver-ag-cluster-0
+ Observed Generation: 1
+ Status: True
+ Type: EvictPod--mssqlserver-ag-cluster-0
+ Last Transition Time: 2024-10-25T15:05:15Z
+ Message: check pod running; ConditionStatus:True; PodName:mssqlserver-ag-cluster-0
+ Observed Generation: 1
+ Status: True
+ Type: CheckPodRunning--mssqlserver-ag-cluster-0
+ Last Transition Time: 2024-10-25T15:05:20Z
+ Message: get pod; ConditionStatus:True; PodName:mssqlserver-ag-cluster-1
+ Observed Generation: 1
+ Status: True
+ Type: GetPod--mssqlserver-ag-cluster-1
+ Last Transition Time: 2024-10-25T15:05:20Z
+ Message: evict pod; ConditionStatus:True; PodName:mssqlserver-ag-cluster-1
+ Observed Generation: 1
+ Status: True
+ Type: EvictPod--mssqlserver-ag-cluster-1
+ Last Transition Time: 2024-10-25T15:05:55Z
+ Message: check pod running; ConditionStatus:True; PodName:mssqlserver-ag-cluster-1
+ Observed Generation: 1
+ Status: True
+ Type: CheckPodRunning--mssqlserver-ag-cluster-1
+ Last Transition Time: 2024-10-25T15:06:00Z
+ Message: get pod; ConditionStatus:True; PodName:mssqlserver-ag-cluster-2
+ Observed Generation: 1
+ Status: True
+ Type: GetPod--mssqlserver-ag-cluster-2
+ Last Transition Time: 2024-10-25T15:06:00Z
+ Message: evict pod; ConditionStatus:True; PodName:mssqlserver-ag-cluster-2
+ Observed Generation: 1
+ Status: True
+ Type: EvictPod--mssqlserver-ag-cluster-2
+ Last Transition Time: 2024-10-25T15:06:35Z
+ Message: check pod running; ConditionStatus:True; PodName:mssqlserver-ag-cluster-2
+ Observed Generation: 1
+ Status: True
+ Type: CheckPodRunning--mssqlserver-ag-cluster-2
+ Last Transition Time: 2024-10-25T15:06:40Z
+ Message: Successfully Restarted Pods With Resources
+ Observed Generation: 1
+ Reason: RestartPods
+ Status: True
+ Type: RestartPods
+ Last Transition Time: 2024-10-25T15:06:40Z
+ Message: Successfully completed the VerticalScaling for MSSQLServer
+ Observed Generation: 1
+ Reason: Successful
+ Status: True
+ Type: Successful
+ Observed Generation: 1
+ Phase: Successful
+```
+
+Now, we are going to verify from the Pod and the MSSQLServer YAML whether the resources of the cluster database have been updated to meet the desired state. Let's check,
+
+```bash
+$ kubectl get pod -n demo mssqlserver-ag-cluster-0 -o json | jq '.spec.containers[0].resources'
+{
+ "limits": {
+ "cpu": "960m",
+ "memory": "2290649225"
+ },
+ "requests": {
+ "cpu": "800m",
+ "memory": "2Gi"
+ }
+}
+
+$ kubectl get ms -n demo mssqlserver-ag-cluster -o json | jq '.spec.podTemplate.spec.containers[] | select(.name == "mssql") | .resources'
+{
+ "limits": {
+ "cpu": "960m",
+ "memory": "2290649225"
+ },
+ "requests": {
+ "cpu": "800m",
+ "memory": "2Gi"
+ }
+}
+```
+
+
+The above output verifies that we have successfully autoscaled the resources of the MSSQLServer cluster.
+
+
+### Autoscaling for Standalone MSSQLServer
+Autoscaling for a standalone MSSQLServer is exactly the same as for cluster mode. Just reference the standalone MSSQLServer object in the `databaseRef` field of the `MSSQLServerAutoscaler` spec, as shown in the sketch below.
+
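+Below is a minimal sketch of such an autoscaler (the standalone object name `mssqlserver-standalone` and the resource bounds are illustrative; adjust them to your setup):
+
+```yaml
+apiVersion: autoscaling.kubedb.com/v1alpha1
+kind: MSSQLServerAutoscaler
+metadata:
+  name: ms-as-compute-standalone
+  namespace: demo
+spec:
+  databaseRef:
+    name: mssqlserver-standalone   # the standalone MSSQLServer object
+  opsRequestOptions:
+    timeout: 5m
+    apply: IfReady
+  compute:
+    mssqlserver:
+      trigger: "On"
+      podLifeTimeThreshold: 5m
+      resourceDiffPercentage: 10
+      minAllowed:
+        cpu: 800m
+        memory: 2Gi
+      maxAllowed:
+        cpu: 1
+        memory: 3Gi
+      containerControlledValues: "RequestsAndLimits"
+      controlledResources: ["cpu", "memory"]
+```
+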
+## Cleaning Up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete mssqlserver -n demo mssqlserver-ag-cluster
+kubectl delete mssqlserverautoscaler -n demo ms-as-compute
+kubectl delete issuer -n demo mssqlserver-ca-issuer
+kubectl delete secret -n demo mssqlserver-ca
+kubectl delete ns demo
+```
\ No newline at end of file
diff --git a/docs/guides/mssqlserver/autoscaler/compute/overview.md b/docs/guides/mssqlserver/autoscaler/compute/overview.md
new file mode 100644
index 0000000000..ac034a16bb
--- /dev/null
+++ b/docs/guides/mssqlserver/autoscaler/compute/overview.md
@@ -0,0 +1,55 @@
+---
+title: MSSQLServer Compute Autoscaling Overview
+menu:
+ docs_{{ .version }}:
+ identifier: ms-autoscaling-overview
+ name: Overview
+ parent: ms-compute-autoscaling
+ weight: 10
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# MSSQLServer Compute Resource Autoscaling
+
+This guide gives an overview of how the KubeDB Autoscaler operator autoscales the database compute resources, i.e. cpu and memory, using the `MSSQLServerAutoscaler` CRD.
+
+## Before You Begin
+
+- You should be familiar with the following `KubeDB` concepts:
+ - [MSSQLServer](/docs/guides/mssqlserver/concepts/mssqlserver.md)
+ - [MSSQLServerOpsRequest](/docs/guides/mssqlserver/concepts/opsrequest.md)
+ - [MSSQLServerAutoscaler](/docs/guides/mssqlserver/concepts/autoscaler.md)
+
+## How Compute Autoscaling Works
+
+The following diagram shows how KubeDB Autoscaler operator autoscales the resources of `MSSQLServer` database components. Open the image in a new tab to see the enlarged version.
+
+
+
+The Auto Scaling process consists of the following steps:
+
+1. At first, a user creates a `MSSQLServer` Custom Resource Object (CRO).
+
+2. `KubeDB` Provisioner operator watches the `MSSQLServer` CRO.
+
+3. When the operator finds a `MSSQLServer` CRO, it creates the required number of `PetSets` and other necessary resources like secrets, services, etc.
+
+4. Then, in order to set up autoscaling of the `MSSQLServer` database the user creates a `MSSQLServerAutoscaler` CRO with desired configuration.
+
+5. `KubeDB` Autoscaler operator watches the `MSSQLServerAutoscaler` CRO.
+
+6. `KubeDB` Autoscaler operator generates recommendations using a modified version of the Kubernetes [official recommender](https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler/pkg/recommender) for different components of the database, as specified in the `MSSQLServerAutoscaler` CRO.
+
+7. If the generated recommendation doesn't match the current resources of the database, then `KubeDB` Autoscaler operator creates a `MSSQLServerOpsRequest` CRO to scale the database to match the recommendation generated.
+
+8. `KubeDB` Ops-manager operator watches the `MSSQLServerOpsRequest` CRO.
+
+9. Then the `KubeDB` Ops-manager operator will scale the database component vertically as specified on the `MSSQLServerOpsRequest` CRO.
+
+In the next docs, we are going to show a step-by-step guide on autoscaling various MSSQLServer databases using the `MSSQLServerAutoscaler` CRD.
diff --git a/docs/guides/mssqlserver/autoscaler/storage/_index.md b/docs/guides/mssqlserver/autoscaler/storage/_index.md
new file mode 100644
index 0000000000..58a3cd83a4
--- /dev/null
+++ b/docs/guides/mssqlserver/autoscaler/storage/_index.md
@@ -0,0 +1,10 @@
+---
+title: Storage Autoscaling
+menu:
+ docs_{{ .version }}:
+ identifier: ms-storage-autoscaling
+ name: Storage Autoscaling
+ parent: ms-autoscaling
+ weight: 20
+menu_name: docs_{{ .version }}
+---
diff --git a/docs/guides/mssqlserver/autoscaler/storage/cluster.md b/docs/guides/mssqlserver/autoscaler/storage/cluster.md
new file mode 100644
index 0000000000..b48551aa4f
--- /dev/null
+++ b/docs/guides/mssqlserver/autoscaler/storage/cluster.md
@@ -0,0 +1,452 @@
+---
+title: MSSQLServer Cluster Autoscaling
+menu:
+ docs_{{ .version }}:
+ identifier: ms-storage-autoscaling-cluster
+ name: Cluster
+ parent: ms-storage-autoscaling
+ weight: 20
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Storage Autoscaling of a MSSQLServer Availability Group Cluster
+
+This guide will show you how to use `KubeDB` to autoscale the storage of a MSSQLServer Availability Group Cluster.
+
+## Before You Begin
+
+- You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Now, install the KubeDB cli on your workstation and the KubeDB operator in your cluster following the steps [here](/docs/setup/README.md). Make sure to include `--set global.featureGates.MSSQLServer=true` in the helm command so that the MSSQLServer CRDs are installed.
+
+- To configure TLS/SSL in `MSSQLServer`, `KubeDB` uses `cert-manager` to issue certificates. So first you have to make sure that the cluster has `cert-manager` installed. To install `cert-manager` in your cluster, follow the steps [here](https://cert-manager.io/docs/installation/kubernetes/).
+
+- Install `Metrics Server` from [here](https://github.com/kubernetes-sigs/metrics-server#installation)
+
+- Install Prometheus from [here](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack)
+
+- You must have a `StorageClass` that supports volume expansion.
+
+- You should be familiar with the following `KubeDB` concepts:
+ - [MSSQLServer](/docs/guides/mssqlserver/concepts/mssqlserver.md)
+ - [MSSQLServerOpsRequest](/docs/guides/mssqlserver/concepts/opsrequest.md)
+ - [MSSQLServerAutoscaler](/docs/guides/mssqlserver/concepts/autoscaler.md)
+ - [Storage Autoscaling Overview](/docs/guides/mssqlserver/autoscaler/storage/overview.md)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+## Storage Autoscaling MSSQLServer Cluster
+
+First, verify that your cluster has a storage class that supports volume expansion. Let's check,
+
+```bash
+$ kubectl get storageclass
+NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
+local-path (default) rancher.io/local-path Delete WaitForFirstConsumer false 4d21h
+longhorn (default) driver.longhorn.io Delete Immediate true 2d20h
+longhorn-static driver.longhorn.io Delete Immediate true 2d20h
+```
+
+We can see from the output that the `longhorn` storage class has the `ALLOWVOLUMEEXPANSION` field set to `true`. So, this storage class supports volume expansion, and we can use it.
+
+Now, we are going to deploy a `MSSQLServer` cluster using a version supported by the `KubeDB` operator. Then we are going to apply a `MSSQLServerAutoscaler` to set up autoscaling.
+
+#### Deploy MSSQLServer Cluster
+
+First, an issuer needs to be created, even if TLS is not enabled for SQL Server. The issuer will be used to configure the TLS-enabled Wal-G proxy server, which is required for the SQL Server backup and restore operations.
+
+### Create Issuer/ClusterIssuer
+
+Now, we are going to create an example `Issuer` that will be used throughout the duration of this tutorial. Alternatively, you can follow this [cert-manager tutorial](https://cert-manager.io/docs/configuration/ca/) to create your own `Issuer`. By following the below steps, we are going to create our desired issuer,
+
+- Start off by generating our ca-certificates using openssl,
+```bash
+openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ./ca.key -out ./ca.crt -subj "/CN=MSSQLServer/O=kubedb"
+```
+- Create a secret using the certificate files we have just generated,
+```bash
+$ kubectl create secret tls mssqlserver-ca --cert=ca.crt --key=ca.key --namespace=demo
+secret/mssqlserver-ca created
+```
+Now, we are going to create an `Issuer` using the `mssqlserver-ca` secret that contains the ca-certificate we have just created. Below is the YAML of the `Issuer` CR that we are going to create,
+
+```yaml
+apiVersion: cert-manager.io/v1
+kind: Issuer
+metadata:
+ name: mssqlserver-ca-issuer
+ namespace: demo
+spec:
+ ca:
+ secretName: mssqlserver-ca
+```
+
+Let’s create the `Issuer` CR we have shown above,
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mssqlserver/ag-cluster/mssqlserver-ca-issuer.yaml
+issuer.cert-manager.io/mssqlserver-ca-issuer created
+```
+
+Now, we are going to deploy a MSSQLServer cluster database with version `2022-cu12`. Then, in the next section we will set up autoscaling for this database using `MSSQLServerAutoscaler` CRD. Below is the YAML of the `MSSQLServer` CR that we are going to create,
+
+> If you want to autoscale a standalone MSSQLServer, just deploy a [standalone](/docs/guides/mssqlserver/clustering/standalone.md) SQL Server instance using KubeDB.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MSSQLServer
+metadata:
+ name: mssqlserver-ag-cluster
+ namespace: demo
+spec:
+ version: "2022-cu12"
+ replicas: 3
+ topology:
+ mode: AvailabilityGroup
+ availabilityGroup:
+ databases:
+ - agdb1
+ - agdb2
+ tls:
+ issuerRef:
+ name: mssqlserver-ca-issuer
+ kind: Issuer
+ apiGroup: "cert-manager.io"
+ clientTLS: false
+ podTemplate:
+ spec:
+ containers:
+ - name: mssql
+ env:
+ - name: ACCEPT_EULA
+ value: "Y"
+ - name: MSSQL_PID
+ value: Evaluation # Change it
+ resources:
+ requests:
+ cpu: "500m"
+ memory: "1.5Gi"
+ limits:
+ cpu: "600m"
+ memory: "1.6Gi"
+ storageType: Durable
+ storage:
+ storageClassName: "longhorn"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
+ deletionPolicy: WipeOut
+```
+
+Let's create the `MSSQLServer` CRO we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mssqlserver/autoscaler/storage/mssqlserver-ag-cluster.yaml
+mssqlserver.kubedb.com/mssqlserver-ag-cluster created
+```
+
+Now, wait until `mssqlserver-ag-cluster` has status `Ready`, i.e.,
+
+```bash
+$ kubectl get mssqlserver -n demo
+NAME VERSION STATUS AGE
+mssqlserver-ag-cluster 2022-cu12 Ready 4m
+```
+
+Let's check the volume size from the petset and from the persistent volumes,
+
+```bash
+$ kubectl get petset -n demo mssqlserver-ag-cluster -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
+"1Gi"
+
+$ kubectl get pv -n demo
+NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS VOLUMEATTRIBUTESCLASS REASON AGE
+pvc-1497dd6d-9cbd-467a-8e0c-c3963ce09e1b 1Gi RWO Delete Bound demo/data-mssqlserver-ag-cluster-1 longhorn 8m
+pvc-37a7bc8d-2c04-4eb4-8e53-e610fd1daaf5 1Gi RWO Delete Bound demo/data-mssqlserver-ag-cluster-0 longhorn 8m
+pvc-817866af-5277-4d51-8d81-434e8ec1c442 1Gi RWO Delete Bound demo/data-mssqlserver-ag-cluster-2 longhorn 8m
+```
+
+You can see that the petset has 1Gi of storage, and the capacity of each persistent volume is also 1Gi.
+
+We are now ready to apply the `MSSQLServerAutoscaler` CRO to set up storage autoscaling for this database.
+
+### Storage Autoscaling
+
+Here, we are going to set up storage autoscaling using a `MSSQLServerAutoscaler` Object.
+
+#### Create MSSQLServerAutoscaler Object
+
+In order to set up storage autoscaling for this database cluster, we have to create a `MSSQLServerAutoscaler` CRO with our desired configuration. Below is the YAML of the `MSSQLServerAutoscaler` object that we are going to create,
+
+```yaml
+apiVersion: autoscaling.kubedb.com/v1alpha1
+kind: MSSQLServerAutoscaler
+metadata:
+ name: ms-as-storage
+ namespace: demo
+spec:
+ databaseRef:
+ name: mssqlserver-ag-cluster
+ storage:
+ mssqlserver:
+ trigger: "On"
+ usageThreshold: 60
+ scalingThreshold: 50
+ expansionMode: "Offline"
+ upperBound: "100Gi"
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing the volume expansion operation on the `mssqlserver-ag-cluster` database.
+- `spec.storage.mssqlserver.trigger` specifies that storage autoscaling is enabled for this database.
+- `spec.storage.mssqlserver.usageThreshold` specifies the storage usage threshold; if storage usage exceeds `60%`, storage autoscaling will be triggered.
+- `spec.storage.mssqlserver.scalingThreshold` specifies the scaling threshold; the storage will be increased by `50%` of the current amount (see the rough numbers below this list).
+- `spec.storage.mssqlserver.expansionMode` specifies the expansion mode of the volume expansion `MSSQLServerOpsRequest` created by the `MSSQLServerAutoscaler`. Since `longhorn` supports offline volume expansion, `expansionMode` is set to "Offline" here.
+
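+To put rough numbers on this (based on what we will observe later in this guide): the volumes here start at 1Gi, so once usage crosses the 60% `usageThreshold`, the operator asks for roughly 50% more capacity; the persistent volumes end up at about 1.4Gi (the exact figure, `1531054080` bytes in this demo, is computed by the operator), and `upperBound` caps how far the autoscaler will grow the volume (100Gi here).
+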
+Let's create the `MSSQLServerAutoscaler` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mssqlserver/autoscaler/storage/ms-as-storage.yaml
+mssqlserverautoscaler.autoscaling.kubedb.com/ms-as-storage created
+```
+
+#### Verify Storage Autoscaling is set up successfully
+
+Let's check that the `mssqlserverautoscaler` resource is created successfully,
+
+```bash
+$ kubectl get mssqlserverautoscaler -n demo
+NAME AGE
+ms-as-storage 17s
+
+
+$ kubectl describe mssqlserverautoscaler ms-as-storage -n demo
+Name: ms-as-storage
+Namespace: demo
+Labels:                       <none>
+Annotations:                  <none>
+API Version: autoscaling.kubedb.com/v1alpha1
+Kind: MSSQLServerAutoscaler
+Metadata:
+ Creation Timestamp: 2024-11-01T09:39:54Z
+ Generation: 1
+ Resource Version: 922388
+ UID: 1e239b31-c6c8-4e2c-8cf6-2b95a88b9d45
+Spec:
+ Database Ref:
+ Name: mssqlserver-ag-cluster
+ Ops Request Options:
+ Apply: IfReady
+ Storage:
+ Mssqlserver:
+ Expansion Mode: Offline
+ Scaling Rules:
+ Applies Upto:
+ Threshold: 50pc
+ Scaling Threshold: 50
+ Trigger: On
+ Upper Bound: 100Gi
+ Usage Threshold: 60
+Events:
+```
+
+So, the `mssqlserverautoscaler` resource is created successfully.
+
+Now, for this demo, we are going to manually fill up the persistent volume to exceed the `usageThreshold` using the `dd` command, so that we can see storage autoscaling in action.
+
+Let's exec into the database pod and fill up the database volume (`/var/opt/mssql/`) using the following commands:
+
+```bash
+$ kubectl exec -it -n demo mssqlserver-ag-cluster-0 -c mssql -- bash
+mssql@mssqlserver-ag-cluster-0:/$ df -h /var/opt/mssql
+Filesystem Size Used Avail Use% Mounted on
+/dev/longhorn/pvc-37a7bc8d-2c04-4eb4-8e53-e610fd1daaf5 974M 274M 685M 29% /var/opt/mssql
+
+mssql@mssqlserver-ag-cluster-0:/$ dd if=/dev/zero of=/var/opt/mssql/file.img bs=120M count=5
+5+0 records in
+5+0 records out
+629145600 bytes (629 MB, 600 MiB) copied, 6.09315 s, 103 MB/s
+mssql@mssqlserver-ag-cluster-0:/$ df -h /var/opt/mssql
+Filesystem Size Used Avail Use% Mounted on
+/dev/longhorn/pvc-37a7bc8d-2c04-4eb4-8e53-e610fd1daaf5 974M 874M 85M 92% /var/opt/mssql
+```
+
+So, from the above output we can see that the storage usage is 92%, which exceeds the 60% `usageThreshold`.
+
+Let's watch the `mssqlserveropsrequest` objects in the demo namespace to see if any are created. After some time, you'll see that a `mssqlserveropsrequest` of type `VolumeExpansion` is created based on the `scalingThreshold`.
+
+
+```bash
+$ watch kubectl get mssqlserveropsrequest -n demo
+NAME TYPE STATUS AGE
+msops-mssqlserver-ag-cluster-8m7l5s VolumeExpansion Progressing 2m20s
+```
+
+Let's wait for the ops request to become successful.
+
+```bash
+$ kubectl get mssqlserveropsrequest -n demo
+NAME TYPE STATUS AGE
+msops-mssqlserver-ag-cluster-8m7l5s VolumeExpansion Successful 17m
+```
+
+We can see from the above output that the `MSSQLServerOpsRequest` has succeeded. If we describe the `MSSQLServerOpsRequest` we will get an overview of the steps that were followed to expand the volume of the database.
+
+```bash
+$ kubectl describe mssqlserveropsrequest -n demo msops-mssqlserver-ag-cluster-8m7l5s
+Name: msops-mssqlserver-ag-cluster-8m7l5s
+Namespace: demo
+Labels: app.kubernetes.io/component=database
+ app.kubernetes.io/instance=mssqlserver-ag-cluster
+ app.kubernetes.io/managed-by=kubedb.com
+ app.kubernetes.io/name=mssqlservers.kubedb.com
+Annotations:                  <none>
+API Version: ops.kubedb.com/v1alpha1
+Kind: MSSQLServerOpsRequest
+Metadata:
+ Creation Timestamp: 2024-11-01T09:40:05Z
+ Generation: 1
+ Owner References:
+ API Version: autoscaling.kubedb.com/v1alpha1
+ Block Owner Deletion: true
+ Controller: true
+ Kind: MSSQLServerAutoscaler
+ Name: ms-as-storage
+ UID: 1e239b31-c6c8-4e2c-8cf6-2b95a88b9d45
+ Resource Version: 924068
+ UID: d0dfbe3d-4f0f-43ec-bdff-6d9f3fa96516
+Spec:
+ Apply: IfReady
+ Database Ref:
+ Name: mssqlserver-ag-cluster
+ Type: VolumeExpansion
+ Volume Expansion:
+ Mode: Offline
+ Mssqlserver: 1531054080
+Status:
+ Conditions:
+ Last Transition Time: 2024-11-01T09:40:05Z
+ Message: MSSQLServer ops-request has started to expand volume of mssqlserver nodes.
+ Observed Generation: 1
+ Reason: VolumeExpansion
+ Status: True
+ Type: VolumeExpansion
+ Last Transition Time: 2024-11-01T09:40:13Z
+ Message: get petset; ConditionStatus:True
+ Observed Generation: 1
+ Status: True
+ Type: GetPetset
+ Last Transition Time: 2024-11-01T09:40:13Z
+ Message: delete petset; ConditionStatus:True
+ Observed Generation: 1
+ Status: True
+ Type: DeletePetset
+ Last Transition Time: 2024-11-01T09:40:23Z
+ Message: successfully deleted the petSets with orphan propagation policy
+ Observed Generation: 1
+ Reason: OrphanPetSetPods
+ Status: True
+ Type: OrphanPetSetPods
+ Last Transition Time: 2024-11-01T09:46:48Z
+ Message: get pod; ConditionStatus:True
+ Observed Generation: 1
+ Status: True
+ Type: GetPod
+ Last Transition Time: 2024-11-01T09:40:28Z
+ Message: patch ops request; ConditionStatus:True
+ Observed Generation: 1
+ Status: True
+ Type: PatchOpsRequest
+ Last Transition Time: 2024-11-01T09:40:28Z
+ Message: delete pod; ConditionStatus:True
+ Observed Generation: 1
+ Status: True
+ Type: DeletePod
+ Last Transition Time: 2024-11-01T09:41:03Z
+ Message: get pvc; ConditionStatus:True
+ Observed Generation: 1
+ Status: True
+ Type: GetPvc
+ Last Transition Time: 2024-11-01T09:41:03Z
+ Message: patch pvc; ConditionStatus:True
+ Observed Generation: 1
+ Status: True
+ Type: PatchPvc
+ Last Transition Time: 2024-11-01T09:48:33Z
+ Message: compare storage; ConditionStatus:True
+ Observed Generation: 1
+ Status: True
+ Type: CompareStorage
+ Last Transition Time: 2024-11-01T09:42:48Z
+ Message: create pod; ConditionStatus:True
+ Observed Generation: 1
+ Status: True
+ Type: CreatePod
+ Last Transition Time: 2024-11-01T09:42:53Z
+ Message: running mssql server; ConditionStatus:False
+ Observed Generation: 1
+ Status: False
+ Type: RunningMssqlServer
+ Last Transition Time: 2024-11-01T09:48:58Z
+ Message: successfully updated node PVC sizes
+ Observed Generation: 1
+ Reason: UpdateNodePVCs
+ Status: True
+ Type: UpdateNodePVCs
+ Last Transition Time: 2024-11-01T09:49:03Z
+ Message: successfully reconciled the MSSQLServer resources
+ Observed Generation: 1
+ Reason: UpdatePetSets
+ Status: True
+ Type: UpdatePetSets
+ Last Transition Time: 2024-11-01T09:49:03Z
+ Message: PetSet is recreated
+ Observed Generation: 1
+ Reason: ReadyPetSets
+ Status: True
+ Type: ReadyPetSets
+ Last Transition Time: 2024-11-01T09:49:03Z
+ Message: Successfully completed volumeExpansion for MSSQLServer
+ Observed Generation: 1
+ Reason: Successful
+ Status: True
+ Type: Successful
+ Observed Generation: 1
+ Phase: Successful
+```
+
+Now, we are going to verify from the `Petset` and the `Persistent Volumes` whether the volume of the database has expanded to meet the desired state. Let's check,
+
+```bash
+$ kubectl get petset -n demo mssqlserver-ag-cluster -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
+"1531054080"
+$ kubectl get pv -n demo
+NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS VOLUMEATTRIBUTESCLASS REASON AGE
+pvc-2ff83356-1bbc-44ab-99f1-025e3690a471 1462Mi RWO Delete Bound demo/data-mssqlserver-ag-cluster-2 longhorn 15m
+pvc-a5cc0ae9-2c8d-456c-ace2-fc4fafc6784f 1462Mi RWO Delete Bound demo/data-mssqlserver-ag-cluster-1 longhorn 16m
+pvc-e8ab47a4-17a6-45fb-9f39-e71a03498ab5 1462Mi RWO Delete Bound demo/data-mssqlserver-ag-cluster-0 longhorn 16m
+```
+
+The above output verifies that we have successfully autoscaled the volume of the MSSQLServer cluster database.
+
+## Cleaning Up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete mssqlserver -n demo mssqlserver-ag-cluster
+kubectl delete mssqlserverautoscaler -n demo ms-as-storage
+kubectl delete issuer -n demo mssqlserver-ca-issuer
+kubectl delete secret -n demo mssqlserver-ca
+kubectl delete ns demo
+```
diff --git a/docs/guides/mssqlserver/autoscaler/storage/overview.md b/docs/guides/mssqlserver/autoscaler/storage/overview.md
new file mode 100644
index 0000000000..2e537cb4f9
--- /dev/null
+++ b/docs/guides/mssqlserver/autoscaler/storage/overview.md
@@ -0,0 +1,57 @@
+---
+title: MSSQLServer Storage Autoscaling Overview
+menu:
+ docs_{{ .version }}:
+ identifier: ms-storage-autoscaling-overview
+ name: Overview
+ parent: ms-storage-autoscaling
+ weight: 10
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# MSSQLServer Storage Autoscaling
+
+This guide gives an overview of how the KubeDB `Autoscaler` operator autoscales the database storage using the `MSSQLServerAutoscaler` CRD.
+
+## Before You Begin
+
+- You should be familiar with the following `KubeDB` concepts:
+ - [MSSQLServer](/docs/guides/mssqlserver/concepts/mssqlserver.md)
+ - [MSSQLServerOpsRequest](/docs/guides/mssqlserver/concepts/opsrequest.md)
+ - [MSSQLServerAutoscaler](/docs/guides/mssqlserver/concepts/autoscaler.md)
+
+## How Storage Autoscaling Works
+
+The following diagram shows how KubeDB Autoscaler operator autoscales the resources of `MSSQLServer`. Open the image in a new tab to see the enlarged version.
+
+
+
+
+The Auto Scaling process consists of the following steps:
+
+1. At first, a user creates a `MSSQLServer` Custom Resource (CR).
+
+2. `KubeDB` Provisioner operator watches the `MSSQLServer` CR.
+
+3. When the operator finds a `MSSQLServer` CR, it creates the required number of `PetSets` and other necessary resources like secrets, services, etc.
+
+4. Each PetSet creates Persistent Volumes according to the Volume Claim Template provided in the petset's configuration.
+
+5. Then, in order to set up storage autoscaling of the `MSSQLServer` database the user creates a `MSSQLServerAutoscaler` CRO with desired configuration.
+
+6. `KubeDB` Autoscaler operator watches the `MSSQLServerAutoscaler` CRO.
+
+7. `KubeDB` Autoscaler operator continuously watches the persistent volumes of the databases to check if their usage exceeds the specified usage threshold.
+
+8. If the usage exceeds the specified usage threshold, then `KubeDB` Autoscaler operator creates a `MSSQLServerOpsRequest` to expand the storage of the database.
+
+9. `KubeDB` Ops-manager operator watches the `MSSQLServerOpsRequest` CRO.
+
+10. Then the `KubeDB` Ops-manager operator will expand the storage of the database as specified on the `MSSQLServerOpsRequest` CRO.
+
+In the next docs, we are going to show a step-by-step guide on autoscaling the storage of MSSQLServer databases using the `MSSQLServerAutoscaler` CRD.
diff --git a/docs/guides/mssqlserver/backup/application-level/index.md b/docs/guides/mssqlserver/backup/application-level/index.md
index 4bc211b149..b5c9c4f0b3 100644
--- a/docs/guides/mssqlserver/backup/application-level/index.md
+++ b/docs/guides/mssqlserver/backup/application-level/index.md
@@ -115,6 +115,15 @@ spec:
kind: Issuer
apiGroup: "cert-manager.io"
clientTLS: false
+ podTemplate:
+ spec:
+ containers:
+ - name: mssql
+ env:
+ - name: ACCEPT_EULA
+ value: "Y"
+ - name: MSSQL_PID
+ value: Evaluation # Change it
storage:
accessModes:
- ReadWriteOnce
diff --git a/docs/guides/mssqlserver/backup/auto-backup/examples/sample-mssqlserver-2.yaml b/docs/guides/mssqlserver/backup/auto-backup/examples/sample-mssqlserver-2.yaml
index 08b6ee8e4d..6d55c8cbfb 100644
--- a/docs/guides/mssqlserver/backup/auto-backup/examples/sample-mssqlserver-2.yaml
+++ b/docs/guides/mssqlserver/backup/auto-backup/examples/sample-mssqlserver-2.yaml
@@ -20,18 +20,21 @@ spec:
databases:
- agdb1
- agdb2
- internalAuth:
- endpointCert:
- issuerRef:
- apiGroup: cert-manager.io
- name: mssqlserver-ca-issuer
- kind: Issuer
tls:
issuerRef:
name: mssqlserver-ca-issuer
kind: Issuer
apiGroup: cert-manager.io
clientTLS: false
+ podTemplate:
+ spec:
+ containers:
+ - name: mssql
+ env:
+ - name: ACCEPT_EULA
+ value: "Y"
+ - name: MSSQL_PID
+ value: Evaluation # Change it
storageType: Durable
storage:
accessModes:
diff --git a/docs/guides/mssqlserver/backup/auto-backup/index.md b/docs/guides/mssqlserver/backup/auto-backup/index.md
index a279524f59..168f4a9145 100644
--- a/docs/guides/mssqlserver/backup/auto-backup/index.md
+++ b/docs/guides/mssqlserver/backup/auto-backup/index.md
@@ -257,6 +257,15 @@ spec:
kind: Issuer
apiGroup: "cert-manager.io"
clientTLS: false
+ podTemplate:
+ spec:
+ containers:
+ - name: mssql
+ env:
+ - name: ACCEPT_EULA
+ value: "Y"
+ - name: MSSQL_PID
+ value: Evaluation # Change it
storage:
accessModes:
- ReadWriteOnce
@@ -614,18 +623,21 @@ spec:
databases:
- agdb1
- agdb2
- internalAuth:
- endpointCert:
- issuerRef:
- apiGroup: cert-manager.io
- name: mssqlserver-ca-issuer
- kind: Issuer
tls:
issuerRef:
name: mssqlserver-ca-issuer
kind: Issuer
apiGroup: cert-manager.io
clientTLS: false
+ podTemplate:
+ spec:
+ containers:
+ - name: mssql
+ env:
+ - name: ACCEPT_EULA
+ value: "Y"
+ - name: MSSQL_PID
+ value: Evaluation # Change it
storageType: Durable
storage:
accessModes:
diff --git a/docs/guides/mssqlserver/backup/customization/common/sample-mssqlserver.yaml b/docs/guides/mssqlserver/backup/customization/common/sample-mssqlserver.yaml
index 7d219df8a0..4d472cc70e 100644
--- a/docs/guides/mssqlserver/backup/customization/common/sample-mssqlserver.yaml
+++ b/docs/guides/mssqlserver/backup/customization/common/sample-mssqlserver.yaml
@@ -12,18 +12,21 @@ spec:
databases:
- agdb1
- agdb2
- internalAuth:
- endpointCert:
- issuerRef:
- apiGroup: cert-manager.io
- name: mssqlserver-ca-issuer
- kind: Issuer
tls:
issuerRef:
name: mssqlserver-ca-issuer
kind: Issuer
apiGroup: cert-manager.io
clientTLS: false
+ podTemplate:
+ spec:
+ containers:
+ - name: mssql
+ env:
+ - name: ACCEPT_EULA
+ value: "Y"
+ - name: MSSQL_PID
+ value: Evaluation # Change it
storageType: Durable
storage:
accessModes:
diff --git a/docs/guides/mssqlserver/backup/logical/index.md b/docs/guides/mssqlserver/backup/logical/index.md
index 8d1d7c5622..c9e124c6a2 100644
--- a/docs/guides/mssqlserver/backup/logical/index.md
+++ b/docs/guides/mssqlserver/backup/logical/index.md
@@ -113,6 +113,15 @@ spec:
kind: Issuer
apiGroup: "cert-manager.io"
clientTLS: false
+ podTemplate:
+ spec:
+ containers:
+ - name: mssql
+ env:
+ - name: ACCEPT_EULA
+ value: "Y"
+ - name: MSSQL_PID
+ value: Evaluation # Change it
storage:
accessModes:
- ReadWriteOnce
@@ -619,6 +628,15 @@ spec:
kind: Issuer
apiGroup: "cert-manager.io"
clientTLS: false
+ podTemplate:
+ spec:
+ containers:
+ - name: mssql
+ env:
+ - name: ACCEPT_EULA
+ value: "Y"
+ - name: MSSQL_PID
+ value: Evaluation # Change it
storage:
accessModes:
- ReadWriteOnce
diff --git a/docs/guides/mssqlserver/clustering/ag_cluster.md b/docs/guides/mssqlserver/clustering/ag_cluster.md
index fac342f09c..e12581060a 100644
--- a/docs/guides/mssqlserver/clustering/ag_cluster.md
+++ b/docs/guides/mssqlserver/clustering/ag_cluster.md
@@ -92,6 +92,7 @@ issuer.cert-manager.io/mssqlserver-ca-issuer created
```
### Configuring Environment Variables for SQL Server on Linux
+You can use environment variables to configure SQL Server on Linux containers.
When deploying `Microsoft SQL Server` on Linux using `containers`, you need to specify the `product edition` through the [MSSQL_PID](https://mcr.microsoft.com/en-us/product/mssql/server/about#configuration:~:text=MSSQL_PID%20is%20the,documentation%20here.) environment variable. This variable determines which `SQL Server edition` will run inside the container. The acceptable values for `MSSQL_PID` are:
`Developer`: This will run the container using the Developer Edition (this is the default if no MSSQL_PID environment variable is supplied)
`Express`: This will run the container using the Express Edition
@@ -100,9 +101,11 @@ When deploying `Microsoft SQL Server` on Linux using `containers`, you need to s
`EnterpriseCore`: This will run the container using the Enterprise Edition Core
``: This will run the container with the edition that is associated with the PID
+`ACCEPT_EULA` confirms your acceptance of the [End-User Licensing Agreement](https://go.microsoft.com/fwlink/?linkid=857698).
+
For a complete list of environment variables that can be used, refer to the documentation [here](https://learn.microsoft.com/en-us/sql/linux/sql-server-linux-configure-environment-variables?view=sql-server-2017).
-Below is an example of how to configure the `MSSQL_PID` environment variable in the KubeDB MSSQLServer Custom Resource Definition (CRD):
+Below is an example of how to configure the `MSSQL_PID` and `ACCEPT_EULA` environment variable in the KubeDB MSSQLServer Custom Resource Definition (CRD):
```bash
metadata:
name: mssqlserver
@@ -111,10 +114,12 @@ spec:
podTemplate:
spec:
containers:
- - name: mssql
- env:
- - name: MSSQL_PID
- value: Enterprise
+ - name: mssql
+ env:
+ - name: ACCEPT_EULA
+ value: "Y"
+ - name: MSSQL_PID
+ value: Enterprise
```
In this example, the SQL Server container will run the Enterprise Edition.
@@ -139,18 +144,21 @@ spec:
databases:
- agdb1
- agdb2
- internalAuth:
- endpointCert:
- issuerRef:
- apiGroup: cert-manager.io
- name: mssqlserver-ca-issuer
- kind: Issuer
tls:
issuerRef:
name: mssqlserver-ca-issuer
kind: Issuer
apiGroup: "cert-manager.io"
clientTLS: false
+ podTemplate:
+ spec:
+ containers:
+ - name: mssql
+ env:
+ - name: ACCEPT_EULA
+ value: "Y"
+ - name: MSSQL_PID
+ value: Evaluation # Change it
storageType: Durable
storage:
storageClassName: "standard"
@@ -242,33 +250,11 @@ metadata:
spec:
authSecret:
name: mssqlserver-ag-cluster-auth
- coordinator:
- resources: {}
deletionPolicy: WipeOut
healthChecker:
failureThreshold: 1
periodSeconds: 10
timeoutSeconds: 10
- internalAuth:
- endpointCert:
- certificates:
- - alias: endpoint
- secretName: mssqlserver-ag-cluster-endpoint-cert
- subject:
- organizationalUnits:
- - endpoint
- organizations:
- - kubedb
- issuerRef:
- apiGroup: cert-manager.io
- kind: Issuer
- name: mssqlserver-ca-issuer
- leaderElection:
- electionTick: 10
- heartbeatTick: 1
- period: 300ms
- transferLeadershipInterval: 1s
- transferLeadershipTimeout: 1m0s
podTemplate:
controller: {}
metadata: {}
@@ -861,6 +847,6 @@ If you are just testing some basic functionalities, you might want to avoid addi
## Next Steps
- Learn about [backup and restore](/docs/guides/mssqlserver/backup/overview/index.md) SQL Server using KubeStash.
-- Want to set up SQL Server Availability Group clusters? Check how to [Configure SQL Server Availability Gruop Cluster](/docs/guides/mssqlserver/clustering/ag_cluster.md)
-- Detail concepts of [MSSQLServer object](/docs/guides/mssqlserver/concepts/mssqlserver.md).
+- Want to set up SQL Server Availability Group clusters? Check how to [Configure SQL Server Availability Group Cluster](/docs/guides/mssqlserver/clustering/ag_cluster.md)
+- Detail concepts of [MSSQLServer Object](/docs/guides/mssqlserver/concepts/mssqlserver.md).
- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).
\ No newline at end of file
diff --git a/docs/guides/mssqlserver/clustering/standalone.md b/docs/guides/mssqlserver/clustering/standalone.md
index 0506eacfe0..daed13436e 100644
--- a/docs/guides/mssqlserver/clustering/standalone.md
+++ b/docs/guides/mssqlserver/clustering/standalone.md
@@ -94,6 +94,7 @@ issuer.cert-manager.io/mssqlserver-ca-issuer created
```
### Configuring Environment Variables for SQL Server on Linux
+You can use environment variables to configure SQL Server on Linux containers.
When deploying `Microsoft SQL Server` on Linux using `containers`, you need to specify the `product edition` through the [MSSQL_PID](https://mcr.microsoft.com/en-us/product/mssql/server/about#configuration:~:text=MSSQL_PID%20is%20the,documentation%20here.) environment variable. This variable determines which `SQL Server edition` will run inside the container. The acceptable values for `MSSQL_PID` are:
`Developer`: This will run the container using the Developer Edition (this is the default if no MSSQL_PID environment variable is supplied)
`Express`: This will run the container using the Express Edition
@@ -102,9 +103,11 @@ When deploying `Microsoft SQL Server` on Linux using `containers`, you need to s
`EnterpriseCore`: This will run the container using the Enterprise Edition Core
``: This will run the container with the edition that is associated with the PID
+`ACCEPT_EULA` confirms your acceptance of the [End-User Licensing Agreement](https://go.microsoft.com/fwlink/?linkid=857698).
+
For a complete list of environment variables that can be used, refer to the documentation [here](https://learn.microsoft.com/en-us/sql/linux/sql-server-linux-configure-environment-variables?view=sql-server-2017).
-Below is an example of how to configure the `MSSQL_PID` environment variable in the KubeDB MSSQLServer Custom Resource Definition (CRD):
+Below is an example of how to configure the `MSSQL_PID` and `ACCEPT_EULA` environment variable in the KubeDB MSSQLServer Custom Resource Definition (CRD):
```bash
metadata:
name: mssqlserver
@@ -113,10 +116,12 @@ spec:
podTemplate:
spec:
containers:
- - name: mssql
- env:
- - name: MSSQL_PID
- value: Enterprise
+ - name: mssql
+ env:
+ - name: ACCEPT_EULA
+ value: "Y"
+ - name: MSSQL_PID
+ value: Enterprise
```
In this example, the SQL Server container will run the Enterprise Edition.
@@ -142,6 +147,15 @@ spec:
kind: Issuer
apiGroup: "cert-manager.io"
clientTLS: false
+ podTemplate:
+ spec:
+ containers:
+ - name: mssql
+ env:
+ - name: ACCEPT_EULA
+ value: "Y"
+ - name: MSSQL_PID
+ value: Evaluation # Change it
storage:
storageClassName: "standard"
accessModes:
@@ -216,8 +230,6 @@ metadata:
spec:
authSecret:
name: mssqlserver-standalone-auth
- coordinator:
- resources: {}
deletionPolicy: WipeOut
healthChecker:
failureThreshold: 1
diff --git a/docs/guides/mssqlserver/concepts/autoscaler.md b/docs/guides/mssqlserver/concepts/autoscaler.md
new file mode 100644
index 0000000000..60294932bc
--- /dev/null
+++ b/docs/guides/mssqlserver/concepts/autoscaler.md
@@ -0,0 +1,108 @@
+---
+title: MSSQLServerAutoscaler CRD
+menu:
+ docs_{{ .version }}:
+ identifier: ms-concepts-autoscaler
+ name: MSSQLServerAutoscaler
+ parent: ms-concepts
+ weight: 30
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# MSSQLServerAutoscaler
+
+## What is MSSQLServerAutoscaler
+
+`MSSQLServerAutoscaler` is a Kubernetes `Custom Resource Definition` (CRD). It provides a declarative configuration for autoscaling [Microsoft SQL Server](https://learn.microsoft.com/en-us/sql/sql-server/) compute resources and storage of the database in a Kubernetes native way.
+
+## MSSQLServerAutoscaler CRD Specifications
+
+Like any official Kubernetes resource, a `MSSQLServerAutoscaler` has `TypeMeta`, `ObjectMeta`, `Spec` and `Status` sections.
+
+A sample `MSSQLServerAutoscaler` CRO for autoscaling is given below:
+
+**Sample `MSSQLServerAutoscaler` for mssqlserver database:**
+
+```yaml
+apiVersion: autoscaling.kubedb.com/v1alpha1
+kind: MSSQLServerAutoscaler
+metadata:
+ name: standalone-autoscaler
+ namespace: demo
+spec:
+ databaseRef:
+ name: mssqlserver-standalone
+ opsRequestOptions:
+ apply: IfReady
+ timeout: 5m
+ compute:
+ mssqlserver:
+ trigger: "On"
+ podLifeTimeThreshold: 5m
+ minAllowed:
+ cpu: 800m
+ memory: 2Gi
+ maxAllowed:
+ cpu: 2
+ memory: 4Gi
+ controlledResources: ["cpu", "memory"]
+ containerControlledValues: "RequestsAndLimits"
+ resourceDiffPercentage: 10
+ storage:
+ mssqlserver:
+ expansionMode: "Online"
+ trigger: "On"
+ usageThreshold: 60
+ scalingThreshold: 50
+```
+
+Here, we are going to describe the various sections of a `MSSQLServerAutoscaler` CRD.
+
+A `MSSQLServerAutoscaler` object has the following fields in the `spec` section.
+
+### spec.databaseRef
+
+`spec.databaseRef` is a required field that points to the [MSSQLServer](/docs/guides/mssqlserver/concepts/mssqlserver.md) object for which the autoscaling will be performed. This field consists of the following sub-field:
+
+- **spec.databaseRef.name :** specifies the name of the [MSSQLServer](/docs/guides/mssqlserver/concepts/mssqlserver.md) object.
+
+### spec.opsRequestOptions
+These are the options to pass to the internally created opsRequest CRO. `opsRequestOptions` has two fields, which are described in detail [here](/docs/guides/mssqlserver/concepts/opsrequest.md#spectimeout).
+
+### spec.compute
+
+`spec.compute` specifies the autoscaling configuration for the compute resources, i.e. cpu and memory, of the database. This field consists of the following sub-field:
+
+- `spec.compute.mssqlserver` indicates the desired compute autoscaling configuration for a MSSQLServer database.
+
+This has the following sub-fields:
+
+- `trigger` indicates if compute autoscaling is enabled for the database. If "On" then compute autoscaling is enabled. If "Off" then compute autoscaling is disabled.
+- `minAllowed` specifies the minimal amount of resources that will be recommended, default is no minimum.
+- `maxAllowed` specifies the maximum amount of resources that will be recommended, default is no maximum.
+- `controlledResources` specifies which type of compute resources (cpu and memory) are allowed for autoscaling. Allowed values are "cpu" and "memory".
+- `containerControlledValues` specifies which resource values should be controlled. Allowed values are "RequestsAndLimits" and "RequestsOnly".
+- `resourceDiffPercentage` specifies the minimum resource difference between the recommended value and the current value, in percentage. If the difference percentage is greater than this value, then autoscaling will be triggered.
+- `podLifeTimeThreshold` specifies the minimum pod lifetime of at least one of the pods before triggering autoscaling.
+
+### spec.storage
+
+`spec.storage` specifies the autoscaling configuration for the storage resources of the database. This field consists of the following sub-field:
+
+- `spec.storage.mssqlserver` indicates the desired storage autoscaling configuration for a MSSQLServer database.
+
+ It has the following sub-fields:
+
+- `trigger` indicates if storage autoscaling is enabled for the database. If "On" then storage autoscaling is enabled. If "Off" then storage autoscaling is disabled.
+- `usageThreshold` indicates the usage percentage threshold; if the current storage usage exceeds it, storage autoscaling will be triggered.
+- `scalingThreshold` indicates the percentage of the current storage by which the volume will be expanded.
+- `expansionMode` indicates the volume expansion mode.
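+
+For example, the storage autoscaling guide in this series uses a `spec.storage` fragment like the following; it also sets `upperBound`, which caps how large the autoscaler will grow the volume:
+
+```yaml
+  storage:
+    mssqlserver:
+      trigger: "On"
+      usageThreshold: 60
+      scalingThreshold: 50
+      expansionMode: "Offline"  # set to "Online" or "Offline" depending on what your storage class supports
+      upperBound: "100Gi"       # ceiling for the autoscaled volume size
+```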
+
+## Next Steps
+
+- Learn about [backup and restore](/docs/guides/mssqlserver/backup/overview/index.md) SQL Server using KubeStash.
+- Learn about MSSQLServer CRD [here](/docs/guides/mssqlserver/concepts/mssqlserver.md).
+- Deploy your first MSSQLServer database with KubeDB by following the guide [here](/docs/guides/mssqlserver/quickstart/quickstart.md).
diff --git a/docs/guides/mssqlserver/concepts/mssqlserver.md b/docs/guides/mssqlserver/concepts/mssqlserver.md
index 0b8d9beddd..549b569087 100644
--- a/docs/guides/mssqlserver/concepts/mssqlserver.md
+++ b/docs/guides/mssqlserver/concepts/mssqlserver.md
@@ -40,27 +40,13 @@ spec:
databases:
- agdb1
- agdb2
+ leaderElection:
+ electionTick: 10
+ heartbeatTick: 1
+ period: 300ms
+ transferLeadershipInterval: 1s
+ transferLeadershipTimeout: 1m0s
mode: AvailabilityGroup
- internalAuth:
- endpointCert:
- certificates:
- - alias: endpoint
- secretName: mssqlserver-endpoint-cert
- subject:
- organizationalUnits:
- - endpoint
- organizations:
- - kubedb
- issuerRef:
- apiGroup: cert-manager.io
- kind: Issuer
- name: mssqlserver-ca-issuer
- leaderElection:
- electionTick: 10
- heartbeatTick: 1
- period: 300ms
- transferLeadershipInterval: 1s
- transferLeadershipTimeout: 1m0s
podTemplate:
metadata:
annotations:
@@ -151,23 +137,30 @@ spec:
tls:
certificates:
- alias: server
+ emailAddresses:
+ - dev@appscode.com
secretName: mssqlserver-server-cert
subject:
organizationalUnits:
- server
organizations:
- kubedb
- emailAddresses:
- - dev@appscode.com
- alias: client
+ emailAddresses:
+ - abc@appscode.com
secretName: mssqlserver-client-cert
subject:
organizationalUnits:
- client
organizations:
- kubedb
- emailAddresses:
- - abc@appscode.com
+ - alias: endpoint
+ secretName: mssqlserver-endpoint-cert
+ subject:
+ organizationalUnits:
+ - endpoint
+ organizations:
+ - kubedb
clientTLS: true
issuerRef:
apiGroup: cert-manager.io
@@ -491,6 +484,7 @@ If you don't specify `spec.deletionPolicy` KubeDB uses `Delete` termination poli
Indicates that the database is halted and all offshoot Kubernetes resources except PVCs are deleted.
### Configuring Environment Variables for SQL Server on Linux
+You can use environment variables to configure SQL Server on Linux containers.
When deploying `Microsoft SQL Server` on Linux using `containers`, you need to specify the `product edition` through the [MSSQL_PID](https://mcr.microsoft.com/en-us/product/mssql/server/about#configuration:~:text=MSSQL_PID%20is%20the,documentation%20here.) environment variable. This variable determines which `SQL Server edition` will run inside the container. The acceptable values for `MSSQL_PID` are:
`Developer`: This will run the container using the Developer Edition (this is the default if no MSSQL_PID environment variable is supplied)
`Express`: This will run the container using the Express Edition
@@ -499,9 +493,11 @@ When deploying `Microsoft SQL Server` on Linux using `containers`, you need to s
`EnterpriseCore`: This will run the container using the Enterprise Edition Core
``: This will run the container with the edition that is associated with the PID
+`ACCEPT_EULA` confirms your acceptance of the [End-User Licensing Agreement](https://go.microsoft.com/fwlink/?linkid=857698).
+
For a complete list of environment variables that can be used, refer to the documentation [here](https://learn.microsoft.com/en-us/sql/linux/sql-server-linux-configure-environment-variables?view=sql-server-2017).
-Below is an example of how to configure the `MSSQL_PID` environment variable in the KubeDB MSSQLServer Custom Resource Definition (CRD):
+Below is an example of how to configure the `MSSQL_PID` and `ACCEPT_EULA` environment variable in the KubeDB MSSQLServer Custom Resource Definition (CRD):
```bash
metadata:
name: mssqlserver
@@ -510,10 +506,12 @@ spec:
podTemplate:
spec:
containers:
- - name: mssql
- env:
- - name: MSSQL_PID
- value: Enterprise
+ - name: mssql
+ env:
+ - name: ACCEPT_EULA
+ value: "Y"
+ - name: MSSQL_PID
+ value: Enterprise
```
In this example, the SQL Server container will run the Enterprise Edition.
diff --git a/docs/guides/mssqlserver/concepts/opsrequest.md b/docs/guides/mssqlserver/concepts/opsrequest.md
new file mode 100644
index 0000000000..cb7609e915
--- /dev/null
+++ b/docs/guides/mssqlserver/concepts/opsrequest.md
@@ -0,0 +1,271 @@
+---
+title: MSSQLServerOpsRequest CRD
+menu:
+ docs_{{ .version }}:
+ identifier: ms-concepts-ops-request
+ name: MSSQLServerOpsRequest
+ parent: ms-concepts
+ weight: 25
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# MSSQLServerOpsRequest
+
+## What is MSSQLServerOpsRequest
+
+`MSSQLServerOpsRequest` is a Kubernetes `Custom Resource Definition` (CRD). It provides declarative configuration for [Microsoft SQL Server](https://learn.microsoft.com/en-us/sql/sql-server/) administrative operations like database version updating, horizontal scaling, vertical scaling, reconfiguration, volume expansion, etc. in a Kubernetes native way.
+
+## MSSQLServerOpsRequest CRD Specifications
+
+Like any official Kubernetes resource, a `MSSQLServerOpsRequest` has `TypeMeta`, `ObjectMeta`, `Spec` and `Status` sections.
+
+Here, some sample `MSSQLServerOpsRequest` CRs for different administrative operations are given below,
+
+Sample `MSSQLServerOpsRequest` for updating the database version:
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: MSSQLServerOpsRequest
+metadata:
+ name: update-ms-version
+ namespace: demo
+spec:
+ type: UpdateVersion
+ databaseRef:
+ name: mssql-ag
+ updateVersion:
+ targetVersion: 2022-cu14
+status:
+ conditions:
+ - lastTransitionTime: "2020-06-11T09:59:05Z"
+ message: The controller has scaled/updated the MSSQLServer successfully
+ observedGeneration: 3
+ reason: OpsRequestSuccessful
+ status: "True"
+ type: Successful
+ observedGeneration: 3
+ phase: Successful
+```
+
+Sample `MSSQLServerOpsRequest` for horizontal scaling:
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: MSSQLServerOpsRequest
+metadata:
+ name: msops-horizontal-up
+ namespace: demo
+spec:
+ type: HorizontalScaling
+ databaseRef:
+ name: mssql-ag
+ horizontalScaling:
+ replicas: 5
+status:
+ conditions:
+ - lastTransitionTime: "2020-06-11T09:59:05Z"
+ message: The controller has scaled/updated the MSSQLServer successfully
+ observedGeneration: 3
+ reason: OpsRequestSuccessful
+ status: "True"
+ type: Successful
+ observedGeneration: 3
+ phase: Successful
+```
+
+Sample `MSSQLServerOpsRequest` for vertical scaling:
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: MSSQLServerOpsRequest
+metadata:
+ name: mops-vscale-standalone
+ namespace: demo
+spec:
+ type: VerticalScaling
+ databaseRef:
+ name: mssql-standalone
+ verticalScaling:
+ mssqlserver:
+ resources:
+ requests:
+ memory: "5Gi"
+ cpu: "1000m"
+ limits:
+ memory: "5Gi"
+status:
+ conditions:
+ - lastTransitionTime: "2020-06-11T09:59:05Z"
+ message: The controller has scaled/updated the MSSQLServer successfully
+ observedGeneration: 3
+ reason: OpsRequestSuccessful
+ status: "True"
+ type: Successful
+ observedGeneration: 3
+ phase: Successful
+```
+
+Here, we are going to describe the various sections of a `MSSQLServerOpsRequest` CR.
+
+### MSSQLServerOpsRequest `Spec`
+
+A `MSSQLServerOpsRequest` object has the following fields in the `spec` section.
+
+#### spec.databaseRef
+
+`spec.databaseRef` is a required field that points to the [MSSQLServer](/docs/guides/mssqlserver/concepts/mssqlserver.md) object where the administrative operations will be applied. This field consists of the following sub-field:
+
+- **spec.databaseRef.name :** specifies the name of the [MSSQLServer](/docs/guides/mssqlserver/concepts/mssqlserver.md) object.
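+
+For reference, a minimal sketch of how this field is used (the database name `mssql-ag` is taken from the samples above):
+
+```yaml
+spec:
+  databaseRef:
+    name: mssql-ag   # the MSSQLServer object, in the same namespace as the opsRequest
+```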
+
+#### spec.type
+
+`spec.type` specifies the kind of operation that will be applied to the database. Currently, the following types of operations are allowed in `MSSQLServerOpsRequest`.
+
+- `UpdateVersion`
+- `HorizontalScaling`
+- `VerticalScaling`
+- `VolumeExpansion`
+- `Restart`
+
+>You can perform only one type of operation on a single `MSSQLServerOpsRequest` CR. For example, if you want to update your database and scale up its replicas, then you have to create two separate `MSSQLServerOpsRequest` CRs. At first, you have to create a `MSSQLServerOpsRequest` for updating. Once it is completed, then you can create another `MSSQLServerOpsRequest` for scaling. You should not create two `MSSQLServerOpsRequest` CRs simultaneously.
+
+#### spec.updateVersion
+
+If you want to update your MSSQLServer version, you have to specify the `spec.updateVersion` section that specifies the desired version information. This field consists of the following sub-field:
+
+- `spec.updateVersion.targetVersion` refers to a [MSSQLServerVersion](/docs/guides/mssqlserver/concepts/catalog.md) CR that contains the MSSQLServer version information to which you want to update.
+
+>You can only update between MSSQLServer versions. KubeDB does not support downgrade for MSSQLServer.
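+
+A minimal sketch of this section, reusing the target version from the sample above:
+
+```yaml
+spec:
+  type: UpdateVersion
+  databaseRef:
+    name: mssql-ag
+  updateVersion:
+    targetVersion: 2022-cu14   # must refer to an existing MSSQLServerVersion CR
+```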
+
+#### spec.horizontalScaling
+
+If you want to scale up or scale down your MSSQLServer cluster, you have to specify the `spec.horizontalScaling` section. This field consists of the following sub-field:
+
+- `spec.horizontalScaling.replicas` indicates the desired number of replicas for your MSSQLServer cluster after scaling. For example, if your cluster currently has 4 replicas and you want to add 2 additional replicas, then you have to specify 6 in the `spec.horizontalScaling.replicas` field. Similarly, if you want to remove one member from the cluster, you have to specify 3 in the `spec.horizontalScaling.replicas` field.
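+
+Scaling the 4-replica cluster from the example above up to 6 replicas would look like the following sketch:
+
+```yaml
+spec:
+  type: HorizontalScaling
+  databaseRef:
+    name: mssql-ag
+  horizontalScaling:
+    replicas: 6   # desired total number of replicas, not the number to add
+```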
+
+#### spec.verticalScaling
+
+`spec.verticalScaling` is a required field specifying the information of `MSSQLServer` resources like `cpu`, `memory` etc. that will be scaled. This field consists of the following sub-fields:
+
+- `spec.verticalScaling.mssqlserver` indicates the `MSSQLServer` server resources. It has the below structure:
+
+```yaml
+resources:
+ requests:
+ memory: "5Gi"
+ cpu: 1
+ limits:
+ memory: "5Gi"
+ cpu: 2
+```
+
+Here, when you specify the resource request for the `MSSQLServer` container, the scheduler uses this information to decide which node to place the container of the Pod on, and when you specify a resource limit for the `MSSQLServer` container, the `kubelet` enforces those limits so that the running container is not allowed to use more of that resource than the limit you set. You can find more details [here](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/).
+
+- `spec.verticalScaling.exporter` indicates the `exporter` container resources. It has the same structure as `spec.verticalScaling.mssqlserver` and you can scale its resources the same way as the `MSSQLServer` container.
+
+>You can increase/decrease resources for both the `MSSQLServer` container and the `exporter` container with a single `MSSQLServerOpsRequest` CR, as in the sketch below.
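+
+A minimal sketch of scaling both containers in one request (the resource values here are illustrative only):
+
+```yaml
+spec:
+  type: VerticalScaling
+  databaseRef:
+    name: mssql-ag
+  verticalScaling:
+    mssqlserver:
+      resources:
+        requests:
+          cpu: "1000m"
+          memory: "5Gi"
+        limits:
+          memory: "5Gi"
+    exporter:
+      resources:
+        requests:
+          cpu: "100m"
+          memory: "128Mi"
+        limits:
+          memory: "256Mi"
+```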
+
+#### spec.volumeExpansion
+
+> To use the volume expansion feature, the storage class must support volume expansion.
+
+If you want to expand the volume of your MSSQLServer cluster, you have to specify the `spec.volumeExpansion` section. This field consists of the following sub-fields:
+
+- `spec.volumeExpansion.mode` specifies the volume expansion mode. Supported values are `Online` & `Offline`. The default is `Online`.
+- `spec.volumeExpansion.mssqlserver` indicates the desired size for the persistent volumes of a MSSQLServer.
+
+All of them refer to [Quantity](https://v1-22.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.22/#quantity-resource-core) types of Kubernetes.
+
+Example usage of this field is given below:
+
+```yaml
+spec:
+ volumeExpansion:
+ mode: "Online"
+ mssqlserver: 30Gi
+```
+
+This will expand the volume size of all the MSSQL Server nodes to 30 GB.
+
+
+#### spec.timeout
+
+As we internally retry the ops request steps multiple times, this `timeout` field helps the users to specify the timeout for those steps of the ops request (in seconds). If a step doesn't finish within the specified timeout, the ops request will result in failure.
+
+
+#### spec.apply
+
+This field controls the execution of the opsRequest depending on the database state. It has two supported values: `Always` & `IfReady`. Use `IfReady` if you want to process the opsRequest only when the database is Ready. And use `Always` if you want to process the opsRequest irrespective of the database state.
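+
+A minimal sketch showing how `timeout` and `apply` sit alongside the rest of the spec (the values are illustrative; the restart guide later in the docs uses a similar `timeout: 3m`):
+
+```yaml
+spec:
+  type: Restart
+  databaseRef:
+    name: mssql-ag
+  timeout: 3m       # per-step timeout; the request fails if a step exceeds it
+  apply: IfReady    # only run while the database is in Ready state
+```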
+
+### MSSQLServerOpsRequest `Status`
+
+`.status` describes the current state and progress of the `MSSQLServerOpsRequest` operation. It has the following fields:
+
+#### status.phase
+
+`status.phase` indicates the overall phase of the operation for this `MSSQLServerOpsRequest`. It can have the following values:
+
+| Phase | Meaning |
+|-------------|----------------------------------------------------------------------------------------|
+| Successful | KubeDB has successfully performed the operation requested in the MSSQLServerOpsRequest |
+| Progressing | KubeDB has started the execution of the applied MSSQLServerOpsRequest |
+| Failed | KubeDB has failed the operation requested in the MSSQLServerOpsRequest |
+| Denied | KubeDB has denied the operation requested in the MSSQLServerOpsRequest |
+| Skipped | KubeDB has skipped the operation requested in the MSSQLServerOpsRequest |
+
+Important: The Ops-manager operator can skip an opsRequest only if its execution has not been started yet and a newer opsRequest has been applied in the cluster. In that case, the `spec.type` of the newer opsRequest has to be the same as that of the skipped one.
+
+#### status.observedGeneration
+
+`status.observedGeneration` shows the most recent generation observed by the `MSSQLServerOpsRequest` controller.
+
+#### status.conditions
+
+`status.conditions` is an array that specifies the conditions of different steps of `MSSQLServerOpsRequest` processing. Each condition entry has the following fields:
+
+- `type` specifies the type of the condition. MSSQLServerOpsRequest has the following types of conditions:
+
+| Type | Meaning |
+|---------------------|---------------------------------------------------------------------------------------------|
+| `Progressing` | Specifies that the operation is now progressing |
+| `Successful` | Specifies such a state that the operation on the database has been successful. |
+| `HaltDatabase` | Specifies such a state that the database is halted by the operator |
+| `ResumeDatabase` | Specifies such a state that the database is resumed by the operator |
+| `Failed`            | Specifies such a state that the operation on the database has failed.                          |
+| `Scaling`           | Specifies such a state that the scaling operation on the database has started                  |
+| `VerticalScaling`   | Specifies such a state that vertical scaling has been performed successfully on the database   |
+| `HorizontalScaling` | Specifies such a state that horizontal scaling has been performed successfully on the database |
+| `Updating`          | Specifies such a state that the database updating operation has started                        |
+| `UpdateVersion`     | Specifies such a state that version updating on the database has been performed successfully   |
+
+- The `status` field is a string, with possible values `"True"`, `"False"`, and `"Unknown"`.
+ - `status` will be `"True"` if the current transition is succeeded.
+ - `status` will be `"False"` if the current transition is failed.
+ - `status` will be `"Unknown"` if the current transition is denied.
+- The `message` field is a human-readable message indicating details about the condition.
+- The `reason` field is a unique, one-word, CamelCase reason for the condition's last transition. It has the following possible values:
+
+| Reason | Meaning |
+|------------------------------------------| -------------------------------------------------------------------------------- |
+| `OpsRequestProgressingStarted` | Operator has started the OpsRequest processing |
+| `OpsRequestFailedToProgressing` | Operator has failed to start the OpsRequest processing |
+| `SuccessfullyHaltedDatabase` | Database is successfully halted by the operator |
+| `FailedToHaltDatabase`                   | The operator has failed to halt the database                                      |
+| `SuccessfullyResumedDatabase` | Database is successfully resumed to perform its usual operation |
+| `FailedToResumedDatabase`                | The database has failed to resume                                                 |
+| `DatabaseVersionUpdatingStarted` | Operator has started updating the database version |
+| `SuccessfullyUpdatedDatabaseVersion` | Operator has successfully updated the database version |
+| `FailedToUpdateDatabaseVersion` | Operator has failed to update the database version |
+| `HorizontalScalingStarted` | Operator has started the horizontal scaling |
+| `SuccessfullyPerformedHorizontalScaling` | Operator has successfully performed horizontal scaling                            |
+| `FailedToPerformHorizontalScaling`       | Operator has failed to perform horizontal scaling                                 |
+| `VerticalScalingStarted` | Operator has started the vertical scaling |
+| `SuccessfullyPerformedVerticalScaling`   | Operator has successfully performed vertical scaling                              |
+| `FailedToPerformVerticalScaling`         | Operator has failed to perform vertical scaling                                   |
+| `OpsRequestProcessedSuccessfully`        | Operator has successfully completed the operation requested by the OpsRequest CR  |
+
+- The `lastTransitionTime` field provides a timestamp for when the operation last transitioned from one state to another.
+- The `observedGeneration` shows the most recent condition transition generation observed by the controller.
diff --git a/docs/guides/mssqlserver/configuration/using-config-file.md b/docs/guides/mssqlserver/configuration/using-config-file.md
index 42e5b58327..e4d0158891 100644
--- a/docs/guides/mssqlserver/configuration/using-config-file.md
+++ b/docs/guides/mssqlserver/configuration/using-config-file.md
@@ -153,6 +153,15 @@ spec:
kind: Issuer
apiGroup: "cert-manager.io"
clientTLS: false
+ podTemplate:
+ spec:
+ containers:
+ - name: mssql
+ env:
+ - name: ACCEPT_EULA
+ value: "Y"
+ - name: MSSQL_PID
+ value: Evaluation # Change it
storageType: Durable
storage:
storageClassName: "standard"
diff --git a/docs/guides/mssqlserver/configuration/using-podtemplate.md b/docs/guides/mssqlserver/configuration/using-podtemplate.md
index 6e57a99f91..851bad2fea 100644
--- a/docs/guides/mssqlserver/configuration/using-podtemplate.md
+++ b/docs/guides/mssqlserver/configuration/using-podtemplate.md
@@ -141,6 +141,8 @@ spec:
containers:
- name: mssql
env:
+ - name: ACCEPT_EULA
+ value: "Y"
- name: MSSQL_PID
value: "Evaluation"
- name: MSSQL_MEMORY_LIMIT_MB
diff --git a/docs/guides/mssqlserver/monitoring/_index.md b/docs/guides/mssqlserver/monitoring/_index.md
new file mode 100755
index 0000000000..a7c806c4cc
--- /dev/null
+++ b/docs/guides/mssqlserver/monitoring/_index.md
@@ -0,0 +1,10 @@
+---
+title: Monitoring Microsoft SQL Server
+menu:
+ docs_{{ .version }}:
+ identifier: ms-monitoring
+ name: Monitoring
+ parent: guides-mssqlserver
+ weight: 50
+menu_name: docs_{{ .version }}
+---
diff --git a/docs/guides/mssqlserver/monitoring/overview.md b/docs/guides/mssqlserver/monitoring/overview.md
new file mode 100644
index 0000000000..47540c2e24
--- /dev/null
+++ b/docs/guides/mssqlserver/monitoring/overview.md
@@ -0,0 +1,104 @@
+---
+title: MSSQLServer Monitoring Overview
+description: MSSQLServer Monitoring Overview
+menu:
+ docs_{{ .version }}:
+ identifier: ms-monitoring-overview
+ name: Overview
+ parent: ms-monitoring
+ weight: 10
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Monitoring MSSQLServer with KubeDB
+
+KubeDB has native support for monitoring via [Prometheus](https://prometheus.io/). You can use builtin [Prometheus](https://github.com/prometheus/prometheus) scraper or [Prometheus operator](https://github.com/prometheus-operator/prometheus-operator) to monitor KubeDB managed databases. This tutorial will show you how database monitoring works with KubeDB and how to configure Database CR to enable monitoring.
+
+## Overview
+
+KubeDB uses Prometheus [exporter](https://prometheus.io/docs/instrumenting/exporters/#databases) images to export Prometheus metrics for respective databases. The following diagram shows the logical flow of database monitoring with KubeDB.
+
+
+
+
+
+When a user creates a database CR with the `spec.monitor` section configured, the KubeDB operator provisions the respective database and injects an exporter image as a sidecar into the database pod. It also creates a dedicated stats service named `{database-crd-name}-stats` for monitoring. The Prometheus server can scrape metrics using this stats service.
+
+## Configure Monitoring
+
+In order to enable monitoring for a database, you have to configure the `spec.monitor` section. KubeDB provides the following options to configure the `spec.monitor` section:
+
+| Field | Type | Uses |
+| -------------------------------------------------- | ---------- | --------------------------------------------------------------------------------------------------------------------------------------------- |
+| `spec.monitor.agent` | `Required` | Type of the monitoring agent that will be used to monitor this database. It can be `prometheus.io/builtin` or `prometheus.io/operator`. |
+| `spec.monitor.prometheus.exporter.port`            | `Optional` | Port number where the exporter sidecar will serve metrics.                                                                                     |
+| `spec.monitor.prometheus.exporter.args` | `Optional` | Arguments to pass to the exporter sidecar. |
+| `spec.monitor.prometheus.exporter.env` | `Optional` | List of environment variables to set in the exporter sidecar container. |
+| `spec.monitor.prometheus.exporter.resources` | `Optional` | Resources required by exporter sidecar container. |
+| `spec.monitor.prometheus.exporter.securityContext` | `Optional` | Security options the exporter should run with. |
+| `spec.monitor.prometheus.serviceMonitor.labels` | `Optional` | Labels for `ServiceMonitor` CR. |
+| `spec.monitor.prometheus.serviceMonitor.interval` | `Optional` | Interval at which metrics should be scraped. |
+
+## Sample Configuration
+
+A sample YAML for MSSQLServer CR with `spec.monitor` section configured to enable monitoring with [Prometheus operator](https://github.com/prometheus-operator/prometheus-operator) is shown below.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MSSQLServer
+metadata:
+ name: mssql-monitoring
+ namespace: demo
+spec:
+ version: "2022-cu12"
+ replicas: 1
+ tls:
+ issuerRef:
+ name: mssqlserver-ca-issuer
+ kind: Issuer
+ apiGroup: "cert-manager.io"
+ clientTLS: false
+ monitor:
+ agent: prometheus.io/operator
+ prometheus:
+ exporter:
+ port: 9399
+ resources:
+ limits:
+ memory: 512Mi
+ requests:
+ cpu: 200m
+ memory: 256Mi
+ securityContext:
+ allowPrivilegeEscalation: false
+ capabilities:
+ drop:
+ - ALL
+ runAsGroup: 10001
+ runAsNonRoot: true
+ runAsUser: 10001
+ seccompProfile:
+ type: RuntimeDefault
+ serviceMonitor:
+ interval: 10s
+ labels:
+ release: prometheus
+ storageType: Durable
+ storage:
+ storageClassName: standard
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
+ deletionPolicy: WipeOut
+```
+
+Here, we have specified that we are going to monitor this server using the Prometheus operator through `spec.monitor.agent: prometheus.io/operator`. KubeDB will create a `ServiceMonitor` CR in the same namespace as the database, and this `ServiceMonitor` will have the `release: prometheus` label.
+
+## Next Steps
+
+- Learn how to monitor Microsoft SQL Server with KubeDB using [Prometheus operator](/docs/guides/mssqlserver/monitoring/using-prometheus-operator.md).
diff --git a/docs/guides/mssqlserver/monitoring/using-prometheus-operator.md b/docs/guides/mssqlserver/monitoring/using-prometheus-operator.md
new file mode 100644
index 0000000000..a11033e690
--- /dev/null
+++ b/docs/guides/mssqlserver/monitoring/using-prometheus-operator.md
@@ -0,0 +1,448 @@
+---
+title: Monitor SQL Server using Prometheus Operator
+menu:
+ docs_{{ .version }}:
+ identifier: ms-monitoring-prometheus-operator
+ name: Prometheus Operator
+ parent: ms-monitoring
+ weight: 15
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Monitoring MSSQLServer Using Prometheus operator
+
+[Prometheus operator](https://github.com/prometheus-operator/prometheus-operator) provides a simple and Kubernetes native way to deploy and configure a Prometheus server. This tutorial will show you how to use the Prometheus operator to monitor MSSQLServer deployed with KubeDB.
+
+## Before You Begin
+
+- You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Now, install the KubeDB cli on your workstation and the KubeDB operator in your cluster following the steps [here](/docs/setup/README.md). Make sure to install with the helm command including `--set global.featureGates.MSSQLServer=true` to ensure MSSQLServer CRD installation.
+
+- To configure TLS/SSL in `MSSQLServer`, `KubeDB` uses `cert-manager` to issue certificates. So first you have to make sure that the cluster has `cert-manager` installed. To install `cert-manager` in your cluster, follow the steps [here](https://cert-manager.io/docs/installation/kubernetes/).
+
+- To learn how Prometheus monitoring works with KubeDB in general, please visit [here](/docs/guides/mssqlserver/monitoring/overview.md).
+
+- To keep Prometheus resources isolated, we are going to use a separate namespace called `monitoring` to deploy the respective monitoring resources. We are going to deploy the database in the `demo` namespace.
+
+ ```bash
+ $ kubectl create ns monitoring
+ namespace/monitoring created
+
+ $ kubectl create ns demo
+ namespace/demo created
+ ```
+
+- We need a [Prometheus operator](https://github.com/prometheus-operator/prometheus-operator) instance running. If you don't already have a running instance, you can deploy one using this helm chart [here](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack).
+
+
+> Note: YAML files used in this tutorial are stored in [docs/examples/mssqlserver/monitoring](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/mssqlserver/monitoring) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Find out required labels for ServiceMonitor
+
+We need to know the labels used to select `ServiceMonitor` objects by the `Prometheus` operator. We are going to provide these labels in the `spec.monitor.prometheus.serviceMonitor.labels` field of the MSSQLServer CR so that KubeDB creates the `ServiceMonitor` object accordingly.
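+
+For reference, these labels end up in a section like the following sketch; the `release: prometheus` value is only an assumption until we confirm it from the Prometheus CR below:
+
+```yaml
+spec:
+  monitor:
+    agent: prometheus.io/operator
+    prometheus:
+      serviceMonitor:
+        labels:
+          release: prometheus   # must match the Prometheus serviceMonitorSelector
+```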
+
+At first, let's find out the available Prometheus server in our cluster.
+
+```bash
+$ kubectl get prometheus --all-namespaces
+NAMESPACE NAME VERSION DESIRED READY RECONCILED AVAILABLE AGE
+monitoring prometheus-kube-prometheus-prometheus v2.54.1 1 1 True True 16d
+```
+
+> If you don't have any Prometheus server running in your cluster, deploy one following the guide specified in **Before You Begin** section.
+
+Now, let's view the YAML of the available Prometheus server `prometheus-kube-prometheus-prometheus` in `monitoring` namespace.
+
+```bash
+$ kubectl get prometheus -n monitoring prometheus-kube-prometheus-prometheus -oyaml
+```
+```yaml
+apiVersion: monitoring.coreos.com/v1
+kind: Prometheus
+metadata:
+ annotations:
+ meta.helm.sh/release-name: prometheus
+ meta.helm.sh/release-namespace: monitoring
+ creationTimestamp: "2024-10-14T10:14:36Z"
+ generation: 1
+ labels:
+ app: kube-prometheus-stack-prometheus
+ app.kubernetes.io/instance: prometheus
+ app.kubernetes.io/managed-by: Helm
+ app.kubernetes.io/part-of: kube-prometheus-stack
+ app.kubernetes.io/version: 65.2.0
+ chart: kube-prometheus-stack-65.2.0
+ heritage: Helm
+ release: prometheus
+ name: prometheus-kube-prometheus-prometheus
+ namespace: monitoring
+ resourceVersion: "1004097"
+ uid: b7879d3e-e4bb-4425-8d78-f917561d95f7
+spec:
+ alerting:
+ alertmanagers:
+ - apiVersion: v2
+ name: prometheus-kube-prometheus-alertmanager
+ namespace: monitoring
+ pathPrefix: /
+ port: http-web
+ automountServiceAccountToken: true
+ enableAdminAPI: false
+ evaluationInterval: 30s
+ externalUrl: http://prometheus-kube-prometheus-prometheus.monitoring:9090
+ hostNetwork: false
+ image: quay.io/prometheus/prometheus:v2.54.1
+ listenLocal: false
+ logFormat: logfmt
+ logLevel: info
+ paused: false
+ podMonitorNamespaceSelector: {}
+ podMonitorSelector:
+ matchLabels:
+ release: prometheus
+ portName: http-web
+ probeNamespaceSelector: {}
+ probeSelector:
+ matchLabels:
+ release: prometheus
+ replicas: 1
+ retention: 10d
+ routePrefix: /
+ ruleNamespaceSelector: {}
+ ruleSelector:
+ matchLabels:
+ release: prometheus
+ scrapeConfigNamespaceSelector: {}
+ scrapeConfigSelector:
+ matchLabels:
+ release: prometheus
+ scrapeInterval: 30s
+ securityContext:
+ fsGroup: 2000
+ runAsGroup: 2000
+ runAsNonRoot: true
+ runAsUser: 1000
+ seccompProfile:
+ type: RuntimeDefault
+ serviceAccountName: prometheus-kube-prometheus-prometheus
+ serviceMonitorNamespaceSelector: {}
+ serviceMonitorSelector:
+ matchLabels:
+ release: prometheus
+ shards: 1
+ tsdb:
+ outOfOrderTimeWindow: 0s
+ version: v2.54.1
+ walCompression: true
+status:
+ availableReplicas: 1
+ conditions:
+ - lastTransitionTime: "2024-10-31T07:38:36Z"
+ message: ""
+ observedGeneration: 1
+ reason: ""
+ status: "True"
+ type: Available
+ - lastTransitionTime: "2024-10-31T07:38:36Z"
+ message: ""
+ observedGeneration: 1
+ reason: ""
+ status: "True"
+ type: Reconciled
+ paused: false
+ replicas: 1
+ selector: app.kubernetes.io/instance=prometheus-kube-prometheus-prometheus,app.kubernetes.io/managed-by=prometheus-operator,app.kubernetes.io/name=prometheus,operator.prometheus.io/name=prometheus-kube-prometheus-prometheus,prometheus=prometheus-kube-prometheus-prometheus
+ shardStatuses:
+ - availableReplicas: 1
+ replicas: 1
+ shardID: "0"
+ unavailableReplicas: 0
+ updatedReplicas: 1
+ shards: 1
+ unavailableReplicas: 0
+ updatedReplicas: 1
+```
+
+Notice the `spec.serviceMonitorSelector` section. Here, the `release: prometheus` label is used to select `ServiceMonitor` CRs. So, we are going to use this label in the `spec.monitor.prometheus.serviceMonitor.labels` field of the MSSQLServer CR.
+
+## Deploy MSSQLServer with Monitoring Enabled
+
+First, an issuer needs to be created, even if TLS is not enabled for SQL Server. The issuer will be used to configure the TLS-enabled Wal-G proxy server, which is required for the SQL Server backup and restore operations.
+
+### Create Issuer/ClusterIssuer
+
+Now, we are going to create an example `Issuer` that will be used throughout the duration of this tutorial. Alternatively, you can follow this [cert-manager tutorial](https://cert-manager.io/docs/configuration/ca/) to create your own `Issuer`. By following the below steps, we are going to create our desired issuer,
+
+- Start off by generating our ca-certificates using openssl,
+```bash
+openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ./ca.key -out ./ca.crt -subj "/CN=MSSQLServer/O=kubedb"
+```
+- Create a secret using the certificate files we have just generated,
+```bash
+$ kubectl create secret tls mssqlserver-ca --cert=ca.crt --key=ca.key --namespace=demo
+secret/mssqlserver-ca created
+```
+Now, we are going to create an `Issuer` using the `mssqlserver-ca` secret that contains the ca-certificate we have just created. Below is the YAML of the `Issuer` CR that we are going to create,
+
+```yaml
+apiVersion: cert-manager.io/v1
+kind: Issuer
+metadata:
+ name: mssqlserver-ca-issuer
+ namespace: demo
+spec:
+ ca:
+ secretName: mssqlserver-ca
+```
+
+Let’s create the `Issuer` CR we have shown above,
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mssqlserver/ag-cluster/mssqlserver-ca-issuer.yaml
+issuer.cert-manager.io/mssqlserver-ca-issuer created
+```
+
+Now, let's deploy an MSSQLServer with monitoring enabled. Below is the MSSQLServer object that we are going to create.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MSSQLServer
+metadata:
+ name: mssql-monitoring
+ namespace: demo
+spec:
+ version: "2022-cu12"
+ replicas: 1
+ tls:
+ issuerRef:
+ name: mssqlserver-ca-issuer
+ kind: Issuer
+ apiGroup: "cert-manager.io"
+ clientTLS: false
+ podTemplate:
+ spec:
+ containers:
+ - name: mssql
+ env:
+ - name: ACCEPT_EULA
+ value: "Y"
+ - name: MSSQL_PID
+ value: Evaluation # Change it
+ monitor:
+ agent: prometheus.io/operator
+ prometheus:
+ exporter:
+ port: 9399
+ resources:
+ limits:
+ memory: 512Mi
+ requests:
+ cpu: 200m
+ memory: 256Mi
+ securityContext:
+ allowPrivilegeEscalation: false
+ capabilities:
+ drop:
+ - ALL
+ runAsGroup: 10001
+ runAsNonRoot: true
+ runAsUser: 10001
+ seccompProfile:
+ type: RuntimeDefault
+ serviceMonitor:
+ interval: 10s
+ labels:
+ release: prometheus
+ storageType: Durable
+ storage:
+ storageClassName: standard
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
+ deletionPolicy: WipeOut
+```
+
+Here,
+
+- `monitor.agent: prometheus.io/operator` indicates that we are going to monitor this server using Prometheus operator.
+
+- `monitor.prometheus.serviceMonitor.labels` specifies that KubeDB should create `ServiceMonitor` with these labels.
+
+- `monitor.prometheus.serviceMonitor.interval` indicates that the Prometheus server should scrape metrics from this database with a 10-second interval.
+
+Let's create the MSSQLServer object that we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mssqlserver/monitoring/mssql-monitoring.yaml
+mssqlserver.kubedb.com/mssql-monitoring created
+```
+
+Now, wait for the database to go into `Ready` state.
+
+```bash
+$ kubectl get ms -n demo mssql-monitoring
+NAME VERSION STATUS AGE
+mssql-monitoring 2022-cu12 Ready 108m
+```
+
+KubeDB will create a separate stats service named `{mssqlserver cr name}-stats` for monitoring purposes.
+
+```bash
+$ kubectl get svc -n demo --selector="app.kubernetes.io/instance=mssql-monitoring"
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+mssql-monitoring ClusterIP 10.96.225.130 1433/TCP 108m
+mssql-monitoring-pods ClusterIP None 1433/TCP 108m
+mssql-monitoring-stats ClusterIP 10.96.147.93 9399/TCP 108m
+```
+
+Here, the `mssql-monitoring-stats` service has been created for monitoring purposes.
+
+Let's describe this stats service.
+
+
+```bash
+$ kubectl describe svc -n demo mssql-monitoring-stats
+```
+```yaml
+Name: mssql-monitoring-stats
+Namespace: demo
+Labels: app.kubernetes.io/component=database
+ app.kubernetes.io/instance=mssql-monitoring
+ app.kubernetes.io/managed-by=kubedb.com
+ app.kubernetes.io/name=mssqlservers.kubedb.com
+ kubedb.com/role=stats
+Annotations: monitoring.appscode.com/agent: prometheus.io/operator
+Selector: app.kubernetes.io/instance=mssql-monitoring,app.kubernetes.io/managed-by=kubedb.com,app.kubernetes.io/name=mssqlservers.kubedb.com
+Type: ClusterIP
+IP Family Policy: SingleStack
+IP Families: IPv4
+IP: 10.96.147.93
+IPs: 10.96.147.93
+Port: metrics 9399/TCP
+TargetPort: metrics/TCP
+Endpoints: 10.244.0.47:9399
+Session Affinity: None
+Events:
+```
+
+Notice the `Labels` and `Port` fields. The `ServiceMonitor` will use this information to target its endpoints.
+
+KubeDB will also create a `ServiceMonitor` CR in the `demo` namespace that selects the endpoints of the `mssql-monitoring-stats` service. Verify that the `ServiceMonitor` CR has been created.
+
+```bash
+$ kubectl get servicemonitor -n demo
+NAME AGE
+mssql-monitoring-stats 110m
+```
+
+Let's verify that the `ServiceMonitor` has the label that we had specified in `spec.monitor` section of MSSQLServer CR.
+
+```bash
+$ kubectl get servicemonitor -n demo mssql-monitoring-stats -o yaml
+```
+
+```yaml
+apiVersion: monitoring.coreos.com/v1
+kind: ServiceMonitor
+metadata:
+ creationTimestamp: "2024-10-31T07:38:36Z"
+ generation: 1
+ labels:
+ app.kubernetes.io/component: database
+ app.kubernetes.io/instance: mssql-monitoring
+ app.kubernetes.io/managed-by: kubedb.com
+ app.kubernetes.io/name: mssqlservers.kubedb.com
+ release: prometheus
+ name: mssql-monitoring-stats
+ namespace: demo
+ ownerReferences:
+ - apiVersion: v1
+ blockOwnerDeletion: true
+ controller: true
+ kind: Service
+ name: mssql-monitoring-stats
+ uid: 99193679-301b-41fd-aae5-a732b3070d19
+ resourceVersion: "1004080"
+ uid: 87635ad4-dfb2-4544-89af-e48b40783205
+spec:
+ endpoints:
+ - honorLabels: true
+ interval: 10s
+ path: /metrics
+ port: metrics
+ namespaceSelector:
+ matchNames:
+ - demo
+ selector:
+ matchLabels:
+ app.kubernetes.io/component: database
+ app.kubernetes.io/instance: mssql-monitoring
+ app.kubernetes.io/managed-by: kubedb.com
+ app.kubernetes.io/name: mssqlservers.kubedb.com
+ kubedb.com/role: stats
+```
+
+Notice that the `ServiceMonitor` has label `release: prometheus` that we had specified in MSSQLServer CR.
+
+Also notice that the `ServiceMonitor` has a selector which matches the labels we have seen in the `mssql-monitoring-stats` service. It also targets the `metrics` port that we have seen in the stats service.
+
+## Verify Monitoring Metrics
+
+At first, let's find out the respective Prometheus pod for `prometheus-kube-prometheus-prometheus` Prometheus server.
+
+```bash
+$ kubectl get pod -n monitoring -l=app.kubernetes.io/name=prometheus
+NAME READY STATUS RESTARTS AGE
+prometheus-prometheus-kube-prometheus-prometheus-0 2/2 Running 1 16d
+```
+
+The Prometheus server is listening on port `9090` of the `prometheus-prometheus-kube-prometheus-prometheus-0` pod. We are going to use [port forwarding](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) to access the Prometheus dashboard.
+
+Run the following command on a separate terminal to forward port 9090 of the `prometheus-prometheus-kube-prometheus-prometheus-0` pod,
+
+```bash
+$ kubectl port-forward -n monitoring prometheus-prometheus-kube-prometheus-prometheus-0 9090
+Forwarding from 127.0.0.1:9090 -> 9090
+Forwarding from [::1]:9090 -> 9090
+```
+
+Now, we can access the dashboard at `localhost:9090`. Open [http://localhost:9090](http://localhost:9090) in your browser. You should see `metrics` endpoint of `mssql-monitoring-stats` service as one of the targets.
+
+
+
+
+
+Check the `endpoint` and `service` labels. It verifies that the target is our expected database. Now, you can view the collected metrics and create a graph from the homepage of this Prometheus dashboard. You can also use this Prometheus server as a data source for [Grafana](https://grafana.com/) and create beautiful dashboards with collected metrics.
+
+## Grafana Dashboards
+
+There are three dashboards to monitor Microsoft SQL Server Databases managed by KubeDB.
+
+- KubeDB / MSSQLServer / Summary: Shows overall summary of Microsoft SQL Server instance.
+- KubeDB / MSSQLServer / Pod: Shows individual pod-level information.
+- KubeDB / MSSQLServer / Database: Shows Microsoft SQL Server internal metrics for an instance.
+> Note: These dashboards are developed in Grafana version 7.5.5
+
+
+To use KubeDB `Grafana Dashboards` to monitor Microsoft SQL Server databases managed by `KubeDB`, check out [mssqlserver-dashboards](https://github.com/ops-center/grafana-dashboards/tree/master/mssqlserver).
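+
+If you run your own Grafana instance, one way to wire it to this Prometheus server is a datasource provisioning file. Below is a minimal sketch; the file path and the in-cluster service URL are assumptions based on the kube-prometheus-stack release name used in this guide:
+
+```yaml
+# e.g. /etc/grafana/provisioning/datasources/prometheus.yaml (assumed path)
+apiVersion: 1
+datasources:
+  - name: Prometheus
+    type: prometheus
+    access: proxy
+    url: http://prometheus-kube-prometheus-prometheus.monitoring.svc:9090
+    isDefault: true
+```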
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run the following commands:
+
+```bash
+kubectl delete -n demo ms/mssql-monitoring
+kubectl delete ns demo
+
+helm uninstall prometheus -n monitoring
+kubectl delete ns monitoring
+```
+
+## Next Steps
+- Learn about [backup and restore](/docs/guides/mssqlserver/backup/overview/index.md) SQL Server using KubeStash.
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).
diff --git a/docs/guides/mssqlserver/pitr/archiver.md b/docs/guides/mssqlserver/pitr/archiver.md
index 5df20f4170..abe560df34 100644
--- a/docs/guides/mssqlserver/pitr/archiver.md
+++ b/docs/guides/mssqlserver/pitr/archiver.md
@@ -261,18 +261,21 @@ spec:
availabilityGroup:
databases:
- demo
- internalAuth:
- endpointCert:
- issuerRef:
- apiGroup: cert-manager.io
- kind: Issuer
- name: mssqlserver-ca-issuer
tls:
issuerRef:
apiGroup: cert-manager.io
kind: Issuer
name: mssqlserver-ca-issuer
clientTLS: false
+ podTemplate:
+ spec:
+ containers:
+ - name: mssql
+ env:
+ - name: ACCEPT_EULA
+ value: "Y"
+ - name: MSSQL_PID
+ value: Evaluation # Change it
storageType: Durable
storage:
accessModes:
@@ -476,18 +479,21 @@ spec:
replicas: 2
topology:
mode: AvailabilityGroup
- internalAuth:
- endpointCert:
- issuerRef:
- apiGroup: cert-manager.io
- name: mssqlserver-ca-issuer
- kind: Issuer
tls:
issuerRef:
name: mssqlserver-ca-issuer
kind: Issuer
apiGroup: cert-manager.io
clientTLS: false
+ podTemplate:
+ spec:
+ containers:
+ - name: mssql
+ env:
+ - name: ACCEPT_EULA
+ value: "Y"
+ - name: MSSQL_PID
+ value: Evaluation # Change it
storageType: Durable
storage:
accessModes:
diff --git a/docs/guides/mssqlserver/pitr/examples/restored-mssqlserver-ag.yaml b/docs/guides/mssqlserver/pitr/examples/restored-mssqlserver-ag.yaml
index 266de6e349..5439c7e1bb 100644
--- a/docs/guides/mssqlserver/pitr/examples/restored-mssqlserver-ag.yaml
+++ b/docs/guides/mssqlserver/pitr/examples/restored-mssqlserver-ag.yaml
@@ -19,12 +19,6 @@ spec:
replicas: 2
topology:
mode: AvailabilityGroup
- internalAuth:
- endpointCert:
- issuerRef:
- apiGroup: cert-manager.io
- name: mssqlserver-ca-issuer
- kind: Issuer
tls:
issuerRef:
name: mssqlserver-ca-issuer
@@ -32,6 +26,15 @@ spec:
apiGroup: cert-manager.io
clientTLS: false
storageType: Durable
+ podTemplate:
+ spec:
+ containers:
+ - name: mssql
+ env:
+ - name: ACCEPT_EULA
+ value: "Y"
+ - name: MSSQL_PID
+ value: Evaluation # Change it
storage:
accessModes:
- ReadWriteOnce
diff --git a/docs/guides/mssqlserver/pitr/examples/sample-mssqlserver-ag.yaml b/docs/guides/mssqlserver/pitr/examples/sample-mssqlserver-ag.yaml
index 006e21f008..8af7bffa95 100644
--- a/docs/guides/mssqlserver/pitr/examples/sample-mssqlserver-ag.yaml
+++ b/docs/guides/mssqlserver/pitr/examples/sample-mssqlserver-ag.yaml
@@ -19,18 +19,21 @@ spec:
availabilityGroup:
databases:
- demo
- internalAuth:
- endpointCert:
- issuerRef:
- apiGroup: cert-manager.io
- kind: Issuer
- name: mssqlserver-ca-issuer
tls:
issuerRef:
apiGroup: cert-manager.io
kind: Issuer
name: mssqlserver-ca-issuer
clientTLS: false
+ podTemplate:
+ spec:
+ containers:
+ - name: mssql
+ env:
+ - name: ACCEPT_EULA
+ value: "Y"
+ - name: MSSQL_PID
+ value: Evaluation # Change it
storageType: Durable
storage:
accessModes:
diff --git a/docs/guides/mssqlserver/quickstart/quickstart.md b/docs/guides/mssqlserver/quickstart/quickstart.md
index 1ce678b8cd..85fcbfb5f7 100644
--- a/docs/guides/mssqlserver/quickstart/quickstart.md
+++ b/docs/guides/mssqlserver/quickstart/quickstart.md
@@ -97,6 +97,7 @@ issuer.cert-manager.io/mssqlserver-ca-issuer created
```
### Configuring Environment Variables for SQL Server on Linux
+You can use environment variables to configure SQL Server on Linux containers.
When deploying `Microsoft SQL Server` on Linux using `containers`, you need to specify the `product edition` through the [MSSQL_PID](https://mcr.microsoft.com/en-us/product/mssql/server/about#configuration:~:text=MSSQL_PID%20is%20the,documentation%20here.) environment variable. This variable determines which `SQL Server edition` will run inside the container. The acceptable values for `MSSQL_PID` are:
`Developer`: This will run the container using the Developer Edition (this is the default if no MSSQL_PID environment variable is supplied)
`Express`: This will run the container using the Express Edition
@@ -105,9 +106,11 @@ When deploying `Microsoft SQL Server` on Linux using `containers`, you need to s
`EnterpriseCore`: This will run the container using the Enterprise Edition Core
``: This will run the container with the edition that is associated with the PID
+`ACCEPT_EULA` confirms your acceptance of the [End-User Licensing Agreement](https://go.microsoft.com/fwlink/?linkid=857698).
+
For a complete list of environment variables that can be used, refer to the documentation [here](https://learn.microsoft.com/en-us/sql/linux/sql-server-linux-configure-environment-variables?view=sql-server-2017).
-Below is an example of how to configure the `MSSQL_PID` environment variable in the KubeDB MSSQLServer Custom Resource Definition (CRD):
+Below is an example of how to configure the `MSSQL_PID` and `ACCEPT_EULA` environment variables in the KubeDB MSSQLServer Custom Resource Definition (CRD):
```bash
metadata:
name: mssqlserver
@@ -116,10 +119,12 @@ spec:
podTemplate:
spec:
containers:
- - name: mssql
- env:
- - name: MSSQL_PID
- value: Enterprise
+ - name: mssql
+ env:
+ - name: ACCEPT_EULA
+ value: "Y"
+ - name: MSSQL_PID
+ value: Enterprise
```
In this example, the SQL Server container will run the Enterprise Edition.
@@ -146,6 +151,15 @@ spec:
kind: Issuer
apiGroup: "cert-manager.io"
clientTLS: false
+ podTemplate:
+ spec:
+ containers:
+ - name: mssql
+ env:
+ - name: ACCEPT_EULA
+ value: "Y"
+ - name: MSSQL_PID
+ value: Evaluation # Change it
storage:
storageClassName: "standard"
accessModes:
@@ -220,8 +234,6 @@ metadata:
spec:
authSecret:
name: mssqlserver-quickstart-auth
- coordinator:
- resources: {}
deletionPolicy: WipeOut
healthChecker:
failureThreshold: 1
diff --git a/docs/guides/mssqlserver/restart/_index.md b/docs/guides/mssqlserver/restart/_index.md
new file mode 100644
index 0000000000..cbd526b865
--- /dev/null
+++ b/docs/guides/mssqlserver/restart/_index.md
@@ -0,0 +1,10 @@
+---
+title: Restart MSSQLServer
+menu:
+ docs_{{ .version }}:
+ identifier: ms-restart
+ name: Restart
+ parent: guides-mssqlserver
+ weight: 46
+menu_name: docs_{{ .version }}
+---
diff --git a/docs/guides/mssqlserver/restart/restart.md b/docs/guides/mssqlserver/restart/restart.md
new file mode 100644
index 0000000000..3912fa9c49
--- /dev/null
+++ b/docs/guides/mssqlserver/restart/restart.md
@@ -0,0 +1,279 @@
+---
+title: Restart MSSQLServer
+menu:
+ docs_{{ .version }}:
+ identifier: ms-restart-guide
+ name: Restart MSSQLServer
+ parent: ms-restart
+ weight: 10
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Restart MSSQLServer
+
+KubeDB supports restarting the MSSQLServer via a MSSQLServerOpsRequest. Restarting is useful if some pods get stuck in some phase, or they are not working correctly. This tutorial will show you how to use that.
+
+## Before You Begin
+
+- You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Now, install the KubeDB cli on your workstation and the KubeDB operator in your cluster following the steps [here](/docs/setup/README.md). Make sure to install with the helm command including `--set global.featureGates.MSSQLServer=true` to ensure MSSQLServer CRD installation.
+
+- To configure TLS/SSL in `MSSQLServer`, `KubeDB` uses `cert-manager` to issue certificates. So first you have to make sure that the cluster has `cert-manager` installed. To install `cert-manager` in your cluster, follow the steps [here](https://cert-manager.io/docs/installation/kubernetes/).
+
+- To keep things isolated, this tutorial uses a separate namespace called `demo` throughout.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+> Note: YAML files used in this tutorial are stored in [docs/examples/mssqlserver](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/mssqlserver) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Deploy MSSQLServer
+
+First, an issuer needs to be created, even if TLS is not enabled for SQL Server. The issuer will be used to configure the TLS-enabled Wal-G proxy server, which is required for the SQL Server backup and restore operations.
+
+### Create Issuer/ClusterIssuer
+
+Now, we are going to create an example `Issuer` that will be used throughout the duration of this tutorial. Alternatively, you can follow this [cert-manager tutorial](https://cert-manager.io/docs/configuration/ca/) to create your own `Issuer`. By following the below steps, we are going to create our desired issuer,
+
+- Start off by generating our ca-certificates using openssl,
+```bash
+openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ./ca.key -out ./ca.crt -subj "/CN=MSSQLServer/O=kubedb"
+```
+- Create a secret using the certificate files we have just generated,
+```bash
+$ kubectl create secret tls mssqlserver-ca --cert=ca.crt --key=ca.key --namespace=demo
+secret/mssqlserver-ca created
+```
+Now, we are going to create an `Issuer` using the `mssqlserver-ca` secret that contains the ca-certificate we have just created. Below is the YAML of the `Issuer` CR that we are going to create,
+
+```yaml
+apiVersion: cert-manager.io/v1
+kind: Issuer
+metadata:
+ name: mssqlserver-ca-issuer
+ namespace: demo
+spec:
+ ca:
+ secretName: mssqlserver-ca
+```
+
+Let’s create the `Issuer` CR we have shown above,
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mssqlserver/ag-cluster/mssqlserver-ca-issuer.yaml
+issuer.cert-manager.io/mssqlserver-ca-issuer created
+```
+
+In this section, we are going to deploy a MSSQLServer database using KubeDB.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MSSQLServer
+metadata:
+ name: mssqlserver-ag-cluster
+ namespace: demo
+spec:
+ version: "2022-cu12"
+ replicas: 3
+ topology:
+ mode: AvailabilityGroup
+ availabilityGroup:
+ databases:
+ - agdb1
+ - agdb2
+ tls:
+ issuerRef:
+ name: mssqlserver-ca-issuer
+ kind: Issuer
+ apiGroup: "cert-manager.io"
+ clientTLS: false
+ podTemplate:
+ spec:
+ containers:
+ - name: mssql
+ env:
+ - name: ACCEPT_EULA
+ value: "Y"
+ - name: MSSQL_PID
+ value: Evaluation # Change it
+ storageType: Durable
+ storage:
+ storageClassName: "standard"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
+ deletionPolicy: WipeOut
+```
+
+Let's create the `MSSQLServer` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mssqlserver/restart/mssqlserver-ag-cluster.yaml
+mssqlserver.kubedb.com/mssqlserver-ag-cluster created
+```
+
+Check that the database has been provisioned successfully,
+```bash
+$ kubectl get ms -n demo mssqlserver-ag-cluster
+NAME VERSION STATUS AGE
+mssqlserver-ag-cluster 2022-cu12 Ready 4m
+```
+
+
+## Apply Restart opsRequest
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: MSSQLServerOpsRequest
+metadata:
+ name: msops-restart
+ namespace: demo
+spec:
+ type: Restart
+ databaseRef:
+ name: mssqlserver-ag-cluster
+ timeout: 3m
+ apply: Always
+```
+
+- `spec.type` specifies the Type of the ops Request
+- `spec.databaseRef` holds the name of the MSSQLServer database. The database should be available in the same namespace as the opsRequest.
+- The meaning of `spec.timeout` & `spec.apply` fields can be found [here](/docs/guides/mssqlserver/concepts/opsrequest.md)
+
+> Note: The method of restarting a standalone or cluster mode database is exactly the same as above. All you need is to specify the corresponding MSSQLServer name in the `spec.databaseRef.name` section.
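+
+For instance, a sketch of the same request pointed at a different server (both names below are placeholders):
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: MSSQLServerOpsRequest
+metadata:
+  name: msops-restart-standalone   # placeholder name
+  namespace: demo
+spec:
+  type: Restart
+  databaseRef:
+    name: mssql-standalone         # name of the MSSQLServer you want to restart
+  timeout: 3m
+  apply: Always
+```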
+
+Let's create the `msops-restart` MSSQLServerOpsRequest CR shown earlier,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mssqlserver/restart/msops-restart.yaml
+mssqlserveropsrequest.ops.kubedb.com/msops-restart created
+```
+
+Now the Ops-manager operator will restart the secondary pods first, and the primary pod of the database last.
+
+```shell
+$ kubectl get msops -n demo msops-restart
+NAME TYPE STATUS AGE
+msops-restart Restart Successful 5m23s
+
+$ kubectl get msops -n demo msops-restart -oyaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: MSSQLServerOpsRequest
+metadata:
+ annotations:
+ kubectl.kubernetes.io/last-applied-configuration: |
+ {"apiVersion":"ops.kubedb.com/v1alpha1","kind":"MSSQLServerOpsRequest","metadata":{"annotations":{},"name":"msops-restart","namespace":"demo"},"spec":{"apply":"Always","databaseRef":{"name":"mssqlserver-ag-cluster"},"timeout":"3m","type":"Restart"}}
+ creationTimestamp: "2024-10-25T06:58:21Z"
+ generation: 1
+ name: msops-restart
+ namespace: demo
+ resourceVersion: "771141"
+ uid: 9e531521-c369-4ce4-983f-a3dafd90cb8a
+spec:
+ apply: Always
+ databaseRef:
+ name: mssqlserver-ag-cluster
+ timeout: 3m
+ type: Restart
+status:
+ conditions:
+ - lastTransitionTime: "2024-10-25T06:58:21Z"
+ message: MSSQLServerOpsRequest has started to restart MSSQLServer nodes
+ observedGeneration: 1
+ reason: Restart
+ status: "True"
+ type: Restart
+ - lastTransitionTime: "2024-10-25T06:58:45Z"
+ message: get pod; ConditionStatus:True; PodName:mssqlserver-ag-cluster-0
+ observedGeneration: 1
+ status: "True"
+ type: GetPod--mssqlserver-ag-cluster-0
+ - lastTransitionTime: "2024-10-25T06:58:45Z"
+ message: evict pod; ConditionStatus:True; PodName:mssqlserver-ag-cluster-0
+ observedGeneration: 1
+ status: "True"
+ type: EvictPod--mssqlserver-ag-cluster-0
+ - lastTransitionTime: "2024-10-25T06:59:20Z"
+ message: check pod running; ConditionStatus:True; PodName:mssqlserver-ag-cluster-0
+ observedGeneration: 1
+ status: "True"
+ type: CheckPodRunning--mssqlserver-ag-cluster-0
+ - lastTransitionTime: "2024-10-25T06:59:25Z"
+ message: get pod; ConditionStatus:True; PodName:mssqlserver-ag-cluster-1
+ observedGeneration: 1
+ status: "True"
+ type: GetPod--mssqlserver-ag-cluster-1
+ - lastTransitionTime: "2024-10-25T06:59:25Z"
+ message: evict pod; ConditionStatus:True; PodName:mssqlserver-ag-cluster-1
+ observedGeneration: 1
+ status: "True"
+ type: EvictPod--mssqlserver-ag-cluster-1
+ - lastTransitionTime: "2024-10-25T07:00:00Z"
+ message: check pod running; ConditionStatus:True; PodName:mssqlserver-ag-cluster-1
+ observedGeneration: 1
+ status: "True"
+ type: CheckPodRunning--mssqlserver-ag-cluster-1
+ - lastTransitionTime: "2024-10-25T07:00:05Z"
+ message: get pod; ConditionStatus:True; PodName:mssqlserver-ag-cluster-2
+ observedGeneration: 1
+ status: "True"
+ type: GetPod--mssqlserver-ag-cluster-2
+ - lastTransitionTime: "2024-10-25T07:00:05Z"
+ message: evict pod; ConditionStatus:True; PodName:mssqlserver-ag-cluster-2
+ observedGeneration: 1
+ status: "True"
+ type: EvictPod--mssqlserver-ag-cluster-2
+ - lastTransitionTime: "2024-10-25T07:00:40Z"
+ message: check pod running; ConditionStatus:True; PodName:mssqlserver-ag-cluster-2
+ observedGeneration: 1
+ status: "True"
+ type: CheckPodRunning--mssqlserver-ag-cluster-2
+ - lastTransitionTime: "2024-10-25T07:00:45Z"
+ message: Successfully restarted MSSQLServer nodes
+ observedGeneration: 1
+ reason: RestartNodes
+ status: "True"
+ type: RestartNodes
+ - lastTransitionTime: "2024-10-25T07:00:45Z"
+ message: Controller has successfully restart the MSSQLServer replicas
+ observedGeneration: 1
+ reason: Successful
+ status: "True"
+ type: Successful
+ observedGeneration: 1
+ phase: Successful
+```
+
+We can see that the database is ready after restarting the pods,
+```bash
+$ kubectl get ms -n demo mssqlserver-ag-cluster
+NAME VERSION STATUS AGE
+mssqlserver-ag-cluster 2022-cu12 Ready 14m
+```
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete mssqlserveropsrequest -n demo msops-restart
+kubectl delete mssqlserver -n demo mssqlserver-ag-cluster
+kubectl delete issuer -n demo mssqlserver-ca-issuer
+kubectl delete secret -n demo mssqlserver-ca
+kubectl delete ns demo
+```
+
+## Next Steps
+
+- Learn about [backup and restore](/docs/guides/mssqlserver/backup/overview/index.md) MSSQLServer database using KubeStash.
+- Want to set up MSSQLServer cluster? Check how to [Configure SQL Server Availability Group Cluster](/docs/guides/mssqlserver/clustering/ag_cluster.md)
+- Detail concepts of [MSSQLServer Object](/docs/guides/mssqlserver/concepts/mssqlserver.md).
+
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).
diff --git a/docs/guides/mssqlserver/scaling/_index.md b/docs/guides/mssqlserver/scaling/_index.md
new file mode 100644
index 0000000000..fcf648eaf6
--- /dev/null
+++ b/docs/guides/mssqlserver/scaling/_index.md
@@ -0,0 +1,10 @@
+---
+title: Scaling Microsoft SQL Server
+menu:
+ docs_{{ .version }}:
+ identifier: ms-scaling
+ name: Scaling Microsoft SQL Server
+ parent: guides-mssqlserver
+ weight: 43
+menu_name: docs_{{ .version }}
+---
diff --git a/docs/guides/mssqlserver/scaling/horizontal-scaling/_index.md b/docs/guides/mssqlserver/scaling/horizontal-scaling/_index.md
new file mode 100644
index 0000000000..d6c1d24e0f
--- /dev/null
+++ b/docs/guides/mssqlserver/scaling/horizontal-scaling/_index.md
@@ -0,0 +1,10 @@
+---
+title: Horizontal Scaling
+menu:
+ docs_{{ .version }}:
+ identifier: ms-scaling-horizontal
+ name: Horizontal Scaling
+ parent: ms-scaling
+ weight: 10
+menu_name: docs_{{ .version }}
+---
diff --git a/docs/guides/mssqlserver/scaling/horizontal-scaling/mssqlserver.md b/docs/guides/mssqlserver/scaling/horizontal-scaling/mssqlserver.md
new file mode 100644
index 0000000000..a1f996f54d
--- /dev/null
+++ b/docs/guides/mssqlserver/scaling/horizontal-scaling/mssqlserver.md
@@ -0,0 +1,714 @@
+---
+title: Horizontal Scaling MSSQLServer Cluster
+menu:
+ docs_{{ .version }}:
+ identifier: ms-scaling-horizontal-guide
+ name: Scale Horizontally
+ parent: ms-scaling-horizontal
+ weight: 20
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Horizontal Scale MSSQLServer Cluster
+
+This guide will show you how to use `KubeDB` Ops Manager to increase/decrease the number of replicas of a `MSSQLServer` Cluster.
+
+## Before You Begin
+
+- You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Now, install the KubeDB cli on your workstation and the KubeDB operator in your cluster following the steps [here](/docs/setup/README.md). Make sure to install with the helm command including `--set global.featureGates.MSSQLServer=true` to ensure MSSQLServer CRD installation.
+
+- To configure TLS/SSL in `MSSQLServer`, `KubeDB` uses `cert-manager` to issue certificates. So first you have to make sure that the cluster has `cert-manager` installed. To install `cert-manager` in your cluster, follow the steps [here](https://cert-manager.io/docs/installation/kubernetes/).
+
+- You should be familiar with the following `KubeDB` concepts:
+ - [MSSQLServer](/docs/guides/mssqlserver/concepts/mssqlserver.md)
+ - [MSSQLServerOpsRequest](/docs/guides/mssqlserver/concepts/opsrequest.md)
+ - [Horizontal Scaling Overview](/docs/guides/mssqlserver/scaling/horizontal-scaling/overview.md)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+> **Note:** YAML files used in this tutorial are stored in [docs/examples/mssqlserver/scaling/horizontal-scaling](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/mssqlserver/scaling/horizontal-scaling) directory of [kubedb/doc](https://github.com/kubedb/docs) repository.
+
+### Apply Horizontal Scaling on MSSQLServer Cluster
+
+Here, we are going to deploy a `MSSQLServer` cluster using a version supported by the `KubeDB` operator. Then we are going to apply horizontal scaling on it.
+
+#### Prepare Cluster
+
+At first, we are going to deploy a cluster with 2 replicas. Then, we are going to add one additional replica through horizontal scaling. Finally, we will remove one replica from the cluster again via horizontal scaling.
+
+**Find supported MSSQLServer Version:**
+
+When you install `KubeDB`, it creates `MSSQLServerVersion` CRs for all supported `MSSQLServer` versions. Let's check the supported MSSQLServer versions,
+
+```bash
+$ kubectl get mssqlserverversion
+NAME VERSION DB_IMAGE DEPRECATED AGE
+2022-cu12 2022 mcr.microsoft.com/mssql/server:2022-CU12-ubuntu-22.04 176m
+2022-cu14 2022 mcr.microsoft.com/mssql/server:2022-CU14-ubuntu-22.04 176m
+```
+
+The versions above that do not show `DEPRECATED` as `true` are supported by `KubeDB` for `MSSQLServer`. You can use any non-deprecated version. Here, we are going to create a MSSQLServer cluster using `MSSQLServer` `2022-cu12`.
+
+**Deploy MSSQLServer Cluster:**
+
+
+First, an issuer needs to be created, even if TLS is not enabled for SQL Server. The issuer will be used to configure the TLS-enabled Wal-G proxy server, which is required for the SQL Server backup and restore operations.
+
+### Create Issuer/ClusterIssuer
+
+Now, we are going to create an example `Issuer` that will be used throughout the duration of this tutorial. Alternatively, you can follow this [cert-manager tutorial](https://cert-manager.io/docs/configuration/ca/) to create your own `Issuer`. By following the below steps, we are going to create our desired issuer,
+
+- Start off by generating our ca-certificates using openssl,
+```bash
+openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ./ca.key -out ./ca.crt -subj "/CN=MSSQLServer/O=kubedb"
+```
+- Create a secret using the certificate files we have just generated,
+```bash
+$ kubectl create secret tls mssqlserver-ca --cert=ca.crt --key=ca.key --namespace=demo
+secret/mssqlserver-ca created
+```
+Now, we are going to create an `Issuer` using the `mssqlserver-ca` secret that contains the ca-certificate we have just created. Below is the YAML of the `Issuer` CR that we are going to create,
+
+```yaml
+apiVersion: cert-manager.io/v1
+kind: Issuer
+metadata:
+ name: mssqlserver-ca-issuer
+ namespace: demo
+spec:
+ ca:
+ secretName: mssqlserver-ca
+```
+
+Let’s create the `Issuer` CR we have shown above,
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mssqlserver/ag-cluster/mssqlserver-ca-issuer.yaml
+issuer.cert-manager.io/mssqlserver-ca-issuer created
+```
+
+In this section, we are going to deploy a MSSQLServer Cluster with 2 replicas. Then, in the next section we will scale up the cluster using horizontal scaling. Below is the YAML of the `MSSQLServer` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MSSQLServer
+metadata:
+ name: mssql-ag-cluster
+ namespace: demo
+spec:
+ version: "2022-cu12"
+ replicas: 2
+ topology:
+ mode: AvailabilityGroup
+ availabilityGroup:
+ databases:
+ - agdb1
+ - agdb2
+ tls:
+ issuerRef:
+ name: mssqlserver-ca-issuer
+ kind: Issuer
+ apiGroup: "cert-manager.io"
+ clientTLS: false
+ podTemplate:
+ spec:
+ containers:
+ - name: mssql
+ env:
+ - name: ACCEPT_EULA
+ value: "Y"
+ - name: MSSQL_PID
+ value: Evaluation # Change it
+ resources:
+ requests:
+ cpu: "500m"
+ memory: "1.5Gi"
+ limits:
+ cpu: 1
+ memory: "2Gi"
+ storageType: Durable
+ storage:
+ storageClassName: "standard"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
+ deletionPolicy: WipeOut
+```
+
+Let's create the `MSSQLServer` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mssqlserver/scaling/horizontal-scaling/mssql-ag-cluster.yaml
+mssqlserver.kubedb.com/mssql-ag-cluster created
+```
+
+**Wait for the cluster to be ready:**
+
+The `KubeDB` operator watches for `MSSQLServer` objects using the Kubernetes API. When a `MSSQLServer` object is created, the `KubeDB` operator creates a new PetSet, Services, Secrets, etc. A secret called `mssql-ag-cluster-auth` (format: {mssqlserver-object-name}-auth) will be created, storing the credentials of the `sa` superuser.
+Now, watch the `MSSQLServer` object until it reaches the `Ready` state, and also watch the `PetSet` and its pods until they are created and reach the `Running` state,
+
+```bash
+$ watch kubectl get ms,petset,pods -n demo
+Every 2.0s: kubectl get ms,petset,pods -n demo
+
+NAME VERSION STATUS AGE
+mssqlserver.kubedb.com/mssql-ag-cluster 2022-cu12 Ready 2m52s
+
+NAME AGE
+petset.apps.k8s.appscode.com/mssql-ag-cluster 2m11s
+
+NAME READY STATUS RESTARTS AGE
+pod/mssql-ag-cluster-0 2/2 Running 0 2m11s
+pod/mssql-ag-cluster-1 2/2 Running 0 2m6s
+
+```
+
+Let's retrieve the superuser credentials so that we can verify that the PetSet's pods have formed the availability group cluster successfully,
+
+```bash
+$ kubectl get secrets -n demo mssql-ag-cluster-auth -o jsonpath='{.data.\username}' | base64 -d
+sa
+$ kubectl get secrets -n demo mssql-ag-cluster-auth -o jsonpath='{.data.\password}' | base64 -d
+123KKxgOXuOkP206
+```
+
+Now, connect to the database using the username and password above, check the name of the created availability group and its replicas, and see whether the databases have been added to the availability group.
+```bash
+$ kubectl exec -it -n demo mssql-ag-cluster-0 -c mssql -- bash
+mssql@mssql-ag-cluster-0:/$ /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P "123KKxgOXuOkP206"
+1> select name from sys.databases
+2> go
+name
+----------------------------------------------------------------------------------
+master
+tempdb
+model
+msdb
+agdb1
+agdb2
+kubedb_system
+
+(7 rows affected)
+1> SELECT name FROM sys.availability_groups
+2> go
+name
+----------------------------------------------------------------------------
+mssqlagcluster
+
+(1 rows affected)
+1> select replica_server_name from sys.availability_replicas;
+2> go
+replica_server_name
+-------------------------------------------------------------------------------------------
+mssql-ag-cluster-0
+mssql-ag-cluster-1
+(2 rows affected)
+1> select database_name from sys.availability_databases_cluster;
+2> go
+database_name
+------------------------------------------------------------------------------------------
+agdb1
+agdb2
+
+(2 rows affected)
+
+```
+
+
+So, we can see that our cluster has 2 replicas. Now, we are ready to apply horizontal scaling to this MSSQLServer cluster.
+
+#### Scale Up
+
+Here, we are going to add 1 replica to our cluster using horizontal scaling.
+
+**Create MSSQLServerOpsRequest:**
+
+To scale up your cluster, you have to create a `MSSQLServerOpsRequest` CR with your desired number of replicas after scaling. Below is the YAML of the `MSSQLServerOpsRequest` CR that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: MSSQLServerOpsRequest
+metadata:
+  name: msops-hscale-up
+ namespace: demo
+spec:
+ type: HorizontalScaling
+ databaseRef:
+ name: mssql-ag-cluster
+ horizontalScaling:
+ replicas: 3
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing operation on `mssql-ag-cluster`.
+- `spec.type` specifies that we are performing `HorizontalScaling` on our database.
+- `spec.horizontalScaling.replicas` specifies the expected number of replicas after the scaling.
+
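+Before applying the request, you can note the current replica count on the `MSSQLServer` object (it should still be 2 at this point):
+
+```bash
+# Current desired replica count, prior to horizontal scaling
+$ kubectl get ms -n demo mssql-ag-cluster -o jsonpath='{.spec.replicas}'
+```
+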
+Let's create the `MSSQLServerOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mssqlserver/scaling/horizontal-scaling/msops-hscale-up.yaml
+mssqlserveropsrequest.ops.kubedb.com/msops-hscale-up created
+```
+
+**Verify Scale-Up Succeeded:**
+
+If everything goes well, the `KubeDB` Ops Manager will scale up the PetSet's `Pods`. After the scaling process is completed successfully, the `KubeDB` Ops Manager updates the replicas of the `MSSQLServer` object.
+
+First, we will wait for the `MSSQLServerOpsRequest` to be successful. Run the following command to watch the `MSSQLServerOpsRequest` CR,
+
+```bash
+$ watch kubectl get mssqlserveropsrequest -n demo msops-hscale-up
+Every 2.0s: kubectl get mssqlserveropsrequest -n demo msops-hscale-up
+
+NAME TYPE STATUS AGE
+msops-hscale-up HorizontalScaling Successful 76s
+
+```
+
+You can see from the above output that the `MSSQLServerOpsRequest` has succeeded. If we describe the `MSSQLServerOpsRequest`, we will see that the `MSSQLServer` cluster is scaled up.
+
+```bash
+$ kubectl describe mssqlserveropsrequest -n demo msops-hscale-up
+Name: msops-hscale-up
+Namespace: demo
+Labels:            <none>
+Annotations:       <none>
+API Version: ops.kubedb.com/v1alpha1
+Kind: MSSQLServerOpsRequest
+Metadata:
+ Creation Timestamp: 2024-10-24T15:09:36Z
+ Generation: 1
+ Resource Version: 752963
+ UID: 43193e49-8461-4e14-b1c1-7aaa33d0251a
+Spec:
+ Apply: IfReady
+ Database Ref:
+ Name: mssql-ag-cluster
+ Horizontal Scaling:
+ Replicas: 3
+ Type: HorizontalScaling
+Status:
+ Conditions:
+ Last Transition Time: 2024-10-24T15:09:36Z
+ Message: MSSQLServer ops-request has started to horizontally scaling the nodes
+ Observed Generation: 1
+ Reason: HorizontalScaling
+ Status: True
+ Type: HorizontalScaling
+ Last Transition Time: 2024-10-24T15:09:39Z
+ Message: Successfully paused database
+ Observed Generation: 1
+ Reason: DatabasePauseSucceeded
+ Status: True
+ Type: DatabasePauseSucceeded
+ Last Transition Time: 2024-10-24T15:10:29Z
+ Message: Successfully Scaled Up Node
+ Observed Generation: 1
+ Reason: HorizontalScaleUp
+ Status: True
+ Type: HorizontalScaleUp
+ Last Transition Time: 2024-10-24T15:09:44Z
+ Message: get current leader; ConditionStatus:True; PodName:mssql-ag-cluster-0
+ Observed Generation: 1
+ Status: True
+ Type: GetCurrentLeader--mssql-ag-cluster-0
+ Last Transition Time: 2024-10-24T15:09:44Z
+ Message: get raft node; ConditionStatus:True; PodName:mssql-ag-cluster-0
+ Observed Generation: 1
+ Status: True
+ Type: GetRaftNode--mssql-ag-cluster-0
+ Last Transition Time: 2024-10-24T15:09:44Z
+ Message: add raft node; ConditionStatus:True; PodName:mssql-ag-cluster-2
+ Observed Generation: 1
+ Status: True
+ Type: AddRaftNode--mssql-ag-cluster-2
+ Last Transition Time: 2024-10-24T15:09:49Z
+ Message: patch petset; ConditionStatus:True; PodName:mssql-ag-cluster-2
+ Observed Generation: 1
+ Status: True
+ Type: PatchPetset--mssql-ag-cluster-2
+ Last Transition Time: 2024-10-24T15:09:49Z
+ Message: mssql-ag-cluster already has desired replicas
+ Observed Generation: 1
+ Reason: HorizontalScale
+ Status: True
+ Type: HorizontalScale
+ Last Transition Time: 2024-10-24T15:09:59Z
+ Message: is pod ready; ConditionStatus:True; PodName:mssql-ag-cluster-2
+ Observed Generation: 1
+ Status: True
+ Type: IsPodReady--mssql-ag-cluster-2
+ Last Transition Time: 2024-10-24T15:10:19Z
+ Message: is mssql running; ConditionStatus:True
+ Observed Generation: 1
+ Status: True
+ Type: IsMssqlRunning
+ Last Transition Time: 2024-10-24T15:10:24Z
+ Message: ensure replica join; ConditionStatus:True
+ Observed Generation: 1
+ Status: True
+ Type: EnsureReplicaJoin
+ Last Transition Time: 2024-10-24T15:10:34Z
+ Message: successfully reconciled the MSSQLServer with modified replicas
+ Observed Generation: 1
+ Reason: UpdatePetSets
+ Status: True
+ Type: UpdatePetSets
+ Last Transition Time: 2024-10-24T15:10:35Z
+ Message: Successfully updated MSSQLServer
+ Observed Generation: 1
+ Reason: UpdateDatabase
+ Status: True
+ Type: UpdateDatabase
+ Last Transition Time: 2024-10-24T15:10:35Z
+ Message: Successfully completed the HorizontalScaling for MSSQLServer
+ Observed Generation: 1
+ Reason: Successful
+ Status: True
+ Type: Successful
+ Observed Generation: 1
+ Phase: Successful
+Events:
+ Type Reason Age From Message
+ ---- ------ ---- ---- -------
+ Normal Starting 2m22s KubeDB Ops-manager Operator Start processing for MSSQLServerOpsRequest: demo/msops-hscale-up
+ Normal Starting 2m22s KubeDB Ops-manager Operator Pausing MSSQLServer database: demo/mssql-ag-cluster
+ Normal Successful 2m22s KubeDB Ops-manager Operator Successfully paused MSSQLServer database: demo/mssql-ag-cluster for MSSQLServerOpsRequest: msops-hscale-up
+ Warning get current leader; ConditionStatus:True; PodName:mssql-ag-cluster-0 2m14s KubeDB Ops-manager Operator get current leader; ConditionStatus:True; PodName:mssql-ag-cluster-0
+ Warning get raft node; ConditionStatus:True; PodName:mssql-ag-cluster-0 2m14s KubeDB Ops-manager Operator get raft node; ConditionStatus:True; PodName:mssql-ag-cluster-0
+ Warning add raft node; ConditionStatus:True; PodName:mssql-ag-cluster-2 2m14s KubeDB Ops-manager Operator add raft node; ConditionStatus:True; PodName:mssql-ag-cluster-2
+ Warning get current leader; ConditionStatus:True; PodName:mssql-ag-cluster-0 2m9s KubeDB Ops-manager Operator get current leader; ConditionStatus:True; PodName:mssql-ag-cluster-0
+ Warning get raft node; ConditionStatus:True; PodName:mssql-ag-cluster-0 2m9s KubeDB Ops-manager Operator get raft node; ConditionStatus:True; PodName:mssql-ag-cluster-0
+ Warning patch petset; ConditionStatus:True; PodName:mssql-ag-cluster-2 2m9s KubeDB Ops-manager Operator patch petset; ConditionStatus:True; PodName:mssql-ag-cluster-2
+ Warning get current leader; ConditionStatus:True; PodName:mssql-ag-cluster-0 2m4s KubeDB Ops-manager Operator get current leader; ConditionStatus:True; PodName:mssql-ag-cluster-0
+ Warning is pod ready; ConditionStatus:False; PodName:mssql-ag-cluster-2 2m4s KubeDB Ops-manager Operator is pod ready; ConditionStatus:False; PodName:mssql-ag-cluster-2
+ Warning get current leader; ConditionStatus:True; PodName:mssql-ag-cluster-0 119s KubeDB Ops-manager Operator get current leader; ConditionStatus:True; PodName:mssql-ag-cluster-0
+ Warning is pod ready; ConditionStatus:True; PodName:mssql-ag-cluster-2 119s KubeDB Ops-manager Operator is pod ready; ConditionStatus:True; PodName:mssql-ag-cluster-2
+ Warning is mssql running; ConditionStatus:False 109s KubeDB Ops-manager Operator is mssql running; ConditionStatus:False
+ Warning get current leader; ConditionStatus:True; PodName:mssql-ag-cluster-0 109s KubeDB Ops-manager Operator get current leader; ConditionStatus:True; PodName:mssql-ag-cluster-0
+ Warning is pod ready; ConditionStatus:True; PodName:mssql-ag-cluster-2 109s KubeDB Ops-manager Operator is pod ready; ConditionStatus:True; PodName:mssql-ag-cluster-2
+ Warning get current leader; ConditionStatus:True; PodName:mssql-ag-cluster-0 99s KubeDB Ops-manager Operator get current leader; ConditionStatus:True; PodName:mssql-ag-cluster-0
+ Warning is pod ready; ConditionStatus:True; PodName:mssql-ag-cluster-2 99s KubeDB Ops-manager Operator is pod ready; ConditionStatus:True; PodName:mssql-ag-cluster-2
+ Warning is mssql running; ConditionStatus:True 99s KubeDB Ops-manager Operator is mssql running; ConditionStatus:True
+ Warning ensure replica join; ConditionStatus:False 98s KubeDB Ops-manager Operator ensure replica join; ConditionStatus:False
+ Warning get current leader; ConditionStatus:True; PodName:mssql-ag-cluster-0 94s KubeDB Ops-manager Operator get current leader; ConditionStatus:True; PodName:mssql-ag-cluster-0
+ Warning is pod ready; ConditionStatus:True; PodName:mssql-ag-cluster-2 94s KubeDB Ops-manager Operator is pod ready; ConditionStatus:True; PodName:mssql-ag-cluster-2
+ Warning is mssql running; ConditionStatus:True 94s KubeDB Ops-manager Operator is mssql running; ConditionStatus:True
+ Warning ensure replica join; ConditionStatus:True 94s KubeDB Ops-manager Operator ensure replica join; ConditionStatus:True
+ Warning get current leader; ConditionStatus:True; PodName:mssql-ag-cluster-0 89s KubeDB Ops-manager Operator get current leader; ConditionStatus:True; PodName:mssql-ag-cluster-0
+ Normal HorizontalScaleUp 89s KubeDB Ops-manager Operator Successfully Scaled Up Node
+ Normal UpdatePetSets 84s KubeDB Ops-manager Operator successfully reconciled the MSSQLServer with modified replicas
+ Normal UpdateDatabase 83s KubeDB Ops-manager Operator Successfully updated MSSQLServer
+ Normal Starting 83s KubeDB Ops-manager Operator Resuming MSSQLServer database: demo/mssql-ag-cluster
+ Normal Successful 83s KubeDB Ops-manager Operator Successfully resumed MSSQLServer database: demo/mssql-ag-cluster for MSSQLServerOpsRequest: msops-hscale-up
+ Normal UpdateDatabase 83s KubeDB Ops-manager Operator Successfully updated MSSQLServer
+```
+
+Now, we are going to verify whether the number of replicas has increased to meet the desired state. Let's check the new pod's `mssql-coordinator` container logs to see whether it has joined the cluster as a new replica.
+
+```bash
+$ kubectl logs -f -n demo mssql-ag-cluster-2 -c mssql-coordinator
+raft2024/10/24 15:09:55 INFO: 3 switched to configuration voters=(1 2 3)
+raft2024/10/24 15:09:55 INFO: 3 switched to configuration voters=(1 2 3)
+raft2024/10/24 15:09:55 INFO: 3 switched to configuration voters=(1 2 3)
+raft2024/10/24 15:09:55 INFO: 3 [term: 1] received a MsgHeartbeat message with higher term from 1 [term: 3]
+raft2024/10/24 15:09:55 INFO: 3 became follower at term 3
+raft2024/10/24 15:09:55 INFO: raft.node: 3 elected leader 1 at term 3
+I1024 15:09:56.855261 1 mssql.go:94] new elected primary is :mssql-ag-cluster-0.
+I1024 15:09:56.864197 1 mssql.go:120] New primary is ready to accept connections...
+I1024 15:09:56.864213 1 mssql.go:171] lastLeaderId : 0, currentLeaderId : 1
+I1024 15:09:56.864230 1 on_leader_change.go:47] New Leader elected.
+I1024 15:09:56.864237 1 on_leader_change.go:82] This pod is now a secondary according to raft
+I1024 15:09:56.864243 1 on_leader_change.go:100] instance demo/mssql-ag-cluster-2 running according to the role
+I1024 15:09:56.864317 1 utils.go:219] /scripts/run_signal.txt file created successfully
+E1024 15:09:56.935767 1 exec_utils.go:65] Error while trying to get process output from the pod. Error: could not execute: command terminated with exit code 1
+I1024 15:09:56.935794 1 on_leader_change.go:110] mssql is not ready yet
+I1024 15:10:07.980792 1 on_leader_change.go:110] mssql is not ready yet
+I1024 15:10:18.049036 1 on_leader_change.go:110] mssql is not ready yet
+I1024 15:10:18.116939 1 on_leader_change.go:118] mssql is ready now
+I1024 15:10:18.127315 1 ag_status.go:43] No Availability Group found
+I1024 15:10:18.127336 1 ag.go:79] Joining Availability Group...
+I1024 15:10:24.638144 1 on_leader_change.go:94] Successfully patched label of demo/mssql-ag-cluster-2 to secondary
+I1024 15:10:24.650611 1 health.go:50] Sequence Number updated. new sequenceNumber = 4294967322, previous sequenceNumber = 0
+I1024 15:10:24.650632 1 health.go:51] 1:1A (4294967322)
+```
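+
+As an additional check, you can confirm the replica count from the `MSSQLServer` object itself and from the pod list (the expected value is 3 after this scale-up):
+
+```bash
+# spec.replicas on the MSSQLServer object should now be 3
+$ kubectl get ms -n demo mssql-ag-cluster -o jsonpath='{.spec.replicas}'
+
+# A third pod, mssql-ag-cluster-2, should be running alongside the original two
+$ kubectl get pods -n demo | grep mssql-ag-cluster
+```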
+
+
+Now, connect to the database and check the updated configuration of the availability group cluster.
+```bash
+$ kubectl exec -it -n demo mssql-ag-cluster-2 -c mssql -- bash
+mssql@mssql-ag-cluster-2:/$ /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P "123KKxgOXuOkP206"
+1> SELECT name FROM sys.availability_groups
+2> go
+name
+----------------------------------------------------------------------------
+mssqlagcluster
+
+(1 rows affected)
+1> select replica_server_name from sys.availability_replicas;
+2> go
+replica_server_name
+-------------------------------------------------------------------------------------------
+mssql-ag-cluster-0
+mssql-ag-cluster-1
+
+mssql-ag-cluster-2
+
+(3 rows affected)
+1> select database_name from sys.availability_databases_cluster;
+2> go
+database_name
+------------------------------------------------------------------------------------------
+agdb1
+agdb2
+
+(2 rows affected)
+```
+
+#### Scale Down
+
+Here, we are going to remove 1 replica from our cluster using horizontal scaling.
+
+**Create MSSQLServerOpsRequest:**
+
+To scale down your cluster, you have to create a `MSSQLServerOpsRequest` CR with your desired number of replicas after scaling. Below is the YAML of the `MSSQLServerOpsRequest` CR that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: MSSQLServerOpsRequest
+metadata:
+ name: msops-hscale-down
+ namespace: demo
+spec:
+ type: HorizontalScaling
+ databaseRef:
+ name: mssql-ag-cluster
+ horizontalScaling:
+ replicas: 2
+```
+
+Let's create the `MSSQLServerOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mssqlserver/scaling/horizontal-scaling/msops-hscale-down.yaml
+mssqlserveropsrequest.ops.kubedb.com/msops-hscale-down created
+```
+
+**Verify Scale-down Succeeded:**
+
+If everything goes well, the `KubeDB` Ops Manager will scale down the PetSet's `Pods`. After the scaling process is completed successfully, the `KubeDB` Ops Manager updates the replicas of the `MSSQLServer` object.
+
+Now, we will wait for the `MSSQLServerOpsRequest` to be successful. Run the following command to watch the `MSSQLServerOpsRequest` CR,
+
+```bash
+$ watch kubectl get mssqlserveropsrequest -n demo msops-hscale-down
+Every 2.0s: kubectl get mssqlserveropsrequest -n demo msops-hscale-down
+
+NAME TYPE STATUS AGE
+msops-hscale-down HorizontalScaling Successful 98s
+```
+
+You can see from the above output that the `MSSQLServerOpsRequest` has succeeded. If we describe the `MSSQLServerOpsRequest`, we shall see that the `MSSQLServer` cluster is scaled down.
+
+```bash
+$ kubectl describe mssqlserveropsrequest -n demo msops-hscale-down
+Name: msops-hscale-down
+Namespace: demo
+Labels:            <none>
+Annotations:       <none>
+API Version: ops.kubedb.com/v1alpha1
+Kind: MSSQLServerOpsRequest
+Metadata:
+ Creation Timestamp: 2024-10-24T15:22:54Z
+ Generation: 1
+ Resource Version: 754237
+ UID: c5dc6971-5f60-4736-992a-8fdf5a2911d9
+Spec:
+ Apply: IfReady
+ Database Ref:
+ Name: mssql-ag-cluster
+ Horizontal Scaling:
+ Replicas: 2
+ Type: HorizontalScaling
+Status:
+ Conditions:
+ Last Transition Time: 2024-10-24T15:22:54Z
+ Message: MSSQLServer ops-request has started to horizontally scaling the nodes
+ Observed Generation: 1
+ Reason: HorizontalScaling
+ Status: True
+ Type: HorizontalScaling
+ Last Transition Time: 2024-10-24T15:23:06Z
+ Message: Successfully paused database
+ Observed Generation: 1
+ Reason: DatabasePauseSucceeded
+ Status: True
+ Type: DatabasePauseSucceeded
+ Last Transition Time: 2024-10-24T15:24:06Z
+ Message: Successfully Scaled Down Node
+ Observed Generation: 1
+ Reason: HorizontalScaleDown
+ Status: True
+ Type: HorizontalScaleDown
+ Last Transition Time: 2024-10-24T15:23:21Z
+ Message: get current raft leader; ConditionStatus:True; PodName:mssql-ag-cluster-0
+ Observed Generation: 1
+ Status: True
+ Type: GetCurrentRaftLeader--mssql-ag-cluster-0
+ Last Transition Time: 2024-10-24T15:23:11Z
+ Message: get raft node; ConditionStatus:True; PodName:mssql-ag-cluster-0
+ Observed Generation: 1
+ Status: True
+ Type: GetRaftNode--mssql-ag-cluster-0
+ Last Transition Time: 2024-10-24T15:23:11Z
+ Message: remove raft node; ConditionStatus:True; PodName:mssql-ag-cluster-2
+ Observed Generation: 1
+ Status: True
+ Type: RemoveRaftNode--mssql-ag-cluster-2
+ Last Transition Time: 2024-10-24T15:23:21Z
+ Message: patch petset; ConditionStatus:True; PodName:mssql-ag-cluster-2
+ Observed Generation: 1
+ Status: True
+ Type: PatchPetset--mssql-ag-cluster-2
+ Last Transition Time: 2024-10-24T15:23:21Z
+ Message: mssql-ag-cluster already has desired replicas
+ Observed Generation: 1
+ Reason: HorizontalScale
+ Status: True
+ Type: HorizontalScale
+ Last Transition Time: 2024-10-24T15:23:26Z
+ Message: get pod; ConditionStatus:False
+ Observed Generation: 1
+ Status: False
+ Type: GetPod
+ Last Transition Time: 2024-10-24T15:23:56Z
+ Message: get pod; ConditionStatus:True; PodName:mssql-ag-cluster-2
+ Observed Generation: 1
+ Status: True
+ Type: GetPod--mssql-ag-cluster-2
+ Last Transition Time: 2024-10-24T15:23:56Z
+ Message: delete pvc; ConditionStatus:True; PodName:mssql-ag-cluster-2
+ Observed Generation: 1
+ Status: True
+ Type: DeletePvc--mssql-ag-cluster-2
+ Last Transition Time: 2024-10-24T15:24:01Z
+ Message: get pvc; ConditionStatus:True
+ Observed Generation: 1
+ Status: True
+ Type: GetPvc
+ Last Transition Time: 2024-10-24T15:24:01Z
+ Message: ag node remove; ConditionStatus:True
+ Observed Generation: 1
+ Status: True
+ Type: AgNodeRemove
+ Last Transition Time: 2024-10-24T15:24:11Z
+ Message: successfully reconciled the MSSQLServer with modified replicas
+ Observed Generation: 1
+ Reason: UpdatePetSets
+ Status: True
+ Type: UpdatePetSets
+ Last Transition Time: 2024-10-24T15:24:11Z
+ Message: Successfully updated MSSQLServer
+ Observed Generation: 1
+ Reason: UpdateDatabase
+ Status: True
+ Type: UpdateDatabase
+ Last Transition Time: 2024-10-24T15:24:11Z
+ Message: Successfully completed the HorizontalScaling for MSSQLServer
+ Observed Generation: 1
+ Reason: Successful
+ Status: True
+ Type: Successful
+ Observed Generation: 1
+ Phase: Successful
+Events:
+ Type Reason Age From Message
+ ---- ------ ---- ---- -------
+ Normal Starting 2m1s KubeDB Ops-manager Operator Start processing for MSSQLServerOpsRequest: demo/msops-hscale-down
+ Normal Starting 2m1s KubeDB Ops-manager Operator Pausing MSSQLServer database: demo/mssql-ag-cluster
+ Normal Successful 2m1s KubeDB Ops-manager Operator Successfully paused MSSQLServer database: demo/mssql-ag-cluster for MSSQLServerOpsRequest: msops-hscale-down
+ Warning get current raft leader; ConditionStatus:True; PodName:mssql-ag-cluster-0 104s KubeDB Ops-manager Operator get current raft leader; ConditionStatus:True; PodName:mssql-ag-cluster-0
+ Warning get raft node; ConditionStatus:True; PodName:mssql-ag-cluster-0 104s KubeDB Ops-manager Operator get raft node; ConditionStatus:True; PodName:mssql-ag-cluster-0
+ Warning remove raft node; ConditionStatus:True; PodName:mssql-ag-cluster-2 104s KubeDB Ops-manager Operator remove raft node; ConditionStatus:True; PodName:mssql-ag-cluster-2
+ Warning get current raft leader; ConditionStatus:True; PodName:mssql-ag-cluster-0 94s KubeDB Ops-manager Operator get current raft leader; ConditionStatus:True; PodName:mssql-ag-cluster-0
+ Warning get raft node; ConditionStatus:True; PodName:mssql-ag-cluster-0 94s KubeDB Ops-manager Operator get raft node; ConditionStatus:True; PodName:mssql-ag-cluster-0
+ Warning patch petset; ConditionStatus:True; PodName:mssql-ag-cluster-2 94s KubeDB Ops-manager Operator patch petset; ConditionStatus:True; PodName:mssql-ag-cluster-2
+ Warning get pod; ConditionStatus:True; PodName:mssql-ag-cluster-2 59s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:mssql-ag-cluster-2
+ Warning delete pvc; ConditionStatus:True; PodName:mssql-ag-cluster-2 59s KubeDB Ops-manager Operator delete pvc; ConditionStatus:True; PodName:mssql-ag-cluster-2
+ Warning get pvc; ConditionStatus:False 59s KubeDB Ops-manager Operator get pvc; ConditionStatus:False
+ Warning get pod; ConditionStatus:True; PodName:mssql-ag-cluster-2 54s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:mssql-ag-cluster-2
+ Warning delete pvc; ConditionStatus:True; PodName:mssql-ag-cluster-2 54s KubeDB Ops-manager Operator delete pvc; ConditionStatus:True; PodName:mssql-ag-cluster-2
+ Warning get pvc; ConditionStatus:True 54s KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning ag node remove; ConditionStatus:True 54s KubeDB Ops-manager Operator ag node remove; ConditionStatus:True
+ Normal HorizontalScaleDown 49s KubeDB Ops-manager Operator Successfully Scaled Down Node
+ Normal UpdatePetSets 44s KubeDB Ops-manager Operator successfully reconciled the MSSQLServer with modified replicas
+ Normal UpdateDatabase 44s KubeDB Ops-manager Operator Successfully updated MSSQLServer
+ Normal Starting 44s KubeDB Ops-manager Operator Resuming MSSQLServer database: demo/mssql-ag-cluster
+ Normal Successful 44s KubeDB Ops-manager Operator Successfully resumed MSSQLServer database: demo/mssql-ag-cluster for MSSQLServerOpsRequest: msops-hscale-down
+ Normal UpdateDatabase 44s KubeDB Ops-manager Operator Successfully updated MSSQLServer
+```
+
+Now, we are going to verify whether the number of replicas has decreased to meet the desired state. Let's check the MSSQLServer status; if it is `Ready`, the scale-down was successful.
+
+```bash
+$ kubectl get ms,petset,pods -n demo
+NAME VERSION STATUS AGE
+mssqlserver.kubedb.com/mssql-ag-cluster 2022-cu12 Ready 39m
+
+NAME AGE
+petset.apps.k8s.appscode.com/mssql-ag-cluster 38m
+
+NAME READY STATUS RESTARTS AGE
+pod/mssql-ag-cluster-0 2/2 Running 0 38m
+pod/mssql-ag-cluster-1 2/2 Running 0 38m
+```
+
+
+Now, connect to the database and check the updated configuration of the availability group cluster.
+```bash
+$ kubectl exec -it -n demo mssql-ag-cluster-0 -c mssql -- bash
+mssql@mssql-ag-cluster-0:/$ /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P "123KKxgOXuOkP206"
+1> SELECT name FROM sys.availability_groups
+2> go
+name
+----------------------------------------------------
+mssqlagcluster
+
+(1 rows affected)
+1> select replica_server_name from sys.availability_replicas;
+2> go
+replica_server_name
+--------------------------------------
+mssql-ag-cluster-0
+mssql-ag-cluster-1
+
+(2 rows affected)
+```
+
+You can see above that our `MSSQLServer` cluster now has a total of 2 replicas. It verifies that we have successfully scaled down.
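+
+As with the scale-up, you can also confirm the count from the `MSSQLServer` object itself (it should report 2 after this operation):
+
+```bash
+# spec.replicas should be back to 2 after the scale-down
+$ kubectl get ms -n demo mssql-ag-cluster -o jsonpath='{.spec.replicas}'
+```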
+
+## Cleaning Up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete ms -n demo mssql-ag-cluster
+kubectl delete mssqlserveropsrequest -n demo msops-hscale-up
+kubectl delete mssqlserveropsrequest -n demo msops-hscale-down
+kubectl delete issuer -n demo mssqlserver-ca-issuer
+kubectl delete secret -n demo mssqlserver-ca
+kubectl delete ns demo
+```
diff --git a/docs/guides/mssqlserver/scaling/horizontal-scaling/overview.md b/docs/guides/mssqlserver/scaling/horizontal-scaling/overview.md
new file mode 100644
index 0000000000..fec27fd65c
--- /dev/null
+++ b/docs/guides/mssqlserver/scaling/horizontal-scaling/overview.md
@@ -0,0 +1,56 @@
+---
+title: MSSQLServer Horizontal Scaling Overview
+menu:
+ docs_{{ .version }}:
+ identifier: ms-scaling-horizontal-overview
+ name: Overview
+ parent: ms-scaling-horizontal
+ weight: 10
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Horizontal Scaling Overview
+
+This guide will give you an overview of how `KubeDB` Ops Manager scales up/down the number of members of a `MSSQLServer`.
+
+## Before You Begin
+
+- You should be familiar with the following `KubeDB` concepts:
+ - [MSSQLServer](/docs/guides/mssqlserver/concepts/mssqlserver.md)
+ - [MSSQLServerOpsRequest](/docs/guides/mssqlserver/concepts/opsrequest.md)
+
+## How Horizontal Scaling Process Works
+
+The following diagram shows how the `KubeDB` Ops Manager scales up or down the number of members of a `MSSQLServer` cluster. Open the image in a new tab to see the enlarged version.
+
+
+
+The horizontal scaling process consists of the following steps:
+
+1. At first, a user creates a `MSSQLServer` CR.
+
+2. `KubeDB` provisioner operator watches for the `MSSQLServer` CR.
+
+3. When it finds one, it creates a `PetSet` and related necessary stuff like secret, service, etc.
+
+4. Then, in order to scale the cluster up or down, the user creates a `MSSQLServerOpsRequest` CR with the desired number of replicas after scaling (a minimal example is sketched after this list).
+
+5. `KubeDB` Ops Manager watches for `MSSQLServerOpsRequest`.
+
+6. When it finds one, it halts the `MSSQLServer` object so that the `KubeDB` provisioner operator doesn't perform any operation on the `MSSQLServer` during the scaling process.
+
+7. Then `KubeDB` Ops Manager will add nodes in case of scale up or remove nodes in case of scale down.
+
+8. Then the `KubeDB` Ops Manager will scale the PetSet replicas to reach the expected number of replicas for the cluster.
+
+9. After successful scaling of the PetSet's replica, the `KubeDB` Ops Manager updates the `spec.replicas` field of `MSSQLServer` object to reflect the updated cluster state.
+
+10. After successful scaling of the `MSSQLServer` replicas, the `KubeDB` Ops Manager resumes the `MSSQLServer` object so that the `KubeDB` provisioner operator can resume its usual operations.
+
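+For reference, a minimal `MSSQLServerOpsRequest` for the horizontal scaling described in step 4 can be applied as shown below. This is only a sketch; the resource name and target replica count are illustrative, and the full walkthrough in the next doc shows the complete flow.
+
+```bash
+kubectl apply -f - <<EOF
+apiVersion: ops.kubedb.com/v1alpha1
+kind: MSSQLServerOpsRequest
+metadata:
+  name: ms-hscale-example    # illustrative name
+  namespace: demo
+spec:
+  type: HorizontalScaling
+  databaseRef:
+    name: mssql-ag-cluster   # the MSSQLServer object to scale
+  horizontalScaling:
+    replicas: 3              # desired number of replicas after scaling
+EOF
+```
+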
+In the next doc, we are going to show a step-by-step guide on scaling a MSSQLServer cluster using horizontal scaling.
\ No newline at end of file
diff --git a/docs/guides/mssqlserver/scaling/vertical-scaling/_index.md b/docs/guides/mssqlserver/scaling/vertical-scaling/_index.md
new file mode 100644
index 0000000000..05b1216f53
--- /dev/null
+++ b/docs/guides/mssqlserver/scaling/vertical-scaling/_index.md
@@ -0,0 +1,10 @@
+---
+title: Vertical Scaling
+menu:
+ docs_{{ .version }}:
+ identifier: ms-scaling-vertical
+ name: Vertical Scaling
+ parent: ms-scaling
+ weight: 20
+menu_name: docs_{{ .version }}
+---
diff --git a/docs/guides/mssqlserver/scaling/vertical-scaling/ag_cluster.md b/docs/guides/mssqlserver/scaling/vertical-scaling/ag_cluster.md
new file mode 100644
index 0000000000..e0cec80c92
--- /dev/null
+++ b/docs/guides/mssqlserver/scaling/vertical-scaling/ag_cluster.md
@@ -0,0 +1,454 @@
+---
+title: Vertical Scaling MSSQLServer
+menu:
+ docs_{{ .version }}:
+ identifier: ms-scaling-vertical-ag-cluster
+ name: Availability Group (HA Cluster)
+ parent: ms-scaling-vertical
+ weight: 30
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Vertical Scale SQL Server Availability Group (HA Cluster)
+
+This guide will show you how to use the `KubeDB` Ops Manager to update the resources of a SQL Server Availability Group cluster.
+
+## Before You Begin
+
+- You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Now, install the KubeDB CLI on your workstation and the KubeDB operator in your cluster following the steps [here](/docs/setup/README.md). Make sure to install with the helm command including `--set global.featureGates.MSSQLServer=true` to ensure installation of the MSSQLServer CRD.
+
+- To configure TLS/SSL in `MSSQLServer`, `KubeDB` uses `cert-manager` to issue certificates. So first you have to make sure that the cluster has `cert-manager` installed. To install `cert-manager` in your cluster, follow the steps [here](https://cert-manager.io/docs/installation/kubernetes/).
+
+
+- You should be familiar with the following `KubeDB` concepts:
+ - [MSSQLServer](/docs/guides/mssqlserver/concepts/mssqlserver.md)
+ - [MSSQLServerOpsRequest](/docs/guides/mssqlserver/concepts/opsrequest.md)
+ - [Vertical Scaling Overview](/docs/guides/mssqlserver/scaling/vertical-scaling/overview.md)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+> **Note:** YAML files used in this tutorial are stored in [docs/examples/mssqlserver/scaling/vertical-scaling](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/mssqlserver/scaling/vertical-scaling) directory of [kubedb/doc](https://github.com/kubedb/docs) repository.
+
+### Apply Vertical Scaling on MSSQLServer Availability Group Cluster
+
+Here, we are going to deploy a `MSSQLServer` instance using a supported version by `KubeDB` operator. Then we are going to apply vertical scaling on it.
+
+**Find supported MSSQLServer Version:**
+
+When you have installed `KubeDB`, it has created `MSSQLServerVersion` CR for all supported `MSSQLServer` versions. Let's check the supported MSSQLServer versions,
+
+```bash
+$ kubectl get mssqlserverversion
+NAME VERSION DB_IMAGE DEPRECATED AGE
+2022-cu12 2022 mcr.microsoft.com/mssql/server:2022-CU12-ubuntu-22.04 3d21h
+2022-cu14 2022 mcr.microsoft.com/mssql/server:2022-CU14-ubuntu-22.04 3d21h
+```
+
+Any version above that does not have `DEPRECATED` set to `true` is supported by `KubeDB` for `MSSQLServer`. You can use any non-deprecated version. Here, we are going to create a MSSQLServer using the non-deprecated `MSSQLServer` version `2022-cu12`.
+
+
+At first, we need to create an Issuer/ClusterIssuer which will be used to generate the certificate used for TLS configurations.
+
+#### Create Issuer/ClusterIssuer
+
+Now, we are going to create an example `Issuer` that will be used throughout the duration of this tutorial. Alternatively, you can follow this [cert-manager tutorial](https://cert-manager.io/docs/configuration/ca/) to create your own `Issuer`. By following the below steps, we are going to create our desired issuer,
+
+- Start off by generating our ca-certificates using openssl,
+```bash
+openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ./ca.key -out ./ca.crt -subj "/CN=MSSQLServer/O=kubedb"
+```
+- Create a secret using the certificate files we have just generated,
+```bash
+$ kubectl create secret tls mssqlserver-ca --cert=ca.crt --key=ca.key --namespace=demo
+secret/mssqlserver-ca created
+```
+Now, we are going to create an `Issuer` using the `mssqlserver-ca` secret that contains the ca-certificate we have just created. Below is the YAML of the `Issuer` CR that we are going to create,
+
+```yaml
+apiVersion: cert-manager.io/v1
+kind: Issuer
+metadata:
+ name: mssqlserver-ca-issuer
+ namespace: demo
+spec:
+ ca:
+ secretName: mssqlserver-ca
+```
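+
+The `Issuer` can then be created from the YAML above. Assuming you have saved it locally as `mssqlserver-ca-issuer.yaml` (the file name is only an assumption), the following works:
+
+```bash
+# Create the Issuer from the locally saved YAML
+$ kubectl apply -f mssqlserver-ca-issuer.yaml
+issuer.cert-manager.io/mssqlserver-ca-issuer created
+```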
+
+**Deploy MSSQLServer Availability Group Cluster:**
+
+In this section, we are going to deploy a MSSQLServer Availability Group cluster with 3 replicas. Then, in the next section, we will update the resources of the database servers using vertical scaling.
+Below is the YAML of the `MSSQLServer` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MSSQLServer
+metadata:
+ name: mssql-ag-cluster
+ namespace: demo
+spec:
+ version: "2022-cu12"
+ replicas: 3
+ topology:
+ mode: AvailabilityGroup
+ availabilityGroup:
+ databases:
+ - agdb1
+ - agdb2
+ tls:
+ issuerRef:
+ name: mssqlserver-ca-issuer
+ kind: Issuer
+ apiGroup: "cert-manager.io"
+ clientTLS: false
+ podTemplate:
+ spec:
+ containers:
+ - name: mssql
+ env:
+ - name: ACCEPT_EULA
+ value: "Y"
+ - name: MSSQL_PID
+ value: Evaluation # Change it
+ resources:
+ requests:
+ cpu: "500m"
+ memory: "1.5Gi"
+ limits:
+ cpu: 1
+ memory: "2Gi"
+ storageType: Durable
+ storage:
+ storageClassName: "standard"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
+ deletionPolicy: WipeOut
+```
+
+Let's create the `MSSQLServer` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mssqlserver/scaling/vertical-scaling/mssql-ag-cluster.yaml
+mssqlserver.kubedb.com/mssql-ag-cluster created
+```
+
+
+**Check mssqlserver Ready to Scale:**
+
+`KubeDB` watches for `MSSQLServer` objects using the Kubernetes API. When a `MSSQLServer` object is created, `KubeDB` creates a new PetSet, Services, Secrets, etc.
+Now, watch the `MSSQLServer` object until it reaches the `Ready` state, and also watch the `PetSet` and its pods until they are created and reach the `Running` state,
+
+
+```bash
+$ watch kubectl get ms,petset,pods -n demo
+Every 2.0s: kubectl get ms,petset,pods -n demo
+
+NAME VERSION STATUS AGE
+mssqlserver.kubedb.com/mssql-ag-cluster 2022-cu12 Ready 4m40s
+
+NAME AGE
+petset.apps.k8s.appscode.com/mssql-ag-cluster 3m57s
+
+NAME READY STATUS RESTARTS AGE
+pod/mssql-ag-cluster-0 2/2 Running 0 3m57s
+pod/mssql-ag-cluster-1 2/2 Running 0 3m51s
+pod/mssql-ag-cluster-2 2/2 Running 0 3m46s
+```
+
+Let's check each pod's `mssql` container resources. The `mssql` container is the first container, so its index is 0.
+
+```bash
+$ kubectl get pod -n demo mssql-ag-cluster-0 -o json | jq '.spec.containers[0].resources'
+{
+ "limits": {
+ "cpu": "1",
+ "memory": "2Gi"
+ },
+ "requests": {
+ "cpu": "500m",
+ "memory": "1536Mi"
+ }
+}
+$ kubectl get pod -n demo mssql-ag-cluster-1 -o json | jq '.spec.containers[0].resources'
+{
+ "limits": {
+ "cpu": "1",
+ "memory": "2Gi"
+ },
+ "requests": {
+ "cpu": "500m",
+ "memory": "1536Mi"
+ }
+}
+$ kubectl get pod -n demo mssql-ag-cluster-2 -o json | jq '.spec.containers[0].resources'
+{
+ "limits": {
+ "cpu": "1",
+ "memory": "2Gi"
+ },
+ "requests": {
+ "cpu": "500m",
+ "memory": "1536Mi"
+ }
+}
+```
+
+Now, we are ready to apply vertical scaling to this MSSQLServer database.
+
+#### Vertical Scaling
+
+Here, we are going to update the resources of the MSSQLServer to match the desired resources after scaling.
+
+**Create MSSQLServerOpsRequest:**
+
+In order to update the resources of your database, you have to create a `MSSQLServerOpsRequest` CR with your desired resources for scaling. Below is the YAML of the `MSSQLServerOpsRequest` CR that we are going to create,
+
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: MSSQLServerOpsRequest
+metadata:
+ name: mops-vscale-ag-cluster
+ namespace: demo
+spec:
+ type: VerticalScaling
+ databaseRef:
+ name: mssql-ag-cluster
+ verticalScaling:
+ mssqlserver:
+ resources:
+ requests:
+ memory: "1.7Gi"
+ cpu: "700m"
+ limits:
+ cpu: 2
+ memory: "4Gi"
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing operation on `mssql-ag-cluster` database.
+- `spec.type` specifies that we are performing `VerticalScaling` on our database.
+- `spec.verticalScaling.mssqlserver` specifies the expected `mssql` container resources after scaling.
+
+Let's create the `MSSQLServerOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mssqlserver/scaling/vertical-scaling/mops-vscale-ag-cluster.yaml
+mssqlserveropsrequest.ops.kubedb.com/mops-vscale-ag-cluster created
+```
+
+**Verify MSSQLServer resources updated successfully:**
+
+If everything goes well, the `KubeDB` Ops Manager will update the resources of the PetSet's `Pod` containers. After the scaling process completes successfully, the `KubeDB` Ops Manager updates the resources of the `MSSQLServer` object.
+
+First, we will wait for `MSSQLServerOpsRequest` to be successful. Run the following command to watch `MSSQLServerOpsRequest` CR,
+
+```bash
+$ watch kubectl get mssqlserveropsrequest -n demo mops-vscale-ag-cluster
+Every 2.0s: kubectl get mssqlserveropsrequest -n demo mops-vscale-ag-cluster
+
+NAME TYPE STATUS AGE
+mops-vscale-ag-cluster VerticalScaling Successful 7m17s
+```
+
+We can see from the above output that the `MSSQLServerOpsRequest` has succeeded. If we describe the `MSSQLServerOpsRequest`, we will see that the mssqlserver resources are updated.
+
+```bash
+$ kubectl describe mssqlserveropsrequest -n demo mops-vscale-ag-cluster
+Name: mops-vscale-ag-cluster
+Namespace: demo
+Labels:            <none>
+Annotations:       <none>
+API Version: ops.kubedb.com/v1alpha1
+Kind: MSSQLServerOpsRequest
+Metadata:
+ Creation Timestamp: 2024-10-24T14:13:05Z
+ Generation: 1
+ Resource Version: 747632
+ UID: ed3c5cbc-e74e-46ba-b243-143a6007ac36
+Spec:
+ Apply: IfReady
+ Database Ref:
+ Name: mssql-ag-cluster
+ Type: VerticalScaling
+ Vertical Scaling:
+ Mssqlserver:
+ Resources:
+ Limits:
+ Cpu: 2
+ Memory: 4Gi
+ Requests:
+ Cpu: 700m
+ Memory: 1.7Gi
+Status:
+ Conditions:
+ Last Transition Time: 2024-10-24T14:13:05Z
+ Message: MSSQLServer ops-request has started to vertically scaling the MSSQLServer nodes
+ Observed Generation: 1
+ Reason: VerticalScaling
+ Status: True
+ Type: VerticalScaling
+ Last Transition Time: 2024-10-24T14:13:08Z
+ Message: Successfully paused database
+ Observed Generation: 1
+ Reason: DatabasePauseSucceeded
+ Status: True
+ Type: DatabasePauseSucceeded
+ Last Transition Time: 2024-10-24T14:13:08Z
+ Message: Successfully updated PetSets Resources
+ Observed Generation: 1
+ Reason: UpdatePetSets
+ Status: True
+ Type: UpdatePetSets
+ Last Transition Time: 2024-10-24T14:13:13Z
+ Message: get pod; ConditionStatus:True; PodName:mssql-ag-cluster-0
+ Observed Generation: 1
+ Status: True
+ Type: GetPod--mssql-ag-cluster-0
+ Last Transition Time: 2024-10-24T14:13:13Z
+ Message: evict pod; ConditionStatus:True; PodName:mssql-ag-cluster-0
+ Observed Generation: 1
+ Status: True
+ Type: EvictPod--mssql-ag-cluster-0
+ Last Transition Time: 2024-10-24T14:13:48Z
+ Message: check pod running; ConditionStatus:True; PodName:mssql-ag-cluster-0
+ Observed Generation: 1
+ Status: True
+ Type: CheckPodRunning--mssql-ag-cluster-0
+ Last Transition Time: 2024-10-24T14:13:53Z
+ Message: get pod; ConditionStatus:True; PodName:mssql-ag-cluster-1
+ Observed Generation: 1
+ Status: True
+ Type: GetPod--mssql-ag-cluster-1
+ Last Transition Time: 2024-10-24T14:13:53Z
+ Message: evict pod; ConditionStatus:True; PodName:mssql-ag-cluster-1
+ Observed Generation: 1
+ Status: True
+ Type: EvictPod--mssql-ag-cluster-1
+ Last Transition Time: 2024-10-24T14:14:28Z
+ Message: check pod running; ConditionStatus:True; PodName:mssql-ag-cluster-1
+ Observed Generation: 1
+ Status: True
+ Type: CheckPodRunning--mssql-ag-cluster-1
+ Last Transition Time: 2024-10-24T14:14:33Z
+ Message: get pod; ConditionStatus:True; PodName:mssql-ag-cluster-2
+ Observed Generation: 1
+ Status: True
+ Type: GetPod--mssql-ag-cluster-2
+ Last Transition Time: 2024-10-24T14:14:33Z
+ Message: evict pod; ConditionStatus:True; PodName:mssql-ag-cluster-2
+ Observed Generation: 1
+ Status: True
+ Type: EvictPod--mssql-ag-cluster-2
+ Last Transition Time: 2024-10-24T14:15:08Z
+ Message: check pod running; ConditionStatus:True; PodName:mssql-ag-cluster-2
+ Observed Generation: 1
+ Status: True
+ Type: CheckPodRunning--mssql-ag-cluster-2
+ Last Transition Time: 2024-10-24T14:15:13Z
+ Message: Successfully Restarted Pods With Resources
+ Observed Generation: 1
+ Reason: RestartPods
+ Status: True
+ Type: RestartPods
+ Last Transition Time: 2024-10-24T14:15:13Z
+ Message: Successfully completed the VerticalScaling for MSSQLServer
+ Observed Generation: 1
+ Reason: Successful
+ Status: True
+ Type: Successful
+ Observed Generation: 1
+ Phase: Successful
+Events:
+ Type Reason Age From Message
+ ---- ------ ---- ---- -------
+ Normal Starting 7m46s KubeDB Ops-manager Operator Start processing for MSSQLServerOpsRequest: demo/mops-vscale-ag-cluster
+ Normal Starting 7m46s KubeDB Ops-manager Operator Pausing MSSQLServer database: demo/mssql-ag-cluster
+ Normal Successful 7m46s KubeDB Ops-manager Operator Successfully paused MSSQLServer database: demo/mssql-ag-cluster for MSSQLServerOpsRequest: mops-vscale-ag-cluster
+ Normal UpdatePetSets 7m43s KubeDB Ops-manager Operator Successfully updated PetSets Resources
+ Warning get pod; ConditionStatus:True; PodName:mssql-ag-cluster-0 7m38s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:mssql-ag-cluster-0
+ Warning evict pod; ConditionStatus:True; PodName:mssql-ag-cluster-0 7m38s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:mssql-ag-cluster-0
+ Warning check pod running; ConditionStatus:False; PodName:mssql-ag-cluster-0 7m33s KubeDB Ops-manager Operator check pod running; ConditionStatus:False; PodName:mssql-ag-cluster-0
+ Warning check pod running; ConditionStatus:True; PodName:mssql-ag-cluster-0 7m3s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:mssql-ag-cluster-0
+ Warning get pod; ConditionStatus:True; PodName:mssql-ag-cluster-1 6m58s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:mssql-ag-cluster-1
+ Warning evict pod; ConditionStatus:True; PodName:mssql-ag-cluster-1 6m58s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:mssql-ag-cluster-1
+ Warning check pod running; ConditionStatus:False; PodName:mssql-ag-cluster-1 6m53s KubeDB Ops-manager Operator check pod running; ConditionStatus:False; PodName:mssql-ag-cluster-1
+ Warning check pod running; ConditionStatus:True; PodName:mssql-ag-cluster-1 6m23s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:mssql-ag-cluster-1
+ Warning get pod; ConditionStatus:True; PodName:mssql-ag-cluster-2 6m18s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:mssql-ag-cluster-2
+ Warning evict pod; ConditionStatus:True; PodName:mssql-ag-cluster-2 6m18s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:mssql-ag-cluster-2
+ Warning check pod running; ConditionStatus:False; PodName:mssql-ag-cluster-2 6m13s KubeDB Ops-manager Operator check pod running; ConditionStatus:False; PodName:mssql-ag-cluster-2
+ Warning check pod running; ConditionStatus:True; PodName:mssql-ag-cluster-2 5m43s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:mssql-ag-cluster-2
+ Normal RestartPods 5m38s KubeDB Ops-manager Operator Successfully Restarted Pods With Resources
+ Normal Starting 5m38s KubeDB Ops-manager Operator Resuming MSSQLServer database: demo/mssql-ag-cluster
+ Normal Successful 5m38s KubeDB Ops-manager Operator Successfully resumed MSSQLServer database: demo/mssql-ag-cluster for MSSQLServerOpsRequest: mops-vscale-ag-cluster
+```
+
+Now, we are going to verify whether the resources of the MSSQLServer pods have been updated to meet the desired state. Let's check,
+
+```bash
+$ kubectl get pod -n demo mssql-ag-cluster-0 -o json | jq '.spec.containers[0].resources'
+{
+ "limits": {
+ "cpu": "2",
+ "memory": "4Gi"
+ },
+ "requests": {
+ "cpu": "700m",
+ "memory": "1825361100800m"
+ }
+}
+$ kubectl get pod -n demo mssql-ag-cluster-1 -o json | jq '.spec.containers[0].resources'
+{
+ "limits": {
+ "cpu": "2",
+ "memory": "4Gi"
+ },
+ "requests": {
+ "cpu": "700m",
+ "memory": "1825361100800m"
+ }
+}
+$ kubectl get pod -n demo mssql-ag-cluster-2 -o json | jq '.spec.containers[0].resources'
+{
+ "limits": {
+ "cpu": "2",
+ "memory": "4Gi"
+ },
+ "requests": {
+ "cpu": "700m",
+ "memory": "1825361100800m"
+ }
+}
+```
+
+The above output verifies that we have successfully scaled up the resources of the MSSQLServer.
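+
+You can also confirm that the `MSSQLServer` object itself now carries the updated resources. The jsonpath below assumes `mssql` is the first container in the pod template, as in the CR above:
+
+```bash
+# The MSSQLServer CR should reflect the new requests/limits for the mssql container
+$ kubectl get ms -n demo mssql-ag-cluster -o jsonpath='{.spec.podTemplate.spec.containers[0].resources}'
+```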
+
+## Cleaning Up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete mssqlserver -n demo mssql-ag-cluster
+kubectl delete mssqlserveropsrequest -n demo mops-vscale-ag-cluster
+kubectl delete issuer -n demo mssqlserver-ca-issuer
+kubectl delete secret -n demo mssqlserver-ca
+kubectl delete ns demo
+```
+
+
+
diff --git a/docs/guides/mssqlserver/scaling/vertical-scaling/overview.md b/docs/guides/mssqlserver/scaling/vertical-scaling/overview.md
new file mode 100644
index 0000000000..92635094d9
--- /dev/null
+++ b/docs/guides/mssqlserver/scaling/vertical-scaling/overview.md
@@ -0,0 +1,54 @@
+---
+title: Microsoft SQL Server Vertical Scaling Overview
+menu:
+ docs_{{ .version }}:
+ identifier: ms-scaling-vertical-overview
+ name: Overview
+ parent: ms-scaling-vertical
+ weight: 10
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Vertical Scaling MSSQLServer
+
+This guide will give you an overview of how the KubeDB Ops Manager updates the resources (for example memory, CPU, etc.) of the `MSSQLServer`.
+
+## Before You Begin
+
+- You should be familiar with the following `KubeDB` concepts:
+ - [MSSQLServer](/docs/guides/mssqlserver/concepts/mssqlserver.md)
+ - [MSSQLServerOpsRequest](/docs/guides/mssqlserver/concepts/opsrequest.md)
+
+## How Vertical Scaling Process Works
+
+The following diagram shows how the `KubeDB` Ops Manager updates the resources of the `MSSQLServer`. Open the image in a new tab to see the enlarged version.
+
+
+
+The vertical scaling process consists of the following steps:
+
+1. At first, a user creates a `MSSQLServer` CR.
+
+2. `KubeDB` provisioner operator watches for the `MSSQLServer` CR.
+
+3. When the operator finds a `MSSQLServer` CR, it creates a `PetSet` and related necessary stuff like secret, service, etc.
+
+4. Then, in order to update the resources (for example `CPU`, `Memory`, etc.) of the `MSSQLServer` cluster, the user creates a `MSSQLServerOpsRequest` CR with the desired information (a minimal example is sketched after this list).
+
+5. `KubeDB` Ops Manager watches for `MSSQLServerOpsRequest`.
+
+6. When it finds one, it halts the `MSSQLServer` object so that the `KubeDB` provisioner operator doesn't perform any operation on the `MSSQLServer` during the scaling process.
+
+7. Then the KubeDB Ops-manager operator will update the resources of the PetSet's Pods to reach the desired state.
+
+8. After successful updating of the resources of the PetSet's Pods, the `KubeDB` Ops Manager updates the `MSSQLServer` object resources to reflect the updated state.
+
+9. After successful updating of the `MSSQLServer` resources, the `KubeDB` Ops Manager resumes the `MSSQLServer` object so that the `KubeDB` Provisioner operator resumes its usual operations.
+
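+For reference, a minimal `MSSQLServerOpsRequest` for the vertical scaling described in step 4 can be applied as shown below. This is only a sketch; the resource name and resource values are illustrative, and the full walkthrough in the next doc shows the complete flow.
+
+```bash
+kubectl apply -f - <<EOF
+apiVersion: ops.kubedb.com/v1alpha1
+kind: MSSQLServerOpsRequest
+metadata:
+  name: ms-vscale-example    # illustrative name
+  namespace: demo
+spec:
+  type: VerticalScaling
+  databaseRef:
+    name: mssql-ag-cluster   # the MSSQLServer object to scale
+  verticalScaling:
+    mssqlserver:
+      resources:
+        requests:
+          cpu: "700m"
+          memory: "1.7Gi"
+        limits:
+          cpu: 2
+          memory: "4Gi"
+EOF
+```
+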
+In the next doc, we are going to show a step-by-step guide on updating the resources of a MSSQLServer database using the vertical scaling operation.
diff --git a/docs/guides/mssqlserver/scaling/vertical-scaling/standalone.md b/docs/guides/mssqlserver/scaling/vertical-scaling/standalone.md
new file mode 100644
index 0000000000..fd48803af2
--- /dev/null
+++ b/docs/guides/mssqlserver/scaling/vertical-scaling/standalone.md
@@ -0,0 +1,361 @@
+---
+title: Vertical Scaling MSSQLServer
+menu:
+ docs_{{ .version }}:
+ identifier: ms-scaling-vertical-standalone
+ name: Standalone
+ parent: ms-scaling-vertical
+ weight: 20
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Vertical Scale MSSQLServer Instances
+
+This guide will show you how to use the `KubeDB` Ops Manager to update the resources of a MSSQLServer instance.
+
+## Before You Begin
+
+- You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Now, install the KubeDB CLI on your workstation and the KubeDB operator in your cluster following the steps [here](/docs/setup/README.md). Make sure to install with the helm command including `--set global.featureGates.MSSQLServer=true` to ensure installation of the MSSQLServer CRD.
+
+- To configure TLS/SSL in `MSSQLServer`, `KubeDB` uses `cert-manager` to issue certificates. So first you have to make sure that the cluster has `cert-manager` installed. To install `cert-manager` in your cluster, follow the steps [here](https://cert-manager.io/docs/installation/kubernetes/).
+
+
+- You should be familiar with the following `KubeDB` concepts:
+ - [MSSQLServer](/docs/guides/mssqlserver/concepts/mssqlserver.md)
+ - [MSSQLServerOpsRequest](/docs/guides/mssqlserver/concepts/opsrequest.md)
+ - [Vertical Scaling Overview](/docs/guides/mssqlserver/scaling/vertical-scaling/overview.md)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+> **Note:** YAML files used in this tutorial are stored in [docs/examples/mssqlserver/scaling/vertical-scaling](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/mssqlserver/scaling/vertical-scaling) directory of [kubedb/doc](https://github.com/kubedb/docs) repository.
+
+### Apply Vertical Scaling on MSSQLServer
+
+Here, we are going to deploy a `MSSQLServer` instance using a supported version by `KubeDB` operator. Then we are going to apply vertical scaling on it.
+
+**Find supported MSSQLServer Version:**
+
+When you have installed `KubeDB`, it has created `MSSQLServerVersion` CR for all supported `MSSQLServer` versions. Let's check the supported MSSQLServer versions,
+
+```bash
+$ kubectl get mssqlserverversion
+NAME VERSION DB_IMAGE DEPRECATED AGE
+2022-cu12 2022 mcr.microsoft.com/mssql/server:2022-CU12-ubuntu-22.04 3d21h
+2022-cu14 2022 mcr.microsoft.com/mssql/server:2022-CU14-ubuntu-22.04 3d21h
+```
+
+Any version above that does not have `DEPRECATED` set to `true` is supported by `KubeDB` for `MSSQLServer`. You can use any non-deprecated version. Here, we are going to create a MSSQLServer using the non-deprecated `MSSQLServer` version `2022-cu12`.
+
+
+At first, we need to create an Issuer/ClusterIssuer which will be used to generate the certificate used for TLS configurations.
+
+#### Create Issuer/ClusterIssuer
+
+Now, we are going to create an example `Issuer` that will be used throughout the duration of this tutorial. Alternatively, you can follow this [cert-manager tutorial](https://cert-manager.io/docs/configuration/ca/) to create your own `Issuer`. By following the below steps, we are going to create our desired issuer,
+
+- Start off by generating our ca-certificates using openssl,
+```bash
+openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ./ca.key -out ./ca.crt -subj "/CN=MSSQLServer/O=kubedb"
+```
+- Create a secret using the certificate files we have just generated,
+```bash
+$ kubectl create secret tls mssqlserver-ca --cert=ca.crt --key=ca.key --namespace=demo
+secret/mssqlserver-ca created
+```
+Now, we are going to create an `Issuer` using the `mssqlserver-ca` secret that contains the ca-certificate we have just created. Below is the YAML of the `Issuer` CR that we are going to create,
+
+```yaml
+apiVersion: cert-manager.io/v1
+kind: Issuer
+metadata:
+ name: mssqlserver-ca-issuer
+ namespace: demo
+spec:
+ ca:
+ secretName: mssqlserver-ca
+```
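+
+The `Issuer` can then be created from the YAML above. Assuming you have saved it locally as `mssqlserver-ca-issuer.yaml` (the file name is only an assumption), the following works:
+
+```bash
+# Create the Issuer from the locally saved YAML
+$ kubectl apply -f mssqlserver-ca-issuer.yaml
+issuer.cert-manager.io/mssqlserver-ca-issuer created
+```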
+
+**Deploy MSSQLServer:**
+
+In this section, we are going to deploy a MSSQLServer instance. Then, in the next section, we will update the resources of the database server using vertical scaling.
+Below is the YAML of the `MSSQLServer` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MSSQLServer
+metadata:
+ name: mssql-standalone
+ namespace: demo
+spec:
+ version: "2022-cu12"
+ replicas: 1
+ storageType: Durable
+ tls:
+ issuerRef:
+ name: mssqlserver-ca-issuer
+ kind: Issuer
+ apiGroup: "cert-manager.io"
+ clientTLS: false
+ podTemplate:
+ spec:
+ containers:
+ - name: mssql
+ env:
+ - name: ACCEPT_EULA
+ value: "Y"
+ - name: MSSQL_PID
+ value: Evaluation # Change it
+ storage:
+ storageClassName: "standard"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
+ deletionPolicy: WipeOut
+```
+
+Let's create the `MSSQLServer` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mssqlserver/scaling/vertical-scaling/mssql-standalone.yaml
+mssqlserver.kubedb.com/mssql-standalone created
+```
+
+
+**Check mssqlserver Ready to Scale:**
+
+`KubeDB` watches for `MSSQLServer` objects using the Kubernetes API. When a `MSSQLServer` object is created, `KubeDB` creates a new PetSet, Services, Secrets, etc.
+Now, watch the `MSSQLServer` object until it reaches the `Ready` state, and also watch the `PetSet` and its pod until they are created and reach the `Running` state,
+
+
+
+
+
+```bash
+$ watch kubectl get ms,petset,pods -n demo
+Every 2.0s: kubectl get ms,petset,pods -n demo
+
+NAME VERSION STATUS AGE
+mssqlserver.kubedb.com/mssql-standalone 2022-cu12 Ready 4m7s
+
+NAME AGE
+petset.apps.k8s.appscode.com/mssql-standalone 3m33s
+
+NAME READY STATUS RESTARTS AGE
+pod/mssql-standalone-0 1/1 Running 0 3m33s
+```
+
+Let's check the `mssql-standalone-0` pod's `mssql` container resources. The `mssql` container is the first container, so its index is 0.
+
+```bash
+$ kubectl get pod -n demo mssql-standalone-0 -o json | jq '.spec.containers[0].resources'
+{
+ "limits": {
+ "memory": "4Gi"
+ },
+ "requests": {
+ "cpu": "500m",
+ "memory": "4Gi"
+ }
+}
+```
+
+Now, we are ready to apply vertical scaling to this MSSQLServer database.
+
+#### Vertical Scaling
+
+Here, we are going to update the resources of the MSSQLServer to match the desired resources after scaling.
+
+**Create MSSQLServerOpsRequest:**
+
+In order to update the resources of your database, you have to create a `MSSQLServerOpsRequest` CR with your desired resources for scaling. Below is the YAML of the `MSSQLServerOpsRequest` CR that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: MSSQLServerOpsRequest
+metadata:
+ name: mops-vscale-standalone
+ namespace: demo
+spec:
+ type: VerticalScaling
+ databaseRef:
+ name: mssql-standalone
+ verticalScaling:
+ mssqlserver:
+ resources:
+ requests:
+ memory: "5Gi"
+ cpu: "1000m"
+        limits:
+          cpu: 2
+          memory: "5Gi"
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing operation on `mssql-standalone` database.
+- `spec.type` specifies that we are performing `VerticalScaling` on our database.
+- `spec.verticalScaling.mssqlserver` specifies the expected `mssql` container resources after scaling.
+
+Let's create the `MSSQLServerOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mssqlserver/scaling/vertical-scaling/mops-vscale-standalone.yaml
+mssqlserveropsrequest.ops.kubedb.com/mops-vscale-standalone created
+```
+
+**Verify MSSQLServer resources updated successfully:**
+
+If everything goes well, the `KubeDB` Ops-manager operator will update the resources of the PetSet's `Pod` containers. After the scaling process completes successfully, it also updates the resources in the `MSSQLServer` object.
+
+First, we will wait for `MSSQLServerOpsRequest` to be successful. Run the following command to watch `MSSQLServerOpsRequest` CR,
+
+```bash
+$ watch kubectl get mssqlserveropsrequest -n demo mops-vscale-standalone
+Every 2.0s: kubectl get mssqlserveropsrequest -n demo mops-vscale-standalone
+
+NAME TYPE STATUS AGE
+mops-vscale-standalone VerticalScaling Successful 3m22s
+```
+
+We can see from the above output that the `MSSQLServerOpsRequest` has succeeded. If we describe the `MSSQLServerOpsRequest`, we will see that the mssqlserver resources are updated.
+
+```bash
+$ kubectl describe mssqlserveropsrequest -n demo mops-vscale-standalone
+Name: mops-vscale-standalone
+Namespace: demo
+Labels:
+Annotations:
+API Version: ops.kubedb.com/v1alpha1
+Kind: MSSQLServerOpsRequest
+Metadata:
+ Creation Timestamp: 2024-10-24T13:43:57Z
+ Generation: 1
+ Resource Version: 744508
+ UID: 68bcc122-2ad7-4ae0-ab72-1a3e01fd6f40
+Spec:
+ Apply: IfReady
+ Database Ref:
+ Name: mssql-standalone
+ Type: VerticalScaling
+ Vertical Scaling:
+ Mssqlserver:
+ Resources:
+ Limits:
+ Cpu: 2
+ Memory: 5Gi
+ Requests:
+ Cpu: 1
+ Memory: 5Gi
+Status:
+ Conditions:
+ Last Transition Time: 2024-10-24T13:43:57Z
+ Message: MSSQLServer ops-request has started to vertically scaling the MSSQLServer nodes
+ Observed Generation: 1
+ Reason: VerticalScaling
+ Status: True
+ Type: VerticalScaling
+ Last Transition Time: 2024-10-24T13:44:24Z
+ Message: Successfully paused database
+ Observed Generation: 1
+ Reason: DatabasePauseSucceeded
+ Status: True
+ Type: DatabasePauseSucceeded
+ Last Transition Time: 2024-10-24T13:44:24Z
+ Message: Successfully updated PetSets Resources
+ Observed Generation: 1
+ Reason: UpdatePetSets
+ Status: True
+ Type: UpdatePetSets
+ Last Transition Time: 2024-10-24T13:44:29Z
+ Message: get pod; ConditionStatus:True; PodName:mssql-standalone-0
+ Observed Generation: 1
+ Status: True
+ Type: GetPod--mssql-standalone-0
+ Last Transition Time: 2024-10-24T13:44:29Z
+ Message: evict pod; ConditionStatus:True; PodName:mssql-standalone-0
+ Observed Generation: 1
+ Status: True
+ Type: EvictPod--mssql-standalone-0
+ Last Transition Time: 2024-10-24T13:45:04Z
+ Message: check pod running; ConditionStatus:True; PodName:mssql-standalone-0
+ Observed Generation: 1
+ Status: True
+ Type: CheckPodRunning--mssql-standalone-0
+ Last Transition Time: 2024-10-24T13:45:09Z
+ Message: Successfully Restarted Pods With Resources
+ Observed Generation: 1
+ Reason: RestartPods
+ Status: True
+ Type: RestartPods
+ Last Transition Time: 2024-10-24T13:45:09Z
+ Message: Successfully completed the VerticalScaling for MSSQLServer
+ Observed Generation: 1
+ Reason: Successful
+ Status: True
+ Type: Successful
+ Observed Generation: 1
+ Phase: Successful
+Events:
+ Type Reason Age From Message
+ ---- ------ ---- ---- -------
+ Normal Starting 3m55s KubeDB Ops-manager Operator Start processing for MSSQLServerOpsRequest: demo/mops-vscale-standalone
+ Normal Starting 3m55s KubeDB Ops-manager Operator Pausing MSSQLServer database: demo/mssql-standalone
+ Normal Successful 3m55s KubeDB Ops-manager Operator Successfully paused MSSQLServer database: demo/mssql-standalone for MSSQLServerOpsRequest: mops-vscale-standalone
+ Normal UpdatePetSets 3m28s KubeDB Ops-manager Operator Successfully updated PetSets Resources
+ Warning get pod; ConditionStatus:True; PodName:mssql-standalone-0 3m23s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:mssql-standalone-0
+ Warning evict pod; ConditionStatus:True; PodName:mssql-standalone-0 3m23s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:mssql-standalone-0
+ Warning check pod running; ConditionStatus:False; PodName:mssql-standalone-0 3m18s KubeDB Ops-manager Operator check pod running; ConditionStatus:False; PodName:mssql-standalone-0
+ Warning check pod running; ConditionStatus:True; PodName:mssql-standalone-0 2m48s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:mssql-standalone-0
+ Normal RestartPods 2m43s KubeDB Ops-manager Operator Successfully Restarted Pods With Resources
+ Normal Starting 2m43s KubeDB Ops-manager Operator Resuming MSSQLServer database: demo/mssql-standalone
+ Normal Successful 2m43s KubeDB Ops-manager Operator Successfully resumed MSSQLServer database: demo/mssql-standalone for MSSQLServerOpsRequest: mops-vscale-standalone
+```
+
+Now, we are going to verify whether the resources of the MSSQLServer instance have been updated to match the desired state. Let's check,
+
+```bash
+$ kubectl get pod -n demo mssql-standalone-0 -o json | jq '.spec.containers[0].resources'
+{
+ "limits": {
+ "memory": "5Gi"
+ },
+ "requests": {
+ "cpu": "1",
+ "memory": "5Gi"
+ }
+}
+```
+
+The above output verifies that we have successfully scaled up the resources of the MSSQLServer.
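+
+Since the Ops-manager operator also records the new resources in the `MSSQLServer` object itself, a quick cross-check against the CR can be done as well. This is a sketch; the jq path assumes the resources live under the pod template of the CR spec, matching the manifest we applied earlier.
+
+```bash
+# The values reported here should match the scaled pod resources shown above
+$ kubectl get mssqlserver -n demo mssql-standalone -o json \
+    | jq '.spec.podTemplate.spec.containers[] | select(.name == "mssql") | .resources'
+```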
+
+## Cleaning Up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete mssqlserver -n demo mssql-standalone
+kubectl delete mssqlserveropsrequest -n demo mops-vscale-standalone
+kubectl delete issuer -n demo mssqlserver-ca-issuer
+kubectl delete secret -n demo mssqlserver-ca
+kubectl delete ns demo
+```
+
+
+## Next Steps
+
+- Detail concepts of [MSSQLServer object](/docs/guides/mssqlserver/concepts/mssqlserver.md).
+- [Backup and Restore](/docs/guides/mssqlserver/backup/overview/index.md) MSSQLServer databases using KubeStash.
+
+
diff --git a/docs/guides/mssqlserver/tls/ag_cluster.md b/docs/guides/mssqlserver/tls/ag_cluster.md
index f419b5ef49..394e42dbac 100644
--- a/docs/guides/mssqlserver/tls/ag_cluster.md
+++ b/docs/guides/mssqlserver/tls/ag_cluster.md
@@ -118,18 +118,21 @@ spec:
databases:
- agdb1
- agdb2
- internalAuth:
- endpointCert:
- issuerRef:
- apiGroup: cert-manager.io
- name: mssqlserver-ca-issuer
- kind: Issuer
tls:
issuerRef:
name: mssqlserver-ca-issuer
kind: Issuer
apiGroup: "cert-manager.io"
clientTLS: true
+ podTemplate:
+ spec:
+ containers:
+ - name: mssql
+ env:
+ - name: ACCEPT_EULA
+ value: "Y"
+ - name: MSSQL_PID
+ value: Evaluation # Change it
storageType: Durable
storage:
storageClassName: "standard"
@@ -204,6 +207,7 @@ Ng1DaJSNjZkgXXFX
```bash
$ kubectl exec -it -n demo mssql-ag-tls-0 -c mssql -- bash
+mssql@mssql-ag-tls-0:/$ /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P Ng1DaJSNjZkgXXFX -N
1> select name from sys.databases
2> go
name
diff --git a/docs/guides/mssqlserver/tls/overview.md b/docs/guides/mssqlserver/tls/overview.md
index d0e020060e..8518c9b4d6 100644
--- a/docs/guides/mssqlserver/tls/overview.md
+++ b/docs/guides/mssqlserver/tls/overview.md
@@ -42,7 +42,7 @@ Read about the fields in details from [MSSQLServer Concepts](/docs/guides/mssqls
The following figure shows how `KubeDB` used to configure TLS/SSL in MSSQLServer. Open the image in a new tab to see the enlarged version.
diff --git a/docs/guides/mssqlserver/tls/standalone.md b/docs/guides/mssqlserver/tls/standalone.md
index 234364d9ec..536c15f37e 100644
--- a/docs/guides/mssqlserver/tls/standalone.md
+++ b/docs/guides/mssqlserver/tls/standalone.md
@@ -120,6 +120,15 @@ spec:
kind: Issuer
apiGroup: "cert-manager.io"
clientTLS: true
+ podTemplate:
+ spec:
+ containers:
+ - name: mssql
+ env:
+ - name: ACCEPT_EULA
+ value: "Y"
+ - name: MSSQL_PID
+ value: Evaluation # Change it
storage:
storageClassName: "standard"
accessModes:
diff --git a/docs/guides/mssqlserver/volume-expansion/_index.md b/docs/guides/mssqlserver/volume-expansion/_index.md
new file mode 100644
index 0000000000..899484bfd1
--- /dev/null
+++ b/docs/guides/mssqlserver/volume-expansion/_index.md
@@ -0,0 +1,10 @@
+---
+title: Volume Expansion
+menu:
+ docs_{{ .version }}:
+ identifier: ms-volume-expansion
+ name: Volume Expansion
+ parent: guides-mssqlserver
+ weight: 42
+menu_name: docs_{{ .version }}
+---
diff --git a/docs/guides/mssqlserver/volume-expansion/overview.md b/docs/guides/mssqlserver/volume-expansion/overview.md
new file mode 100644
index 0000000000..7663e353c9
--- /dev/null
+++ b/docs/guides/mssqlserver/volume-expansion/overview.md
@@ -0,0 +1,57 @@
+---
+title: MSSQLServer Volume Expansion Overview
+menu:
+ docs_{{ .version }}:
+ identifier: ms-volume-expansion-overview
+ name: Overview
+ parent: ms-volume-expansion
+ weight: 10
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# MSSQLServer Volume Expansion
+
+This guide will give an overview of how the KubeDB Ops-manager operator expands the volume of `MSSQLServer`.
+
+## Before You Begin
+
+- You should be familiar with the following `KubeDB` concepts:
+ - [MSSQLServer](/docs/guides/mssqlserver/concepts/mssqlserver.md)
+ - [MSSQLServerOpsRequest](/docs/guides/mssqlserver/concepts/opsrequest.md)
+
+## How Volume Expansion Process Works
+
+The following diagram shows how the KubeDB Ops-manager operator expands the volumes of `MSSQLServer` database components. Open the image in a new tab to see the enlarged version.
+
+
+
+The Volume Expansion process consists of the following steps:
+
+1. At first, a user creates a `MSSQLServer` Custom Resource (CR).
+
+2. `KubeDB` Provisioner operator watches the `MSSQLServer` CR.
+
+3. When the operator finds a `MSSQLServer` CR, it creates the required `PetSet` and related resources like secrets, services, etc.
+
+4. The `PetSet` creates Persistent Volumes according to the Volume Claim Template provided in the PetSet configuration. These Persistent Volumes will be expanded by the `KubeDB` Ops-manager operator.
+
+5. Then, in order to expand the volume of the `MSSQLServer` database the user creates a `MSSQLServerOpsRequest` CR with desired information.
+
+6. `KubeDB` Ops-manager operator watches the `MSSQLServerOpsRequest` CR.
+
+7. When it finds a `MSSQLServerOpsRequest` CR, it pauses the `MSSQLServer` object which is referred from the `MSSQLServerOpsRequest`. So, the `KubeDB` Provisioner operator doesn't perform any operations on the `MSSQLServer` object during the volume expansion process.
+
+8. Then the `KubeDB` Ops-manager operator will expand the persistent volume to reach the expected size defined in the `MSSQLServerOpsRequest` CR.
+
+9. After the successful expansion of the volume of the related PetSet Pods, the `KubeDB` Ops-manager operator updates the new volume size in the `MSSQLServer` object to reflect the updated state.
+
+10. After the successful Volume Expansion of the `MSSQLServer`, the `KubeDB` Ops-manager operator resumes the `MSSQLServer` object so that the `KubeDB` Provisioner operator resumes its usual operations.
+
+In the next docs, we are going to show a step-by-step guide on Volume Expansion of various MSSQLServer databases using the `MSSQLServerOpsRequest` CRD.
+
diff --git a/docs/guides/mssqlserver/volume-expansion/volume-expansion.md b/docs/guides/mssqlserver/volume-expansion/volume-expansion.md
new file mode 100644
index 0000000000..4bf7d105e2
--- /dev/null
+++ b/docs/guides/mssqlserver/volume-expansion/volume-expansion.md
@@ -0,0 +1,272 @@
+---
+title: MSSQLServer Volume Expansion
+menu:
+ docs_{{ .version }}:
+ identifier: mssqlserver-volume-expansion-guide
+ name: MSSQLServer Volume Expansion
+ parent: ms-volume-expansion
+ weight: 20
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# MSSQLServer Volume Expansion
+
+This guide will show you how to use `KubeDB` Ops-manager operator to expand the volume of a MSSQLServer.
+
+## Before You Begin
+
+- You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Now, install the KubeDB CLI on your workstation and the KubeDB operator in your cluster following the steps [here](/docs/setup/README.md). Make sure to install with the helm command including `--set global.featureGates.MSSQLServer=true` to ensure the MSSQLServer CRD is installed.
+
+- To configure TLS/SSL in `MSSQLServer`, `KubeDB` uses `cert-manager` to issue certificates. So first you have to make sure that the cluster has `cert-manager` installed. To install `cert-manager` in your cluster, follow the steps [here](https://cert-manager.io/docs/installation/kubernetes/).
+
+- You must have a `StorageClass` that supports volume expansion.
+
+- You should be familiar with the following `KubeDB` concepts:
+ - [MSSQLServer](/docs/guides/mssqlserver/concepts/mssqlserver.md)
+ - [MSSQLServerOpsRequest](/docs/guides/mssqlserver/concepts/opsrequest.md)
+ - [Volume Expansion Overview](/docs/guides/mssqlserver/volume-expansion/overview.md)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+## Expand Volume of MSSQLServer
+
+Here, we are going to deploy a `MSSQLServer` cluster using a version supported by the `KubeDB` operator. Then we are going to apply a `MSSQLServerOpsRequest` to expand its volume. The process of expanding a MSSQLServer `standalone` instance is the same as for a MSSQLServer Availability Group cluster.
+
+### Prepare MSSQLServer
+
+At first, verify that your cluster has a storage class that supports volume expansion. Let's check,
+
+```bash
+$ kubectl get storageclass
+NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
+local-path (default) rancher.io/local-path Delete WaitForFirstConsumer false 2d
+longhorn (default) driver.longhorn.io Delete Immediate true 3m25s
+longhorn-static driver.longhorn.io Delete Immediate true 3m19s
+```
+
+We can see from the output that the `longhorn` storage class has the `ALLOWVOLUMEEXPANSION` field set to true. So, this storage class supports volume expansion. We will use this storage class.
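+
+If you prefer a direct check, the standard `allowVolumeExpansion` field of the StorageClass can be queried like this (a quick sketch; the field is part of the core StorageClass API):
+
+```bash
+# Should print "true" for a storage class that supports volume expansion
+$ kubectl get storageclass longhorn -o jsonpath='{.allowVolumeExpansion}'
+true
+```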
+
+
+Now, we are going to deploy a `MSSQLServer` in `AvailabilityGroup` Mode with version `2022-cu12`.
+
+### Deploy MSSQLServer
+
+First, an issuer needs to be created, even if TLS is not enabled for SQL Server. The issuer will be used to configure the TLS-enabled Wal-G proxy server, which is required for the SQL Server backup and restore operations.
+
+### Create Issuer/ClusterIssuer
+
+Now, we are going to create an example `Issuer` that will be used throughout the duration of this tutorial. Alternatively, you can follow this [cert-manager tutorial](https://cert-manager.io/docs/configuration/ca/) to create your own `Issuer`. By following the below steps, we are going to create our desired issuer,
+
+- Start off by generating our ca-certificates using openssl,
+```bash
+openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ./ca.key -out ./ca.crt -subj "/CN=MSSQLServer/O=kubedb"
+```
+- Create a secret using the certificate files we have just generated,
+```bash
+$ kubectl create secret tls mssqlserver-ca --cert=ca.crt --key=ca.key --namespace=demo
+secret/mssqlserver-ca created
+```
+Now, we are going to create an `Issuer` using the `mssqlserver-ca` secret that contains the ca-certificate we have just created. Below is the YAML of the `Issuer` CR that we are going to create,
+
+```yaml
+apiVersion: cert-manager.io/v1
+kind: Issuer
+metadata:
+ name: mssqlserver-ca-issuer
+ namespace: demo
+spec:
+ ca:
+ secretName: mssqlserver-ca
+```
+
+Let’s create the `Issuer` CR we have shown above,
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mssqlserver/ag-cluster/mssqlserver-ca-issuer.yaml
+issuer.cert-manager.io/mssqlserver-ca-issuer created
+```
+
+In this section, we are going to deploy a MSSQLServer Cluster with 1GB volume. Then, in the next section we will expand its volume to 2GB using `MSSQLServerOpsRequest` CRD. Below is the YAML of the `MSSQLServer` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: MSSQLServer
+metadata:
+ name: mssqlserver-ag-cluster
+ namespace: demo
+spec:
+ version: "2022-cu12"
+ replicas: 3
+ topology:
+ mode: AvailabilityGroup
+ availabilityGroup:
+ databases:
+ - agdb1
+ - agdb2
+ tls:
+ issuerRef:
+ name: mssqlserver-ca-issuer
+ kind: Issuer
+ apiGroup: "cert-manager.io"
+ clientTLS: false
+ podTemplate:
+ spec:
+ containers:
+ - name: mssql
+ env:
+ - name: ACCEPT_EULA
+ value: "Y"
+ - name: MSSQL_PID
+ value: Evaluation # Change it
+ storageType: Durable
+ storage:
+ storageClassName: "longhorn"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
+ deletionPolicy: WipeOut
+```
+
+Let's create the `MSSQLServer` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mssqlserver/volume-expansion/mssqlserver-ag-cluster.yaml
+mssqlserver.kubedb.com/mssqlserver-ag-cluster created
+```
+
+Now, wait until `mssqlserver-ag-cluster` has status `Ready`, i.e.,
+
+```bash
+$ kubectl get mssqlserver -n demo mssqlserver-ag-cluster
+NAME VERSION STATUS AGE
+mssqlserver-ag-cluster 2022-cu12 Ready 5m1s
+```
+
+Let's check the volume size from the PetSet and from the persistent volumes,
+
+```bash
+$ kubectl get petset -n demo mssqlserver-ag-cluster -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
+"1Gi"
+
+$ kubectl get pv -n demo
+NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS VOLUMEATTRIBUTESCLASS REASON AGE
+pvc-059f186a-01a4-441d-85f1-95aef34934be 1Gi RWO Delete Bound demo/data-mssqlserver-ag-cluster-0 longhorn 82s
+pvc-87bea35f-4a55-4aa5-903a-e4da9f548241 1Gi RWO Delete Bound demo/data-mssqlserver-ag-cluster-1 longhorn 52s
+pvc-9d1c3c9c-f928-4fa2-a2e1-becf2ab9c564 1Gi RWO Delete Bound demo/data-mssqlserver-ag-cluster-2 longhorn 35s
+```
+
+You can see that the PetSet has 1Gi storage, and the capacity of each persistent volume is also 1Gi.
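+
+You can also list the PVCs directly; each `data-mssqlserver-ag-cluster-*` claim should request 1Gi at this point (a plain listing, no special flags needed):
+
+```bash
+# The CAPACITY column of each claim should read 1Gi before expansion
+$ kubectl get pvc -n demo
+```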
+
+We are now ready to apply the `MSSQLServerOpsRequest` CR to expand the volume of this database.
+
+### Volume Expansion
+
+Here, we are going to expand the volume of the MSSQLServer cluster.
+
+#### Create MSSQLServerOpsRequest
+
+In order to expand the volume of the database, we have to create a `MSSQLServerOpsRequest` CR with our desired volume size. Below is the YAML of the `MSSQLServerOpsRequest` CR that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: MSSQLServerOpsRequest
+metadata:
+ name: mops-volume-exp-ag-cluster
+ namespace: demo
+spec:
+ type: VolumeExpansion
+ databaseRef:
+ name: mssqlserver-ag-cluster
+ volumeExpansion:
+ mode: "Offline" # Online
+ mssqlserver: 2Gi
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing volume expansion operation on `mssqlserver-ag-cluster` database.
+- `spec.type` specifies that we are performing `VolumeExpansion` on our database.
+- `spec.volumeExpansion.mssqlserver` specifies the desired volume size.
+- `spec.volumeExpansion.mode` specifies the desired volume expansion mode (`Online` or `Offline`). Storageclass `longhorn` supports `Offline` volume expansion.
+
+> **Note:** If the StorageClass you are using supports `Online` volume expansion, try online volume expansion by setting `spec.volumeExpansion.mode: "Online"`.
+
+During `Online` volume expansion, KubeDB expands the volume without deleting the pods; it directly updates the underlying PVC. For `Offline` volume expansion, the database is paused, the pods are deleted, and the PVC is updated. Then the database pods are recreated with the updated PVC.
+
+
+Let's create the `MSSQLServerOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mssqlserver/volume-expansion/mops-volume-exp-ag-cluster.yaml
+mssqlserveropsrequest.ops.kubedb.com/mops-volume-exp-ag-cluster created
+```
+
+#### Verify MSSQLServer volume expanded successfully
+
+If everything goes well, `KubeDB` Ops-manager operator will update the volume size of `MSSQLServer` object and related `PetSet` and `Persistent Volumes`.
+
+Let's wait for `MSSQLServerOpsRequest` to be `Successful`. Run the following command to watch `MSSQLServerOpsRequest` CR,
+
+```bash
+$ kubectl get mssqlserveropsrequest -n demo
+NAME TYPE STATUS AGE
+mops-volume-exp-ag-cluster VolumeExpansion Successful 8m30s
+```
+
+We can see from the above output that the `MSSQLServerOpsRequest` has succeeded.
+
+Now, we are going to verify from the `PetSet` and the `Persistent Volumes` whether the volume of the database has expanded to meet the desired state. Let's check,
+
+```bash
+$ kubectl get petset -n demo mssqlserver-ag-cluster -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
+"2Gi"
+
+$ kubectl get pv -n demo
+NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS VOLUMEATTRIBUTESCLASS REASON AGE
+pvc-059f186a-01a4-441d-85f1-95aef34934be 2Gi RWO Delete Bound demo/data-mssqlserver-ag-cluster-0 longhorn 29m
+pvc-87bea35f-4a55-4aa5-903a-e4da9f548241 2Gi RWO Delete Bound demo/data-mssqlserver-ag-cluster-1 longhorn 29m
+pvc-9d1c3c9c-f928-4fa2-a2e1-becf2ab9c564 2Gi RWO Delete Bound demo/data-mssqlserver-ag-cluster-2 longhorn 29m
+```
+
+The above output verifies that we have successfully expanded the volume of the MSSQLServer database.
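+
+If you want to confirm at the PVC level as well, the reported capacity of each claim should now read 2Gi (claim names taken from the `CLAIM` column above):
+
+```bash
+# Repeat for data-mssqlserver-ag-cluster-1 and -2 if desired
+$ kubectl get pvc -n demo data-mssqlserver-ag-cluster-0 -o jsonpath='{.status.capacity.storage}'
+2Gi
+```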
+
+## Standalone Mode
+
+The volume expansion process is the same for all MSSQLServer modes, and the `MSSQLServerOpsRequest` CR has the same fields. The `databaseRef` just needs to refer to a MSSQLServer running in standalone mode, as shown in the sketch below.
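+
+For example, here is a minimal sketch of the same request against a standalone instance, assuming an instance named `mssql-standalone` whose storage class supports expansion:
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: MSSQLServerOpsRequest
+metadata:
+  name: mops-volume-exp-standalone
+  namespace: demo
+spec:
+  type: VolumeExpansion
+  databaseRef:
+    name: mssql-standalone   # a standalone MSSQLServer (assumed name)
+  volumeExpansion:
+    mode: "Offline"          # or "Online", if the StorageClass supports it
+    mssqlserver: 2Gi
+```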
+
+## Cleaning Up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+
+```bash
+$ kubectl patch -n demo ms/mssqlserver-ag-cluster -p '{"spec":{"deletionPolicy":"WipeOut"}}' --type="merge"
+mssqlserver.kubedb.com/mssqlserver-ag-cluster patched
+
+$ kubectl delete -n demo mssqlserver mssqlserver-ag-cluster
+mssqlserver.kubedb.com "mssqlserver-ag-cluster" deleted
+
+$ kubectl delete -n demo mssqlserveropsrequest mops-volume-exp-ag-cluster
+mssqlserveropsrequest.ops.kubedb.com "mops-volume-exp-ag-cluster" deleted
+
+kubectl delete issuer -n demo mssqlserver-ca-issuer
+kubectl delete secret -n demo mssqlserver-ca
+kubectl delete ns demo
+```
+
+## Next Steps
+
+- Detail concepts of [MSSQLServer object](/docs/guides/mssqlserver/concepts/mssqlserver.md).
+- [Backup and Restore](/docs/guides/mssqlserver/backup/overview/index.md) MSSQLServer databases using KubeStash.
\ No newline at end of file
diff --git a/docs/guides/mysql/README.md b/docs/guides/mysql/README.md
index ad11191d13..0d7cac726e 100644
--- a/docs/guides/mysql/README.md
+++ b/docs/guides/mysql/README.md
@@ -17,24 +17,24 @@ aliases:
## Supported MySQL Features
-| Features | Availability |
-| --------------------------------------------------------------------------------------- | :----------: |
-| Group Replication | ✓ |
-| Innodb Cluster | ✓ |
-| SemiSynchronous cluster | ✓ |
-| Read Replicas | ✓ |
-| TLS: Add, Remove, Update, Rotate ( [Cert Manager](https://cert-manager.io/docs/) ) | ✓ |
-| Automated Version update | ✓ |
-| Automatic Vertical Scaling | ✓ |
-| Automated Horizontal Scaling | ✓ |
-| Automated Volume Expansion | ✓ |
-| Backup/Recovery: Instant, Scheduled ( [Stash](https://stash.run/) ) | ✓ |
-| Initialize using Snapshot | ✓ |
-| Initialize using Script (\*.sql, \*sql.gz and/or \*.sh) | ✓ |
-| Custom Configuration | ✓ |
-| Using Custom docker image | ✓ |
-| Builtin Prometheus Discovery | ✓ |
-| Using Prometheus operator | ✓ |
+| Features | Availability |
+|------------------------------------------------------------------------------------|:------------:|
+| Group Replication | ✓ |
+| Innodb Cluster | ✓ |
+| SemiSynchronous cluster | ✓ |
+| Read Replicas | ✓ |
+| TLS: Add, Remove, Update, Rotate ( [Cert Manager](https://cert-manager.io/docs/) ) | ✓ |
+| Automated Version update | ✓ |
+| Automatic Vertical Scaling | ✓ |
+| Automated Horizontal Scaling | ✓ |
+| Automated Volume Expansion | ✓ |
+| Backup/Recovery: Instant, Scheduled ( [Stash](https://stash.run/) ) | ✓ |
+| Initialize using Snapshot | ✓ |
+| Initialize using Script (\*.sql, \*sql.gz and/or \*.sh) | ✓ |
+| Custom Configuration | ✓ |
+| Using Custom docker image | ✓ |
+| Builtin Prometheus Discovery | ✓ |
+| Using Prometheus operator | ✓ |
## Life Cycle of a MySQL Object
diff --git a/docs/guides/mysql/backup/kubestash/application-level/index.md b/docs/guides/mysql/backup/kubestash/application-level/index.md
index 3ba6030722..afe3c4a98d 100644
--- a/docs/guides/mysql/backup/kubestash/application-level/index.md
+++ b/docs/guides/mysql/backup/kubestash/application-level/index.md
@@ -3,7 +3,7 @@ title: Application Level Backup & Restore MySQL | KubeStash
description: Application Level Backup and Restore using KubeStash
menu:
docs_{{ .version }}:
- identifier: guides-application-level-backup-stashv2
+ identifier: guides-mysql-application-level-backup-stashv2
name: Application Level Backup
parent: guides-mysql-backup-stashv2
weight: 40
diff --git a/docs/guides/mysql/pitr/archiver.md b/docs/guides/mysql/pitr/archiver.md
index 00ad0fae62..4be52657b9 100644
--- a/docs/guides/mysql/pitr/archiver.md
+++ b/docs/guides/mysql/pitr/archiver.md
@@ -143,13 +143,13 @@ spec:
scheduler:
successfulJobsHistoryLimit: 1
failedJobsHistoryLimit: 1
- schedule: "/30 * * * *"
+ schedule: "*/30 * * * *"
sessionHistoryLimit: 2
manifestBackup:
scheduler:
successfulJobsHistoryLimit: 1
failedJobsHistoryLimit: 1
- schedule: "/30 * * * *"
+ schedule: "*/30 * * * *"
sessionHistoryLimit: 2
backupStorage:
ref:
diff --git a/docs/guides/mysql/pitr/yamls/mysqlarchiver.yaml b/docs/guides/mysql/pitr/yamls/mysqlarchiver.yaml
index f3818455e6..4fe652feed 100644
--- a/docs/guides/mysql/pitr/yamls/mysqlarchiver.yaml
+++ b/docs/guides/mysql/pitr/yamls/mysqlarchiver.yaml
@@ -28,13 +28,13 @@ spec:
scheduler:
successfulJobsHistoryLimit: 1
failedJobsHistoryLimit: 1
- schedule: "/30 * * * *"
+ schedule: "*/30 * * * *"
sessionHistoryLimit: 2
manifestBackup:
scheduler:
successfulJobsHistoryLimit: 1
failedJobsHistoryLimit: 1
- schedule: "/30 * * * *"
+ schedule: "*/30 * * * *"
sessionHistoryLimit: 2
backupStorage:
ref:
diff --git a/docs/guides/percona-xtradb/README.md b/docs/guides/percona-xtradb/README.md
index 26ca2b23f5..7064337785 100644
--- a/docs/guides/percona-xtradb/README.md
+++ b/docs/guides/percona-xtradb/README.md
@@ -18,17 +18,17 @@ aliases:
## Supported PerconaXtraDB Features
-| Features | Availability |
-| ------------------------------------------------------- | :----------: |
-| Clustering | ✓ |
-| Persistent Volume | ✓ |
-| Instant Backup | ✓ |
-| Scheduled Backup | ✓ |
-| Initialize using Snapshot | ✓ |
-| Custom Configuration | ✓ |
-| Using Custom docker image | ✓ |
-| Builtin Prometheus Discovery | ✓ |
-| Using Prometheus operator | ✓ |
+| Features | Availability |
+|------------------------------|:------------:|
+| Clustering | ✓ |
+| Persistent Volume | ✓ |
+| Instant Backup | ✓ |
+| Scheduled Backup | ✓ |
+| Initialize using Snapshot | ✓ |
+| Custom Configuration | ✓ |
+| Using Custom docker image | ✓ |
+| Builtin Prometheus Discovery | ✓ |
+| Using Prometheus operator | ✓ |
## Life Cycle of a PerconaXtraDB Object
diff --git a/docs/guides/pgpool/README.md b/docs/guides/pgpool/README.md
index 0d703af274..a309c343e1 100644
--- a/docs/guides/pgpool/README.md
+++ b/docs/guides/pgpool/README.md
@@ -24,7 +24,7 @@ KubeDB operator now comes bundled with Pgpool crd to manage all the essential fe
## Supported Pgpool Features
| Features | Availability |
-|-------------------------------------------------------------| :----------: |
+|-------------------------------------------------------------|:------------:|
| Clustering | ✓ |
| Multiple Pgpool Versions | ✓ |
| Custom Configuration | ✓ |
diff --git a/docs/guides/postgres/README.md b/docs/guides/postgres/README.md
index 7cc8130805..cf088bedc1 100644
--- a/docs/guides/postgres/README.md
+++ b/docs/guides/postgres/README.md
@@ -18,7 +18,7 @@ aliases:
## Supported PostgreSQL Features
| Features | Availability |
-| ---------------------------------- |:------------:|
+|------------------------------------|:------------:|
| Clustering | ✓ |
| Warm Standby | ✓ |
| Hot Standby | ✓ |
diff --git a/docs/guides/postgres/backup/kubestash/application-level/index.md b/docs/guides/postgres/backup/kubestash/application-level/index.md
index 6cc1d97d44..e1a85772c7 100644
--- a/docs/guides/postgres/backup/kubestash/application-level/index.md
+++ b/docs/guides/postgres/backup/kubestash/application-level/index.md
@@ -3,7 +3,7 @@ title: Application Level Backup & Restore PostgreSQL | KubeStash
description: Application Level Backup and Restore using KubeStash
menu:
docs_{{ .version }}:
- identifier: guides-application-level-backup-stashv2
+ identifier: guides-pg-application-level-backup-stashv2
name: Application Level Backup
parent: guides-pg-backup-stashv2
weight: 40
diff --git a/docs/guides/proxysql/README.md b/docs/guides/proxysql/README.md
index 2f98e0db82..d554d46342 100644
--- a/docs/guides/proxysql/README.md
+++ b/docs/guides/proxysql/README.md
@@ -18,7 +18,7 @@ aliases:
## Supported ProxySQL Features
| Features | Availability |
-| ------------------------------------ | :----------: |
+|--------------------------------------|:------------:|
| Load balance MySQL Group Replication | ✓ |
| Load balance PerconaXtraDB Cluster | ✓ |
| Custom Configuration | ✓ |
diff --git a/docs/guides/proxysql/quickstart/mysqlgrp/examples/sample-proxysql-v1.yaml b/docs/guides/proxysql/quickstart/mysqlgrp/examples/sample-proxysql-v1.yaml
index 7d4e68f246..c3f58173c7 100644
--- a/docs/guides/proxysql/quickstart/mysqlgrp/examples/sample-proxysql-v1.yaml
+++ b/docs/guides/proxysql/quickstart/mysqlgrp/examples/sample-proxysql-v1.yaml
@@ -5,7 +5,26 @@ metadata:
namespace: demo
spec:
version: "2.3.2-debian"
- replicas: 1
+ replicas: 3
+ podTemplate:
+ spec:
+ containers:
+ - name: proxysql
+ resources:
+ limits:
+ cpu: 500m
+ memory: 128Mi
+ requests:
+ cpu: 250m
+ memory: 64Mi
+ securityContext:
+ runAsGroup: 999
+ runAsNonRoot: true
+ runAsUser: 999
+ seccompProfile:
+ type: RuntimeDefault
+ podPlacementPolicy:
+ name: default
syncUsers: true
backend:
name: mysql-server
diff --git a/docs/guides/redis/README.md b/docs/guides/redis/README.md
index 1d465374a6..37c0c255f9 100644
--- a/docs/guides/redis/README.md
+++ b/docs/guides/redis/README.md
@@ -16,25 +16,25 @@ aliases:
> New to KubeDB? Please start [here](/docs/README.md).
## Supported Redis Features
-| Features | Community | Enterprise |
-|------------------------------------------------------------------------------------|:---------:|:----------:|
-| Clustering | ✓ | ✓ |
-| Sentinel | ✓ | ✓ |
-| Standalone | ✓ | ✓ |
-| Authentication & Autorization | ✓ | ✓ |
-| Persistent Volume | ✓ | ✓ |
-| Initializing from Snapshot ( [Stash](https://stash.run/) ) | ✓ | ✓ |
-| Instant Backup (Sentinel and Standalone Mode) | ✓ | ✓ |
-| Scheduled Backup (Sentinel and Standalone Mode) | ✓ | ✓ |
-| Builtin Prometheus Discovery | ✓ | ✓ |
-| Using Prometheus operator | ✓ | ✓ |
-| Automated Version Update | ✗ | ✓ |
-| Automatic Vertical Scaling | ✗ | ✓ |
-| Automated Horizontal Scaling | ✗ | ✓ |
-| Automated db-configure Reconfiguration | ✗ | ✓ |
-| TLS: Add, Remove, Update, Rotate ( [Cert Manager](https://cert-manager.io/docs/) ) | ✗ | ✓ |
-| Automated Volume Expansion | ✗ | ✓ |
-| Autoscaling (vertically) | ✗ | ✓ |
+| Features | Availability |
+|------------------------------------------------------------------------------------|:------------:|
+| Clustering | ✓ |
+| Sentinel | ✓ |
+| Standalone | ✓ |
+| Authentication & Autorization | ✓ |
+| Persistent Volume | ✓ |
+| Initializing from Snapshot ( [Stash](https://stash.run/) ) | ✓ |
+| Instant Backup (Sentinel and Standalone Mode) | ✓ |
+| Scheduled Backup (Sentinel and Standalone Mode) | ✓ |
+| Builtin Prometheus Discovery | ✓ |
+| Using Prometheus operator | ✓ |
+| Automated Version Update | ✗ |
+| Automatic Vertical Scaling | ✗ |
+| Automated Horizontal Scaling | ✗ |
+| Automated db-configure Reconfiguration | ✗ |
+| TLS: Add, Remove, Update, Rotate ( [Cert Manager](https://cert-manager.io/docs/) ) | ✗ |
+| Automated Volume Expansion | ✗ |
+| Autoscaling (vertically) | ✗ |
## Life Cycle of a Redis Object
diff --git a/docs/guides/redis/backup/kubestash/application-level/index.md b/docs/guides/redis/backup/kubestash/application-level/index.md
index 7c07ad1eaf..15c0cdd0b2 100644
--- a/docs/guides/redis/backup/kubestash/application-level/index.md
+++ b/docs/guides/redis/backup/kubestash/application-level/index.md
@@ -3,7 +3,7 @@ title: Application Level Backup & Restore Redis | KubeStash
description: Application Level Backup and Restore using KubeStash
menu:
docs_{{ .version }}:
- identifier: guides-application-level-backup-stashv2
+ identifier: guides-rd-application-level-backup-stashv2
name: Application Level Backup
parent: guides-rd-backup-stashv2
weight: 40
diff --git a/docs/guides/redis/reconfigure-tls/standalone.md b/docs/guides/redis/reconfigure-tls/standalone.md
index d67247f2a3..590165ee46 100644
--- a/docs/guides/redis/reconfigure-tls/standalone.md
+++ b/docs/guides/redis/reconfigure-tls/standalone.md
@@ -94,7 +94,7 @@ root@rd-sample-0:/data#
We can verify from the above output that TLS is disabled for this database.
-### Create Issuer/ StandaloneIssuer
+### Create Issuer/ClusterIssuer
Now, We are going to create an example `Issuer` that will be used to enable SSL/TLS in Redis. Alternatively, you can follow this [cert-manager tutorial](https://cert-manager.io/docs/configuration/ca/) to create your own `Issuer`.
diff --git a/docs/guides/singlestore/README.md b/docs/guides/singlestore/README.md
index b5cfacad89..c1f29a33dd 100644
--- a/docs/guides/singlestore/README.md
+++ b/docs/guides/singlestore/README.md
@@ -42,7 +42,8 @@ SingleStore, a distributed SQL database for real-time analytics, transactional w
KubeDB supports the following SingleSore Versions.
- `8.1.32`
-- `8.5.7`
+- `8.5.30`
+- `8.7.10`
## Life Cycle of a SingleStore Object
diff --git a/docs/guides/singlestore/_index.md b/docs/guides/singlestore/_index.md
index b8bca4c8b4..5548c5e483 100644
--- a/docs/guides/singlestore/_index.md
+++ b/docs/guides/singlestore/_index.md
@@ -5,6 +5,6 @@ menu:
identifier: guides-singlestore
name: SingleStore
parent: guides
- weight: 10
+ weight: 20
menu_name: docs_{{ .version }}
---
diff --git a/docs/guides/singlestore/autoscaler/_index.md b/docs/guides/singlestore/autoscaler/_index.md
new file mode 100644
index 0000000000..890618438e
--- /dev/null
+++ b/docs/guides/singlestore/autoscaler/_index.md
@@ -0,0 +1,10 @@
+---
+title: Autoscaling
+menu:
+ docs_{{ .version }}:
+ identifier: sdb-auto-scaling
+ name: Autoscaling
+ parent: guides-singlestore
+ weight: 46
+menu_name: docs_{{ .version }}
+---
diff --git a/docs/guides/singlestore/autoscaler/compute/_index.md b/docs/guides/singlestore/autoscaler/compute/_index.md
new file mode 100644
index 0000000000..13d55d0e5e
--- /dev/null
+++ b/docs/guides/singlestore/autoscaler/compute/_index.md
@@ -0,0 +1,10 @@
+---
+title: Compute Autoscaling
+menu:
+ docs_{{ .version }}:
+ identifier: sdb-compute-auto-scaling
+ name: Compute Autoscaling
+ parent: sdb-auto-scaling
+ weight: 46
+menu_name: docs_{{ .version }}
+---
diff --git a/docs/guides/singlestore/autoscaler/compute/cluster.md b/docs/guides/singlestore/autoscaler/compute/cluster.md
new file mode 100644
index 0000000000..d9e2b6bb39
--- /dev/null
+++ b/docs/guides/singlestore/autoscaler/compute/cluster.md
@@ -0,0 +1,538 @@
+---
+title: SingleStore Compute Autoscaling Overview
+menu:
+ docs_{{ .version }}:
+ identifier: sdb-auto-scaling-cluster
+ name: SingleStore Compute
+ parent: sdb-compute-auto-scaling
+ weight: 20
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+# Autoscaling the Compute Resource of a SingleStore Cluster
+
+This guide will show you how to use `KubeDB` to autoscale compute resources, i.e. cpu and memory, of the aggregator and leaf nodes of a SingleStore cluster.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster.
+
+- Install `KubeDB` Provisioner, Ops-manager and Autoscaler operator in your cluster following the steps [here](/docs/setup/README.md).
+
+- Install `Metrics Server` from [here](https://github.com/kubernetes-sigs/metrics-server#installation)
+
+- You should be familiar with the following `KubeDB` concepts:
+ - [SingleStore](/docs/guides/singlestore/concepts/singlestore.md)
+ - [SingleStoreAutoscaler](/docs/guides/singlestore/concepts/autoscaler.md)
+ - [SingleStoreOpsRequest](/docs/guides/singlestore/concepts/opsrequest.md)
+ - [Compute Resource Autoscaling Overview](/docs/guides/singlestore/autoscaler/compute/overview.md)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+> **Note:** YAML files used in this tutorial are stored in [docs/examples/singlestore](/docs/examples/singlestore) directory of [kubedb/docs](https://github.com/kubedb/docs) repository.
+
+## Autoscaling of SingleStore Cluster
+
+Here, we are going to deploy a `SingleStore` cluster using a version supported by the `KubeDB` operator. Then we are going to apply a `SingleStoreAutoscaler` to set up autoscaling.
+
+#### Create SingleStore License Secret
+
+We need a SingleStore license to create a SingleStore database. So, ensure that you have acquired a license and then simply pass it via a secret.
+
+```bash
+$ kubectl create secret generic -n demo license-secret \
+ --from-literal=username=license \
+ --from-literal=password='your-license-set-here'
+secret/license-secret created
+```
+
+#### Deploy SingleStore Cluster
+
+In this section, we are going to deploy a SingleStore with version `8.7.10`. Then, in the next section we will set up autoscaling for this database using `SingleStoreAutoscaler` CRD. Below is the YAML of the `SingleStore` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Singlestore
+metadata:
+ name: sdb-sample
+ namespace: demo
+spec:
+ version: 8.7.10
+ topology:
+ aggregator:
+ replicas: 2
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "0.7"
+ requests:
+ memory: "2Gi"
+ cpu: "0.7"
+ storage:
+ storageClassName: "standard"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
+ leaf:
+ replicas: 3
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "0.7"
+ requests:
+ memory: "2Gi"
+ cpu: "0.7"
+ storage:
+ storageClassName: "standard"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 10Gi
+ licenseSecret:
+ name: license-secret
+ storageType: Durable
+ deletionPolicy: WipeOut
+
+```
+Let's create the `SingleStore` CRO we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/singlestore/autoscaling/compute/sdb-cluster.yaml
+singlestore.kubedb.com/sdb-cluster created
+```
+
+Now, wait until `sdb-sample` has status `Ready`, i.e.,
+
+```bash
+NAME TYPE VERSION STATUS AGE
+singlestore.kubedb.com/sdb-sample kubedb.com/v1alpha2 8.7.10 Ready 4m35s
+```
+
+Let's check the aggregator pod's container resources,
+
+```bash
+$ kubectl get pod -n demo sdb-sample-aggregator-0 -o json | jq '.spec.containers[].resources'
+{
+ "limits": {
+ "cpu": "700m",
+ "memory": "2Gi"
+ },
+ "requests": {
+ "cpu": "700m",
+ "memory": "2Gi"
+ }
+}
+```
+
+Let's check the SingleStore aggregator node resources,
+```bash
+$ kubectl get singlestore -n demo sdb-sample -o json | jq '.spec.topology.aggregator.podTemplate.spec.containers[] | select(.name == "singlestore") | .resources'
+{
+ "limits": {
+ "cpu": "700m",
+ "memory": "2Gi"
+ },
+ "requests": {
+ "cpu": "700m",
+ "memory": "2Gi"
+ }
+}
+
+```
+
+You can see from the above outputs that the resources are the same as the ones we assigned while deploying the SingleStore.
+
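+The leaf pods can be checked the same way; the pod name below assumes the `sdb-sample-leaf-<n>` naming pattern:
+
+```bash
+# Select the "singlestore" container explicitly in case sidecar containers are present
+$ kubectl get pod -n demo sdb-sample-leaf-0 -o json | jq '.spec.containers[] | select(.name == "singlestore") | .resources'
+```
+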
+We are now ready to apply the `SingleStoreAutoscaler` CRO to set up autoscaling for this database.
+
+### Compute Resource Autoscaling
+
+Here, we are going to set up compute resource autoscaling using a SingleStoreAutoscaler Object.
+
+#### Create SingleStoreAutoscaler Object
+
+In order to set up compute resource autoscaling for this singlestore cluster, we have to create a `SingleStoreAutoscaler` CRO with our desired configuration. Below is the YAML of the `SingleStoreAutoscaler` object that we are going to create,
+
+```yaml
+apiVersion: autoscaling.kubedb.com/v1alpha1
+kind: SinglestoreAutoscaler
+metadata:
+ name: sdb-cluster-autoscaler
+ namespace: demo
+spec:
+ databaseRef:
+ name: sdb-sample
+ compute:
+ aggregator:
+ trigger: "On"
+ podLifeTimeThreshold: 5m
+ minAllowed:
+ cpu: 900m
+ memory: 3Gi
+ maxAllowed:
+ cpu: 2000m
+ memory: 6Gi
+ controlledResources: ["cpu", "memory"]
+ containerControlledValues: "RequestsAndLimits"
+ resourceDiffPercentage: 10
+```
+
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing compute resource scaling operation on `sdb-sample` cluster.
+- `spec.compute.aggregator.trigger` or `spec.compute.leaf.trigger` specifies that compute autoscaling is enabled for this cluster.
+- `spec.compute.aggregator.podLifeTimeThreshold` or `spec.compute.leaf.podLifeTimeThreshold` specifies the minimum lifetime for at least one of the pods to initiate a vertical scaling.
+- `spec.compute.aggregator.resourceDiffPercentage` or `spec.compute.leaf.resourceDiffPercentage` specifies the minimum resource difference in percentage. The default is 10%. If the difference between the current and recommended resources is less than `resourceDiffPercentage`, the autoscaler operator will ignore the update.
+- `spec.compute.aggregator.minAllowed` or `spec.compute.leaf.minAllowed` specifies the minimum allowed resources for the cluster.
+- `spec.compute.aggregator.maxAllowed` or `spec.compute.leaf.maxAllowed` specifies the maximum allowed resources for the cluster.
+- `spec.compute.aggregator.controlledResources` or `spec.compute.leaf.controlledResources` specifies the resources that are controlled by the autoscaler.
+- `spec.compute.aggregator.containerControlledValues` or `spec.compute.leaf.containerControlledValues` specifies which resource values should be controlled. The default is "RequestsAndLimits".
+- `spec.opsRequestOptions` contains the options to pass to the created OpsRequest (see the sketch after this list). It has 2 fields.
+ - `timeout` specifies the timeout for the OpsRequest.
+ - `apply` specifies when the OpsRequest should be applied. The default is "IfReady".
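+
+For illustration, here is a hedged sketch of how the `leaf` section and `opsRequestOptions` described above might look if you wanted to use them; these values are placeholders and are not applied in this tutorial:
+
+```yaml
+  # additional spec fields, nested under spec of the SinglestoreAutoscaler above
+  compute:
+    leaf:
+      trigger: "On"
+      podLifeTimeThreshold: 5m
+      minAllowed:
+        cpu: 900m
+        memory: 3Gi
+      maxAllowed:
+        cpu: 2000m
+        memory: 6Gi
+      controlledResources: ["cpu", "memory"]
+      containerControlledValues: "RequestsAndLimits"
+      resourceDiffPercentage: 10
+  opsRequestOptions:
+    timeout: 5m
+    apply: IfReady
+```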
+
+Let's create the `SinglestoreAutoscaler` CR shown in the full manifest above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/singlestore/autoscaler/compute/sdb-cluster-autoscaler.yaml
+singlestoreautoscaler.autoscaling.kubedb.com/sdb-cluster-autoscaler created
+```
+
+#### Verify Autoscaling is set up successfully
+
+Let's check that the `singlestoreautoscaler` resource is created successfully,
+
+```bash
+$ kubectl describe singlestoreautoscaler -n demo sdb-cluster-autoscaler
+Name: sdb-cluster-autoscaler
+Namespace: demo
+Labels:
+Annotations:
+API Version: autoscaling.kubedb.com/v1alpha1
+Kind: SinglestoreAutoscaler
+Metadata:
+ Creation Timestamp: 2024-09-10T08:55:26Z
+ Generation: 1
+ Owner References:
+ API Version: kubedb.com/v1alpha2
+ Block Owner Deletion: true
+ Controller: true
+ Kind: Singlestore
+ Name: sdb-sample
+ UID: f81d0592-9dda-428a-b0b4-e72ab3643e22
+ Resource Version: 424275
+ UID: 6b7b3d72-b92f-4e6f-88eb-4e891c24c550
+Spec:
+ Compute:
+ Aggregator:
+ Container Controlled Values: RequestsAndLimits
+ Controlled Resources:
+ cpu
+ memory
+ Max Allowed:
+ Cpu: 2
+ Memory: 6Gi
+ Min Allowed:
+ Cpu: 900m
+ Memory: 3Gi
+ Pod Life Time Threshold: 5m0s
+ Resource Diff Percentage: 10
+ Trigger: On
+ Database Ref:
+ Name: sdb-sample
+ Ops Request Options:
+ Apply: IfReady
+Status:
+ Checkpoints:
+ Cpu Histogram:
+ Bucket Weights:
+ Index: 0
+ Weight: 2455
+ Index: 1
+ Weight: 2089
+ Index: 2
+ Weight: 10000
+ Index: 3
+ Weight: 361
+ Reference Timestamp: 2024-09-10T09:05:00Z
+ Total Weight: 5.5790751974302655
+ First Sample Start: 2024-09-10T08:59:26Z
+ Last Sample Start: 2024-09-10T09:15:18Z
+ Last Update Time: 2024-09-10T09:15:27Z
+ Memory Histogram:
+ Bucket Weights:
+ Index: 1
+ Weight: 1821
+ Index: 2
+ Weight: 10000
+ Reference Timestamp: 2024-09-10T09:05:00Z
+ Total Weight: 14.365194626381038
+ Ref:
+ Container Name: singlestore-coordinator
+ Vpa Object Name: sdb-sample-aggregator
+ Total Samples Count: 32
+ Version: v3
+ Cpu Histogram:
+ Bucket Weights:
+ Index: 5
+ Weight: 3770
+ Index: 6
+ Weight: 10000
+ Index: 7
+ Weight: 132
+ Index: 20
+ Weight: 118
+ Reference Timestamp: 2024-09-10T09:05:00Z
+ Total Weight: 6.533759718059768
+ First Sample Start: 2024-09-10T08:59:26Z
+ Last Sample Start: 2024-09-10T09:16:19Z
+ Last Update Time: 2024-09-10T09:16:28Z
+ Memory Histogram:
+ Bucket Weights:
+ Index: 17
+ Weight: 8376
+ Index: 18
+ Weight: 10000
+ Reference Timestamp: 2024-09-10T09:05:00Z
+ Total Weight: 17.827743425726553
+ Ref:
+ Container Name: singlestore
+ Vpa Object Name: sdb-sample-aggregator
+ Total Samples Count: 34
+ Version: v3
+ Conditions:
+ Last Transition Time: 2024-09-10T08:59:43Z
+ Message: Successfully created SinglestoreOpsRequest demo/sdbops-sdb-sample-aggregator-c0u141
+ Observed Generation: 1
+ Reason: CreateOpsRequest
+ Status: True
+ Type: CreateOpsRequest
+ Vpas:
+ Conditions:
+ Last Transition Time: 2024-09-10T08:59:42Z
+ Status: True
+ Type: RecommendationProvided
+ Recommendation:
+ Container Recommendations:
+ Container Name: singlestore
+ Lower Bound:
+ Cpu: 900m
+ Memory: 3Gi
+ Target:
+ Cpu: 900m
+ Memory: 3Gi
+ Uncapped Target:
+ Cpu: 100m
+ Memory: 351198544
+ Upper Bound:
+ Cpu: 2
+ Memory: 6Gi
+ Vpa Name: sdb-sample-aggregator
+Events:
+```
+So, the `singlestoreautoscaler` resource is created successfully.
+
+You can see in the `Status.VPAs.Recommendation` section that a recommendation has been generated for our database. The autoscaler operator continuously watches the generated recommendation and creates a `singlestoreopsrequest` based on it if the database pod resources need to be scaled up or down.
+
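+To pull just the recommendation out of the autoscaler status, a jq query along these lines can be used; the camelCase field names are an assumption based on the describe output above:
+
+```bash
+# Prints the recommended resources per container, as generated by the autoscaler
+$ kubectl get singlestoreautoscaler -n demo sdb-cluster-autoscaler -o json \
+    | jq '.status.vpas[].recommendation.containerRecommendations'
+```
+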
+Let's watch for `singlestoreopsrequest` objects in the demo namespace. After some time, you'll see that one is created based on the recommendation.
+
+```bash
+$ watch kubectl get singlestoreopsrequest -n demo
+Every 2.0s: kubectl get singlestoreopsrequest -n demo
+NAME TYPE STATUS AGE
+sdbops-sdb-sample-aggregator-c0u141 VerticalScaling Progressing 10s
+```
+
+Let's wait for the ops request to become successful.
+
+```bash
+$ kubectl get singlestoreopsrequest -n demo
+NAME TYPE STATUS AGE
+sdbops-sdb-sample-aggregator-c0u141 VerticalScaling Successful 3m2s
+```
+
+We can see from the above output that the `SinglestoreOpsRequest` has succeeded. If we describe the `SinglestoreOpsRequest` we will get an overview of the steps that were followed to scale the cluster.
+
+```bash
+$ kubectl describe singlestoreopsrequest -n demo sdbops-sdb-sample-aggregator-c0u141
+Name: sdbops-sdb-sample-aggregator-c0u141
+Namespace: demo
+Labels: app.kubernetes.io/component=database
+ app.kubernetes.io/instance=sdb-sample
+ app.kubernetes.io/managed-by=kubedb.com
+ app.kubernetes.io/name=singlestores.kubedb.com
+Annotations:
+API Version: ops.kubedb.com/v1alpha1
+Kind: SinglestoreOpsRequest
+Metadata:
+ Creation Timestamp: 2024-09-10T08:59:43Z
+ Generation: 1
+ Owner References:
+ API Version: autoscaling.kubedb.com/v1alpha1
+ Block Owner Deletion: true
+ Controller: true
+ Kind: SinglestoreAutoscaler
+ Name: sdb-cluster-autoscaler
+ UID: 6b7b3d72-b92f-4e6f-88eb-4e891c24c550
+ Resource Version: 406111
+ UID: 978a1a00-f217-4326-b103-f66bbccf2943
+Spec:
+ Apply: IfReady
+ Database Ref:
+ Name: sdb-sample
+ Type: VerticalScaling
+ Vertical Scaling:
+ Aggregator:
+ Resources:
+ Limits:
+ Cpu: 900m
+ Memory: 3Gi
+ Requests:
+ Cpu: 900m
+ Memory: 3Gi
+Status:
+ Conditions:
+ Last Transition Time: 2024-09-10T09:01:55Z
+ Message: Timeout: request did not complete within requested timeout - context deadline exceeded
+ Observed Generation: 1
+ Reason: Failed
+ Status: True
+ Type: VerticalScaling
+ Last Transition Time: 2024-09-10T08:59:46Z
+ Message: Successfully paused database
+ Observed Generation: 1
+ Reason: DatabasePauseSucceeded
+ Status: True
+ Type: DatabasePauseSucceeded
+ Last Transition Time: 2024-09-10T08:59:46Z
+ Message: Successfully updated PetSets Resources
+ Observed Generation: 1
+ Reason: UpdatePetSets
+ Status: True
+ Type: UpdatePetSets
+ Last Transition Time: 2024-09-10T09:01:21Z
+ Message: Successfully Restarted Pods With Resources
+ Observed Generation: 1
+ Reason: RestartPods
+ Status: True
+ Type: RestartPods
+ Last Transition Time: 2024-09-10T08:59:52Z
+ Message: get pod; ConditionStatus:True; PodName:sdb-sample-aggregator-0
+ Observed Generation: 1
+ Status: True
+ Type: GetPod--sdb-sample-aggregator-0
+ Last Transition Time: 2024-09-10T08:59:52Z
+ Message: evict pod; ConditionStatus:True; PodName:sdb-sample-aggregator-0
+ Observed Generation: 1
+ Status: True
+ Type: EvictPod--sdb-sample-aggregator-0
+ Last Transition Time: 2024-09-10T09:00:31Z
+ Message: check pod ready; ConditionStatus:True; PodName:sdb-sample-aggregator-0
+ Observed Generation: 1
+ Status: True
+ Type: CheckPodReady--sdb-sample-aggregator-0
+ Last Transition Time: 2024-09-10T09:00:36Z
+ Message: get pod; ConditionStatus:True; PodName:sdb-sample-aggregator-1
+ Observed Generation: 1
+ Status: True
+ Type: GetPod--sdb-sample-aggregator-1
+ Last Transition Time: 2024-09-10T09:00:36Z
+ Message: evict pod; ConditionStatus:True; PodName:sdb-sample-aggregator-1
+ Observed Generation: 1
+ Status: True
+ Type: EvictPod--sdb-sample-aggregator-1
+ Last Transition Time: 2024-09-10T09:01:16Z
+ Message: check pod ready; ConditionStatus:True; PodName:sdb-sample-aggregator-1
+ Observed Generation: 1
+ Status: True
+ Type: CheckPodReady--sdb-sample-aggregator-1
+ Observed Generation: 1
+ Phase: Successful
+Events:
+ Type Reason Age From Message
+ ---- ------ ---- ---- -------
+ Normal Starting 25m KubeDB Ops-manager Operator Start processing for SinglestoreOpsRequest: demo/sdbops-sdb-sample-aggregator-c0u141
+ Normal Starting 25m KubeDB Ops-manager Operator Pausing Singlestore database: demo/sdb-sample
+ Normal Successful 25m KubeDB Ops-manager Operator Successfully paused Singlestore database: demo/sdb-sample for SinglestoreOpsRequest: sdbops-sdb-sample-aggregator-c0u141
+ Normal UpdatePetSets 25m KubeDB Ops-manager Operator Successfully updated PetSets Resources
+ Warning get pod; ConditionStatus:True; PodName:sdb-sample-aggregator-0 25m KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:sdb-sample-aggregator-0
+ Warning evict pod; ConditionStatus:True; PodName:sdb-sample-aggregator-0 25m KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:sdb-sample-aggregator-0
+ Warning check pod ready; ConditionStatus:False; PodName:sdb-sample-aggregator-0 25m KubeDB Ops-manager Operator check pod ready; ConditionStatus:False; PodName:sdb-sample-aggregator-0
+ Warning check pod ready; ConditionStatus:True; PodName:sdb-sample-aggregator-0 24m KubeDB Ops-manager Operator check pod ready; ConditionStatus:True; PodName:sdb-sample-aggregator-0
+ Warning get pod; ConditionStatus:True; PodName:sdb-sample-aggregator-1 24m KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:sdb-sample-aggregator-1
+ Warning evict pod; ConditionStatus:True; PodName:sdb-sample-aggregator-1 24m KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:sdb-sample-aggregator-1
+ Warning check pod ready; ConditionStatus:False; PodName:sdb-sample-aggregator-1 24m KubeDB Ops-manager Operator check pod ready; ConditionStatus:False; PodName:sdb-sample-aggregator-1
+ Warning check pod ready; ConditionStatus:True; PodName:sdb-sample-aggregator-1 24m KubeDB Ops-manager Operator check pod ready; ConditionStatus:True; PodName:sdb-sample-aggregator-1
+ Normal RestartPods 24m KubeDB Ops-manager Operator Successfully Restarted Pods With Resources
+ Normal Starting
+ Normal Successful
+```
+
+Now, we are going to verify from the Pod and the SingleStore YAML whether the resources of the topology cluster have been updated to match the desired state. Let's check,
+
+```bash
+$ kubectl get pod -n demo sdb-sample-aggregator-0 -o json | jq '.spec.containers[].resources'
+{
+ "limits": {
+ "cpu": "900m",
+ "memory": "3Gi"
+ },
+ "requests": {
+ "cpu": "900m",
+ "memory": "3Gi"
+ }
+}
+
+
+$ kubectl get singlestore -n demo sdb-sample -o json | jq '.spec.topology.aggregator.podTemplate.spec.containers[] | select(.name == "singlestore") | .resources'
+{
+ "limits": {
+ "cpu": "900m",
+ "memory": "3Gi"
+ },
+ "requests": {
+ "cpu": "900m",
+ "memory": "3Gi"
+ }
+}
+
+```
+
+
+The above output verifies that we have successfully autoscaled the resources of the SingleStore cluster.
+
+## Cleaning Up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete singlestoreopsrequest -n demo sdbops-sdb-sample-aggregator-c0u141
+kubectl delete singlestoreautoscaler -n demo sdb-cluster-autoscaler
+kubectl delete singlestore -n demo sdb-sample
+kubectl delete ns demo
+```
+## Next Steps
+
+- Detail concepts of [SingleStore object](/docs/guides/singlestore/concepts/singlestore.md).
+- Different SingleStore clustering modes [here](/docs/guides/singlestore/clustering/_index.md).
+- Monitor your singlestore database with KubeDB using [out-of-the-box Prometheus operator](/docs/guides/singlestore/monitoring/prometheus-operator/index.md).
+- Monitor your singlestore database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/guides/singlestore/monitoring/builtin-prometheus/index.md).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).
+
diff --git a/docs/guides/singlestore/autoscaler/compute/overview.md b/docs/guides/singlestore/autoscaler/compute/overview.md
new file mode 100644
index 0000000000..11b9a28f63
--- /dev/null
+++ b/docs/guides/singlestore/autoscaler/compute/overview.md
@@ -0,0 +1,55 @@
+---
+title: SingleStore Compute Autoscaling Overview
+menu:
+ docs_{{ .version }}:
+ identifier: sdb-auto-scaling-overview
+ name: Overview
+ parent: sdb-compute-auto-scaling
+ weight: 10
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# SingleStore Compute Resource Autoscaling
+
+This guide will give an overview of how the KubeDB Autoscaler operator autoscales the database compute resources, i.e. cpu and memory, using the `singlestoreautoscaler` CRD.
+
+## Before You Begin
+
+- You should be familiar with the following `KubeDB` concepts:
+ - [SingleStore](/docs/guides/singlestore/concepts/singlestore.md)
+ - [SingleStoreAutoscaler](/docs/guides/singlestore/concepts/autoscaler.md)
+ - [SingleStoreOpsRequest](/docs/guides/singlestore/concepts/opsrequest.md)
+
+## How Compute Autoscaling Works
+
+The following diagram shows how KubeDB Autoscaler operator autoscales the resources of `SingleStore` database components. Open the image in a new tab to see the enlarged version.
+
+
+
+The Auto Scaling process consists of the following steps:
+
+1. At first, a user creates a `SingleStore` Custom Resource Object (CRO).
+
+2. `KubeDB` Provisioner operator watches the `SingleStore` CRO.
+
+3. When the operator finds a `SingleStore` CRO, it creates the required number of `PetSets` and related resources like secrets, services, etc.
+
+4. Then, in order to set up autoscaling of the various components (i.e. Aggregator, Leaf, Standalone) of the `SingleStore` database, the user creates a `SingleStoreAutoscaler` CRO with the desired configuration.
+
+5. `KubeDB` Autoscaler operator watches the `SingleStoreAutoscaler` CRO.
+
+6. `KubeDB` Autoscaler operator generates recommendations using a modified version of the Kubernetes [official recommender](https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler/pkg/recommender) for different components of the database, as specified in the `SingleStoreAutoscaler` CRO.
+
+7. If the generated recommendation doesn't match the current resources of the database, then `KubeDB` Autoscaler operator creates a `SingleStoreOpsRequest` CRO to scale the database to match the recommendation generated.
+
+8. `KubeDB` Ops-manager operator watches the `SingleStoreOpsRequest` CRO.
+
+9. Then the `KubeDB` Ops-manager operator will scale the database component vertically as specified in the `SingleStoreOpsRequest` CRO.
+
+In the next docs, we are going to show a step-by-step guide on autoscaling various SingleStore database components using the `SingleStoreAutoscaler` CRD.
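+
+As a quick preview, below is a minimal sketch of a `SinglestoreAutoscaler` CRO with a compute section. The field names here follow the common KubeDB autoscaler layout and are shown for illustration only; see the next doc for a complete, tested example.
+
+```yaml
+apiVersion: autoscaling.kubedb.com/v1alpha1
+kind: SinglestoreAutoscaler
+metadata:
+  name: sdb-compute-autoscaler   # illustrative name
+  namespace: demo
+spec:
+  databaseRef:
+    name: sdb-sample             # the Singlestore object to autoscale
+  compute:
+    aggregator:
+      trigger: "On"
+      podLifeTimeThreshold: 5m
+      resourceDiffPercentage: 20
+      minAllowed:
+        cpu: 600m
+        memory: 2Gi
+      maxAllowed:
+        cpu: 1
+        memory: 3Gi
+      controlledResources: ["cpu", "memory"]
+```
+
+Once such an object is applied, the operator follows steps 5-9 above: it generates recommendations for the referenced components and, when they differ enough from the current resources, creates an ops request to apply them.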
\ No newline at end of file
diff --git a/docs/guides/singlestore/autoscaler/storage/_index.md b/docs/guides/singlestore/autoscaler/storage/_index.md
new file mode 100644
index 0000000000..91b59b6898
--- /dev/null
+++ b/docs/guides/singlestore/autoscaler/storage/_index.md
@@ -0,0 +1,10 @@
+---
+title: Storage Autoscaling
+menu:
+ docs_{{ .version }}:
+ identifier: sdb-storage-auto-scaling
+ name: Storage Autoscaling
+ parent: sdb-auto-scaling
+ weight: 46
+menu_name: docs_{{ .version }}
+---
diff --git a/docs/guides/singlestore/autoscaler/storage/cluster.md b/docs/guides/singlestore/autoscaler/storage/cluster.md
new file mode 100644
index 0000000000..318406e8d6
--- /dev/null
+++ b/docs/guides/singlestore/autoscaler/storage/cluster.md
@@ -0,0 +1,476 @@
+---
+title: SingleStore Cluster Autoscaling
+menu:
+ docs_{{ .version }}:
+ identifier: sdb-storage-auto-scaling-cluster
+ name: SingleStore Storage
+ parent: sdb-storage-auto-scaling
+ weight: 20
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Storage Autoscaling of a SingleStore Cluster
+
+This guide will show you how to use `KubeDB` to autoscale the storage of a SingleStore cluster.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster.
+
+- Install `KubeDB` Provisioner, Ops-manager and Autoscaler operator in your cluster following the steps [here](/docs/setup/README.md).
+
+- Install `Metrics Server` from [here](https://github.com/kubernetes-sigs/metrics-server#installation)
+
+- Install Prometheus from [here](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack)
+
+- You must have a `StorageClass` that supports volume expansion.
+
+- You should be familiar with the following `KubeDB` concepts:
+ - [SingleStore](/docs/guides/singlestore/concepts/singlestore.md)
+ - [SingleStoreAutoscaler](/docs/guides/singlestore/concepts/autoscaler.md)
+ - [SingleStoreOpsRequest](/docs/guides/singlestore/concepts/opsrequest.md)
+ - [Storage Autoscaling Overview](/docs/guides/singlestore/autoscaler/storage/overview.md)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+> **Note:** YAML files used in this tutorial are stored in [docs/examples/singlestore](/docs/examples/singlestore) directory of [kubedb/docs](https://github.com/kubedb/docs) repository.
+
+## Storage Autoscaling of SingleStore Cluster
+
+At first, verify that your cluster has a storage class that supports volume expansion. Let's check,
+
+```bash
+$ kubectl get storageclass
+NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
+standard (default) kubernetes.io/gce-pd Delete Immediate true 2m49s
+```
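+
+If the `ALLOWVOLUMEEXPANSION` column shows `false`, you may be able to turn it on by patching the storage class. This is only a sketch; whether expansion actually works also depends on the underlying provisioner/CSI driver.
+
+```bash
+$ kubectl patch storageclass standard -p '{"allowVolumeExpansion": true}'
+storageclass.storage.k8s.io/standard patched
+```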
+
+#### Create SingleStore License Secret
+
+We need a SingleStore license to create a SingleStore database. Ensure that you have acquired a license, then pass it to KubeDB as a secret.
+
+```bash
+$ kubectl create secret generic -n demo license-secret \
+ --from-literal=username=license \
+ --from-literal=password='your-license-set-here'
+secret/license-secret created
+```
+
+#### Deploy SingleStore Cluster
+
+In this section, we are going to deploy a SingleStore cluster with version `8.7.10`. Then, in the next section, we will set up autoscaling for this database using the `SingleStoreAutoscaler` CRD. Below is the YAML of the `SingleStore` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Singlestore
+metadata:
+ name: sdb-sample
+ namespace: demo
+spec:
+ version: 8.7.10
+ topology:
+ aggregator:
+ replicas: 2
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "0.7"
+ requests:
+ memory: "2Gi"
+ cpu: "0.7"
+ storage:
+ storageClassName: "standard"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
+ leaf:
+ replicas: 3
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "0.7"
+ requests:
+ memory: "2Gi"
+ cpu: "0.7"
+ storage:
+ storageClassName: "standard"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 10Gi
+ licenseSecret:
+ name: license-secret
+ storageType: Durable
+ deletionPolicy: WipeOut
+
+```
+Let's create the `SingleStore` CRO we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/singlestore/autoscaling/storage/sdb-cluster.yaml
+singlestore.kubedb.com/sdb-sample created
+```
+
+Now, wait until `sdb-sample` has status `Ready`, i.e.,
+
+```bash
+NAME TYPE VERSION STATUS AGE
+singlestore.kubedb.com/sdb-sample kubedb.com/v1alpha2 8.7.10 Ready 4m35s
+```
+
+> **Note:** You can configure storage autoscaling for the aggregator and leaf nodes separately. Here, we will focus on the leaf nodes.
+
+Let's check the volume size from the PetSet and from the persistent volumes,
+
+```bash
+$ kubectl get petset -n demo sdb-sample-leaf -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
+"10Gi"
+
+$ kubectl get pv -n demo | grep 'leaf'
+pvc-5cf8638e365544dd 10Gi RWO Retain Bound demo/data-sdb-sample-leaf-0 linode-block-storage-retain 50s
+pvc-a99e7adb282a4f9c 10Gi RWO Retain Bound demo/data-sdb-sample-leaf-2 linode-block-storage-retain 60s
+pvc-da8e9e5162a748df 10Gi RWO Retain Bound demo/data-sdb-sample-leaf-1 linode-block-storage-retain 70s
+
+```
+
+You can see that the leaf PetSet has 10Gi of storage, and the capacity of each persistent volume is also 10Gi.
+
+We are now ready to apply the `SingleStoreAutoscaler` CRO to set up storage autoscaling for this cluster.
+
+### Storage Autoscaling
+
+Here, we are going to set up storage autoscaling using a SingleStoreAutoscaler Object.
+
+#### Create SingleStoreAutoscaler Object
+
+In order to set up storage autoscaling for this SingleStore cluster, we have to create a `SinglestoreAutoscaler` CRO with our desired configuration. Below is the YAML of the `SinglestoreAutoscaler` object that we are going to create,
+
+```yaml
+apiVersion: autoscaling.kubedb.com/v1alpha1
+kind: SinglestoreAutoscaler
+metadata:
+ name: sdb-cluster-autoscaler
+ namespace: demo
+spec:
+ databaseRef:
+ name: sdb-sample
+ storage:
+ leaf:
+ trigger: "On"
+ usageThreshold: 30
+ scalingThreshold: 50
+ expansionMode: "Online"
+ upperBound: "100Gi"
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing the storage autoscaling operation on the `sdb-sample` cluster.
+- `spec.storage.leaf.trigger` specifies that storage autoscaling is enabled for the leaf nodes of this cluster.
+- `spec.storage.leaf.usageThreshold` specifies the storage usage threshold; if storage usage exceeds `30%`, storage autoscaling will be triggered.
+- `spec.storage.leaf.scalingThreshold` specifies the scaling threshold; the storage will be expanded by `50%` of the current amount.
+- `spec.storage.leaf.upperBound` specifies the maximum amount of storage the autoscaler is allowed to request for the leaf nodes.
+- `spec.storage.leaf.expansionMode` sets the `volumeExpansionMode` of the generated ops request; it supports two values: `Online` & `Offline`. The default value is `Online`.
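+
+Although this tutorial only enables autoscaling for the leaf nodes, the note above suggests the aggregator storage can be configured the same way. The `aggregator` block in the following sketch is an assumption based on that note (it mirrors the leaf fields) and is not applied in this tutorial:
+
+```yaml
+spec:
+  storage:
+    aggregator:              # assumed field, mirroring the leaf section
+      trigger: "On"
+      usageThreshold: 60
+      scalingThreshold: 50
+      expansionMode: "Online"
+      upperBound: "20Gi"
+    leaf:
+      trigger: "On"
+      usageThreshold: 30
+      scalingThreshold: 50
+      expansionMode: "Online"
+      upperBound: "100Gi"
+```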
+
+Let's create the `SinglestoreAutoscaler` CR we have shown above,
+
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/singlestore/autoscaling/storage/sdb-storage-autoscaler.yaml
+singlestoreautoscaler.autoscaling.kubedb.com/sdb-cluster-autoscaler created
+```
+
+#### Storage Autoscaling is set up successfully
+
+Let's check that the `singlestoreautoscaler` resource is created successfully,
+
+```bash
+$ kubectl get singlestoreautoscaler -n demo
+NAME AGE
+sdb-cluster-autoscaler 2m5s
+
+
+$ kubectl describe singlestoreautoscaler -n demo sdb-cluster-autoscaler
+Name: sdb-cluster-autoscaler
+Namespace: demo
+Labels:       <none>
+Annotations:  <none>
+API Version: autoscaling.kubedb.com/v1alpha1
+Kind: SinglestoreAutoscaler
+Metadata:
+ Creation Timestamp: 2024-09-11T07:05:11Z
+ Generation: 1
+ Owner References:
+ API Version: kubedb.com/v1alpha2
+ Block Owner Deletion: true
+ Controller: true
+ Kind: Singlestore
+ Name: sdb-sample
+ UID: e08e1f37-d869-437d-9b15-14c6aef3f406
+ Resource Version: 4904325
+ UID: 471afa65-6d12-4e7d-a2a6-6d28ce440c4d
+Spec:
+ Database Ref:
+ Name: sdb-sample
+ Ops Request Options:
+ Apply: IfReady
+ Storage:
+ Leaf:
+ Expansion Mode: Online
+ Scaling Rules:
+ Applies Upto:
+ Threshold: 50pc
+ Scaling Threshold: 50
+ Trigger: On
+ Upper Bound: 100Gi
+ Usage Threshold: 30
+Events:                      <none>
+
+
+```
+
+So, the `singlestoreautoscaler` resource is created successfully.
+
+Now, for this demo, we are going to manually fill up the persistent volume to exceed the `usageThreshold` by creating a new database with 6 partitions, and see whether storage autoscaling is triggered.
+
+Let's exec into the cluster pod and fill the cluster volume using the following commands:
+
+```bash
+$ kubectl exec -it -n demo sdb-sample-leaf-0 -- bash
+Defaulted container "singlestore" out of: singlestore, singlestore-coordinator, singlestore-init (init)
+[memsql@sdb-sample-leaf-0 /]$ df -h var/lib/memsql
+Filesystem Size Used Avail Use% Mounted on
+/dev/disk/by-id/scsi-0Linode_Volume_pvcc50e0d73d07349f9 9.8G 1.4G 8.4G 15% /var/lib/memsql
+
+$ kubectl exec -it -n demo sdb-sample-aggregator-0 -- bash
+Defaulted container "singlestore" out of: singlestore, singlestore-coordinator, singlestore-init (init)
+[memsql@sdb-sample-aggregator-0 /]$ memsql -uroot -p$ROOT_PASSWORD
+singlestore-client: [Warning] Using a password on the command line interface can be insecure.
+Welcome to the MySQL monitor. Commands end with ; or \g.
+Your MySQL connection id is 113
+Server version: 5.7.32 SingleStoreDB source distribution (compatible; MySQL Enterprise & MySQL Commercial)
+Copyright (c) 2000, 2022, Oracle and/or its affiliates.
+Oracle is a registered trademark of Oracle Corporation and/or its
+affiliates. Other names may be trademarks of their respective
+owners.
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+singlestore> create database demo partitions 6;
+Query OK, 1 row affected (3.78 sec)
+
+$ kubectl exec -it -n demo sdb-sample-leaf-0 -- bash
+Defaulted container "singlestore" out of: singlestore, singlestore-coordinator, singlestore-init (init)
+[memsql@sdb-sample-leaf-0 /]$ df -h var/lib/memsql
+Filesystem Size Used Avail Use% Mounted on
+/dev/disk/by-id/scsi-0Linode_Volume_pvcc50e0d73d07349f9 9.8G 3.2G 6.7G 33% /var/lib/memsql
+
+```
+
+So, from the above output we can see that the storage usage is 33%, which exceeds the `usageThreshold` of 30%.
+
+Let's watch the `singlestoreopsrequest` in the demo namespace to see if any `singlestoreopsrequest` object is created. After some time you'll see that a `singlestoreopsrequest` of type `VolumeExpansion` will be created based on the `scalingThreshold`.
+
+```bash
+$ watch kubectl get singlestoreopsrequest -n demo
+Every 2.0s: kubectl get singlestoreopsrequest -n demo ashraful: Wed Sep 11 13:39:25 2024
+
+NAME TYPE STATUS AGE
+sdbops-sdb-sample-th2r62 VolumeExpansion Progressing 10s
+```
+
+Let's wait for the ops request to become successful.
+
+```bash
+$ watch kubectl get singlestoreopsrequest -n demo
+Every 2.0s: kubectl get singlestoreopsrequest -n demo ashraful: Wed Sep 11 13:41:12 2024
+
+NAME TYPE STATUS AGE
+sdbops-sdb-sample-th2r62 VolumeExpansion Successful 2m31s
+
+```
+
+We can see from the above output that the `SinglestoreOpsRequest` has succeeded. If we describe the `SinglestoreOpsRequest` we will get an overview of the steps that were followed to expand the volume of the cluster.
+
+```bash
+$ kubectl describe singlestoreopsrequest -n demo sdbops-sdb-sample-th2r62
+Name: sdbops-sdb-sample-th2r62
+Namespace: demo
+Labels: app.kubernetes.io/component=database
+ app.kubernetes.io/instance=sdb-sample
+ app.kubernetes.io/managed-by=kubedb.com
+ app.kubernetes.io/name=singlestores.kubedb.com
+Annotations:  <none>
+API Version: ops.kubedb.com/v1alpha1
+Kind: SinglestoreOpsRequest
+Metadata:
+ Creation Timestamp: 2024-09-11T07:36:42Z
+ Generation: 1
+ Owner References:
+ API Version: autoscaling.kubedb.com/v1alpha1
+ Block Owner Deletion: true
+ Controller: true
+ Kind: SinglestoreAutoscaler
+ Name: sdb-cluster-autoscaler
+ UID: 471afa65-6d12-4e7d-a2a6-6d28ce440c4d
+ Resource Version: 4909632
+ UID: 3dce68d0-b5ee-4ad6-bd1f-f712bae39630
+Spec:
+ Apply: IfReady
+ Database Ref:
+ Name: sdb-sample
+ Type: VolumeExpansion
+ Volume Expansion:
+ Leaf: 15696033792
+ Mode: Online
+Status:
+ Conditions:
+ Last Transition Time: 2024-09-11T07:36:42Z
+ Message: Singlestore ops-request has started to expand volume of singlestore nodes.
+ Observed Generation: 1
+ Reason: VolumeExpansion
+ Status: True
+ Type: VolumeExpansion
+ Last Transition Time: 2024-09-11T07:36:45Z
+ Message: Successfully paused database
+ Observed Generation: 1
+ Reason: DatabasePauseSucceeded
+ Status: True
+ Type: DatabasePauseSucceeded
+ Last Transition Time: 2024-09-11T07:37:00Z
+ Message: successfully deleted the petSets with orphan propagation policy
+ Observed Generation: 1
+ Reason: OrphanPetSetPods
+ Status: True
+ Type: OrphanPetSetPods
+ Last Transition Time: 2024-09-11T07:36:50Z
+ Message: get pet set; ConditionStatus:True
+ Observed Generation: 1
+ Status: True
+ Type: GetPetSet
+ Last Transition Time: 2024-09-11T07:36:50Z
+ Message: delete pet set; ConditionStatus:True
+ Observed Generation: 1
+ Status: True
+ Type: DeletePetSet
+ Last Transition Time: 2024-09-11T07:37:40Z
+ Message: successfully updated Leaf node PVC sizes
+ Observed Generation: 1
+ Reason: UpdateLeafNodePVCs
+ Status: True
+ Type: UpdateLeafNodePVCs
+ Last Transition Time: 2024-09-11T07:37:05Z
+ Message: get pvc; ConditionStatus:True
+ Observed Generation: 1
+ Status: True
+ Type: GetPvc
+ Last Transition Time: 2024-09-11T07:37:06Z
+ Message: is pvc patched; ConditionStatus:True
+ Observed Generation: 1
+ Status: True
+ Type: IsPvcPatched
+ Last Transition Time: 2024-09-11T07:37:15Z
+ Message: compare storage; ConditionStatus:True
+ Observed Generation: 1
+ Status: True
+ Type: CompareStorage
+ Last Transition Time: 2024-09-11T07:37:46Z
+ Message: successfully reconciled the Singlestore resources
+ Observed Generation: 1
+ Reason: UpdatePetSets
+ Status: True
+ Type: UpdatePetSets
+ Last Transition Time: 2024-09-11T07:37:51Z
+ Message: PetSet is recreated
+ Observed Generation: 1
+ Reason: ReadyPetSets
+ Status: True
+ Type: ReadyPetSets
+ Last Transition Time: 2024-09-11T07:38:19Z
+ Message: Successfully completed volumeExpansion for Singlestore
+ Observed Generation: 1
+ Reason: Successful
+ Status: True
+ Type: Successful
+ Observed Generation: 1
+ Phase: Successful
+Events:
+ Type Reason Age From Message
+ ---- ------ ---- ---- -------
+ Normal Starting 6m4s KubeDB Ops-manager Operator Start processing for SinglestoreOpsRequest: demo/sdbops-sdb-sample-th2r62
+ Normal Starting 6m4s KubeDB Ops-manager Operator Pausing Singlestore database: demo/sdb-sample
+ Normal Successful 6m4s KubeDB Ops-manager Operator Successfully paused Singlestore database: demo/sdb-sample for SinglestoreOpsRequest: sdbops-sdb-sample-th2r62
+ Warning get pet set; ConditionStatus:True 5m56s KubeDB Ops-manager Operator get pet set; ConditionStatus:True
+ Warning delete pet set; ConditionStatus:True 5m56s KubeDB Ops-manager Operator delete pet set; ConditionStatus:True
+ Warning get pet set; ConditionStatus:True 5m51s KubeDB Ops-manager Operator get pet set; ConditionStatus:True
+ Normal OrphanPetSetPods 5m46s KubeDB Ops-manager Operator successfully deleted the petSets with orphan propagation policy
+ Warning get pvc; ConditionStatus:True 5m41s KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning is pvc patched; ConditionStatus:True 5m40s KubeDB Ops-manager Operator is pvc patched; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 5m36s KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning compare storage; ConditionStatus:False 5m36s KubeDB Ops-manager Operator compare storage; ConditionStatus:False
+ Warning get pvc; ConditionStatus:True 5m31s KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning compare storage; ConditionStatus:True 5m31s KubeDB Ops-manager Operator compare storage; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 5m26s KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning is pvc patched; ConditionStatus:True 5m26s KubeDB Ops-manager Operator is pvc patched; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 5m21s KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning compare storage; ConditionStatus:True 5m21s KubeDB Ops-manager Operator compare storage; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 5m16s KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning is pvc patched; ConditionStatus:True 5m16s KubeDB Ops-manager Operator is pvc patched; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 5m11s KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning compare storage; ConditionStatus:True 5m11s KubeDB Ops-manager Operator compare storage; ConditionStatus:True
+ Normal UpdateLeafNodePVCs 5m6s KubeDB Ops-manager Operator successfully updated Leaf node PVC sizes
+ Normal UpdatePetSets 5m KubeDB Ops-manager Operator successfully reconciled the Singlestore resources
+ Warning get pet set; ConditionStatus:True 4m55s KubeDB Ops-manager Operator get pet set; ConditionStatus:True
+ Normal ReadyPetSets 4m55s KubeDB Ops-manager Operator PetSet is recreated
+ Normal Starting 4m27s KubeDB Ops-manager Operator Resuming Singlestore database: demo/sdb-sample
+ Normal Successful 4m27s KubeDB Ops-manager Operator Successfully resumed Singlestore database: demo/sdb-sample for SinglestoreOpsRequest: sdbops-sdb-sample-th2r62
+```
+
+Now, we are going to verify from the `PetSet` and the `PersistentVolume` whether the volume of the leaf nodes has expanded to meet the desired state. Let's check:
+
+```bash
+$ kubectl get petset -n demo sdb-sample-leaf -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
+"15696033792"
+
+$ kubectl get pv -n demo | grep 'leaf'
+pvc-8df67f3178964106 15328158Ki RWO Retain Bound demo/data-sdb-sample-leaf-2 linode-block-storage-retain 42m
+pvc-c50e0d73d07349f9 15328158Ki RWO Retain Bound demo/data-sdb-sample-leaf-0 linode-block-storage-retain 43m
+pvc-f8b95ff9a9bd4fa2 15328158Ki RWO Retain Bound demo/data-sdb-sample-leaf-1 linode-block-storage-retain 42m
+
+```
+
+The above output verifies that we have successfully autoscaled the volume of the SingleStore cluster.
+
+## Cleaning Up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete singlestoreopsrequests -n demo sdbops-sdb-sample-th2r62
+kubectl delete singlestoreautoscaler -n demo sdb-cluster-autoscaler
+kubectl delete sdb -n demo sdb-sample
+```
+
+## Next Steps
+
+- Detail concepts of [SingleStore object](/docs/guides/singlestore/concepts/singlestore.md).
+- Different SingleStore clustering modes [here](/docs/guides/singlestore/clustering/_index.md).
+- Monitor your SingleStore database with KubeDB using [out-of-the-box Prometheus operator](/docs/guides/singlestore/monitoring/prometheus-operator/index.md).
+- Monitor your SingleStore database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/guides/singlestore/monitoring/builtin-prometheus/index.md).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).
\ No newline at end of file
diff --git a/docs/guides/singlestore/autoscaler/storage/overview.md b/docs/guides/singlestore/autoscaler/storage/overview.md
new file mode 100644
index 0000000000..95ada0dc27
--- /dev/null
+++ b/docs/guides/singlestore/autoscaler/storage/overview.md
@@ -0,0 +1,57 @@
+---
+title: SingleStore Storage Autoscaling Overview
+menu:
+ docs_{{ .version }}:
+ identifier: sdb-storage-auto-scaling-overview
+ name: Overview
+ parent: sdb-storage-auto-scaling
+ weight: 10
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# SingleStore Storage Autoscaling
+
+This guide will give an overview of how the KubeDB Autoscaler operator autoscales the database storage using the `singlestoreautoscaler` CRD.
+
+## Before You Begin
+
+- You should be familiar with the following `KubeDB` concepts:
+ - [SingleStore](/docs/guides/singlestore/concepts/singlestore.md)
+ - [SingleStoreAutoscaler](/docs/guides/singlestore/concepts/autoscaler.md)
+ - [SingleStoreOpsRequest](/docs/guides/singlestore/concepts/opsrequest.md)
+
+## How Storage Autoscaling Works
+
+The following diagram shows how KubeDB Autoscaler operator autoscales the resources of `SingleStore` cluster components. Open the image in a new tab to see the enlarged version.
+
+
+
+
+The Auto Scaling process consists of the following steps:
+
+1. At first, a user creates a `SingleStore` Custom Resource (CR).
+
+2. `KubeDB` Provisioner operator watches the `SingleStore` CR.
+
+3. When the operator finds a `SingleStore` CR, it creates the required number of `PetSets` and the related resources such as secrets, services, etc.
+
+- Each PetSet creates a Persistent Volume according to the Volume Claim Template provided in the PetSet configuration.
+
+4. Then, in order to set up storage autoscaling of the various components (i.e. Aggregator, Leaf, Standalone) of the `SingleStore` cluster, the user creates a `SingleStoreAutoscaler` CRO with the desired configuration.
+
+5. `KubeDB` Autoscaler operator watches the `SingleStoreAutoscaler` CRO.
+
+6. `KubeDB` Autoscaler operator continuously watches the persistent volumes of the cluster to check if usage exceeds the specified threshold.
+- If the usage exceeds the specified usage threshold, then `KubeDB` Autoscaler operator creates a `SinglestoreOpsRequest` to expand the storage of the database.
+
+7. `KubeDB` Ops-manager operator watches the `SinglestoreOpsRequest` CRO.
+
+8. Then the `KubeDB` Ops-manager operator will expand the storage of the cluster component as specified in the `SinglestoreOpsRequest` CRO.
+
+In the next docs, we are going to show a step-by-step guide on autoscaling the storage of various SingleStore cluster components using the `SinglestoreAutoscaler` CRD.
diff --git a/docs/guides/singlestore/backup/_index.md b/docs/guides/singlestore/backup/_index.md
index f6b4e6f801..60dfa7346c 100644
--- a/docs/guides/singlestore/backup/_index.md
+++ b/docs/guides/singlestore/backup/_index.md
@@ -1,5 +1,5 @@
---
-title: Backup & Restore SingleStore
+title: Backup & Restore SingleStore
menu:
docs_{{ .version }}:
identifier: guides-sdb-backup
diff --git a/docs/guides/singlestore/backup/kubestash/application-level/index.md b/docs/guides/singlestore/backup/kubestash/application-level/index.md
index 64b7929728..e998a33e02 100644
--- a/docs/guides/singlestore/backup/kubestash/application-level/index.md
+++ b/docs/guides/singlestore/backup/kubestash/application-level/index.md
@@ -3,7 +3,7 @@ title: Application Level Backup & Restore SingleStore | KubeStash
description: Application Level Backup and Restore using KubeStash
menu:
docs_{{ .version }}:
- identifier: guides-application-level-backup-stashv2
+ identifier: guides-sdb-application-level-backup-stashv2
name: Application Level Backup
parent: guides-sdb-backup-stashv2
weight: 40
@@ -165,7 +165,7 @@ sample-singlestore-pods ClusterIP None 3306/TCP
```
-Here, we have to use service `sample-singlestore` and secret `sample-singlestore-root-cred` to connect with the database. `KubeDB` creates an [AppBinding](/docs/guides/mysql/concepts/appbinding/index.md) CR that holds the necessary information to connect with the database.
+Here, we have to use service `sample-singlestore` and secret `sample-singlestore-root-cred` to connect with the database. `KubeDB` creates an [AppBinding](/docs/guides/singlestore/concepts/appbinding.md) CR that holds the necessary information to connect with the database.
**Verify AppBinding:**
@@ -493,7 +493,7 @@ If everything goes well, the phase of the `BackupConfiguration` should be `Ready
```bash
$ kubectl get backupconfiguration -n demo
-NAME PHASE PAUSED AGE
+NAME PHASE PAUSED AGE
sample-singlestore-backup Ready 2m50s
```
@@ -501,7 +501,7 @@ Additionally, we can verify that the `Repository` specified in the `BackupConfig
```bash
$ kubectl get repo -n demo
-NAME INTEGRITY SNAPSHOT-COUNT SIZE PHASE LAST-SUCCESSFUL-BACKUP AGE
+NAME INTEGRITY SNAPSHOT-COUNT SIZE PHASE LAST-SUCCESSFUL-BACKUP AGE
gcs-singlestore-repo 0 0 B Ready 3m
```
@@ -515,8 +515,8 @@ Verify that the `CronJob` has been created using the following command,
```bash
$ kubectl get cronjob -n demo
-NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE
-trigger-sample-singlestore-backup-frequent-backup */5 * * * * 0 2m45s 3m25s
+NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE
+trigger-sample-singlestore-backup-frequent-backup */5 * * * * 0 2m45s 3m25s
```
**Verify BackupSession:**
@@ -528,7 +528,7 @@ Run the following command to watch `BackupSession` CR,
```bash
$ kubectl get backupsession -n demo -w
-NAME INVOKER-TYPE INVOKER-NAME PHASE DURATION AGE
+NAME INVOKER-TYPE INVOKER-NAME PHASE DURATION AGE
sample-singlestore-backup-frequent-backup-1724065200 BackupConfiguration sample-singlestore-backup Succeeded 7m22s
```
@@ -540,8 +540,8 @@ Once a backup is complete, KubeStash will update the respective `Repository` CR
```bash
$ kubectl get repository -n demo gcs-singlestore-repo
-NAME INTEGRITY SNAPSHOT-COUNT SIZE PHASE LAST-SUCCESSFUL-BACKUP AGE
-gcs-singlestore-repo true 1 806 B Ready 8m27s 9m18s
+NAME INTEGRITY SNAPSHOT-COUNT SIZE PHASE LAST-SUCCESSFUL-BACKUP AGE
+gcs-singlestore-repo true 1 806 B Ready 8m27s 9m18s
```
At this moment we have one `Snapshot`. Run the following command to check the respective `Snapshot` which represents the state of a backup run for an application.
diff --git a/docs/guides/singlestore/clustering/_index.md b/docs/guides/singlestore/clustering/_index.md
new file mode 100644
index 0000000000..58610731fd
--- /dev/null
+++ b/docs/guides/singlestore/clustering/_index.md
@@ -0,0 +1,10 @@
+---
+title: Clustering
+menu:
+ docs_{{ .version }}:
+ identifier: sdb-clustering
+ name: Clustering
+ parent: guides-singlestore
+ weight: 25
+menu_name: docs_{{ .version }}
+---
\ No newline at end of file
diff --git a/docs/guides/singlestore/clustering/overview/images/sdb-cluster-1.png b/docs/guides/singlestore/clustering/overview/images/sdb-cluster-1.png
new file mode 100644
index 0000000000..b008f51987
Binary files /dev/null and b/docs/guides/singlestore/clustering/overview/images/sdb-cluster-1.png differ
diff --git a/docs/guides/singlestore/clustering/overview/images/sdb-cluster-2.png b/docs/guides/singlestore/clustering/overview/images/sdb-cluster-2.png
new file mode 100644
index 0000000000..72e9db9808
Binary files /dev/null and b/docs/guides/singlestore/clustering/overview/images/sdb-cluster-2.png differ
diff --git a/docs/guides/singlestore/clustering/overview/images/sdb-cluster.png b/docs/guides/singlestore/clustering/overview/images/sdb-cluster.png
new file mode 100644
index 0000000000..f082999156
Binary files /dev/null and b/docs/guides/singlestore/clustering/overview/images/sdb-cluster.png differ
diff --git a/docs/guides/singlestore/clustering/overview/index.md b/docs/guides/singlestore/clustering/overview/index.md
new file mode 100644
index 0000000000..94cf77c47a
--- /dev/null
+++ b/docs/guides/singlestore/clustering/overview/index.md
@@ -0,0 +1,90 @@
+---
+title: Cluster Overview
+menu:
+ docs_{{ .version }}:
+ identifier: guides-sdb-clustering-overview
+ name: Cluster Overview
+ parent: sdb-clustering
+ weight: 10
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# SingleStore Cluster
+
+Here we'll discuss some concepts about SingleStore Cluster.
+
+### What is a SingleStore Cluster?
+
+A `SingleStore cluster` is a distributed database system that consists of multiple servers (nodes) working together to provide high performance, scalability, and fault tolerance for data storage and processing. It is specifically designed to handle both transactional (OLTP) and analytical (OLAP) workloads, making it suitable for a wide range of real-time data use cases. Here’s a detailed look at what a SingleStore cluster is and how it functions:
+
+
+
+### Components of a SingleStore Cluster
+
+A SingleStore cluster is made up of two main types of nodes:
+
+- Aggregators:
+ - Purpose: These nodes act as query routers. Aggregators handle query parsing, optimization, and distribution to the other nodes in the cluster. They do not store data themselves.
+ - Role: Aggregators receive SQL queries, optimize them, and then route them to the appropriate leaf nodes that actually store and process the data.
+ - Benefits: They help balance the workload and ensure that queries are efficiently executed by leveraging the full processing power of the cluster.
+
+- Leaves:
+ - Purpose: These are the nodes responsible for data storage and processing. They store data in distributed partitions called shards.
+ - Role: Leaf nodes are responsible for executing the actual query tasks. They perform data retrieval, computation, and provide results back to the aggregators.
+ - Benefits: Leaf nodes ensure that data is distributed across the cluster, enabling horizontal scalability and high fault tolerance.
+
+### How a SingleStore Cluster Works
+
+- Data Sharding and Partitioning:
+ - In a SingleStore cluster, data is partitioned into `shards` that are distributed across multiple leaf nodes. Each shard is a portion of the overall dataset, and the distribution allows the workload to be spread evenly, improving both read and write performance.
+ - Sharding also allows for `parallel processing`, which enhances query performance by splitting tasks among several nodes (see the example after this list).
+
+- Scalability:
+ - SingleStore clusters can be `scaled horizontally` by adding more nodes (both leaf and aggregator). As data volume grows, adding more leaf nodes allows the system to continue performing efficiently without the need for massive hardware upgrades.
+ - Aggregator nodes can also be scaled to handle more queries concurrently, helping balance the load during times of high user activity.
+
+- High Availability and Fault Tolerance:
+ - SingleStore clusters maintain multiple replicas of each shard on different leaf nodes. This replication provides `high availability (HA)` because if one node fails, another node holding a replica can take over, ensuring no data loss and minimizing downtime.
+ - The automatic failover and `self-healing` capabilities ensure that the system continues to operate smoothly even in the face of hardware or software failures.
+
+- Distributed Query Processing:
+ - When a query is submitted to an aggregator, it breaks down the query into smaller tasks and sends them to relevant leaf nodes.
+ - `Parallel processing` at the leaf nodes enables quick handling of large, complex queries, making it particularly effective for real-time analytics.
+
+- Hybrid Workload Handling:
+ - SingleStore is a `unified database`, meaning it can handle `both OLTP (Online Transaction Processing)` and `OLAP (Online Analytical Processing)` workloads within the same cluster.
+ - This capability is achieved by storing data in rowstore for fast transactions and `columnstore` for efficient analytical queries, which can be leveraged simultaneously.
+
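+To make the sharding behavior above concrete, here is a hypothetical session from an aggregator pod (pod and database names are illustrative, and it assumes a cluster provisioned as shown in the cluster guide):
+
+```bash
+$ kubectl exec -it -n demo sample-sdb-aggregator-0 -- bash
+[memsql@sample-sdb-aggregator-0 /]$ memsql -uroot -p$ROOT_PASSWORD
+singlestore> CREATE DATABASE demo PARTITIONS 6;  -- data is split into 6 partitions spread across the leaf nodes
+singlestore> SHOW PARTITIONS ON demo;            -- lists each partition and the leaf node that hosts it
+```
+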
+### Key Features of a SingleStore Cluster
+
+- Elastic Scaling: Nodes can be added or removed without significant downtime, allowing the system to adjust to changing workload requirements.
+- In-Memory Storage: Data can be stored in memory to enhance processing speed, particularly useful for applications requiring real-time performance.
+- Cloud Integration: SingleStore clusters are designed to work well in cloud environments, supporting deployments on cloud infrastructure or container orchestration platforms like `Kubernetes`.
+
+### Use Cases
+
+- Real-Time Analytics: The combination of in-memory processing and distributed architecture allows SingleStore clusters to handle real-time analytical queries over large datasets, which is valuable in industries like finance, retail, and IoT.
+- Mixed Workloads: SingleStore can handle simultaneous read-heavy analytics and write-heavy transactional workloads, making it a good choice for applications that need both low-latency transactions and in-depth data analysis.
+- Data Warehousing: The ability to process large volumes of data quickly also makes SingleStore suitable for `modern data warehousing`, where performance is crucial for handling big data operations.
+
+### Benefits of SingleStore Clusters
+
+- High Throughput: The distributed nature allows the system to support high data ingestion rates and large-scale analytical processing.
+- Fault Tolerance: With multiple replicas of each shard, SingleStore clusters provide redundancy, helping to ensure that data is not lost and the system remains available.
+- Simplified Management: SingleStore offers tools that simplify the management of clusters, including auto-failover and data rebalancing.
+
+### Limitations
+
+- Resource Overhead: Running a distributed cluster comes with extra costs in terms of hardware or cloud resources, especially due to the need for replication.
+- Complexity in Management: Managing a large cluster, particularly in hybrid cloud or on-prem environments, can become complex and requires knowledge of distributed systems.
+- Network Dependency: The cluster performance relies heavily on the network, and any issues with network latency or bandwidth can impact overall efficiency.
+
+## Next Steps
+
+- [Deploy SingleStore Cluster](/docs/guides/singlestore/clustering/singlestore-clustering) using KubeDB.
\ No newline at end of file
diff --git a/docs/guides/singlestore/clustering/singlestore-clustering/examples/sample-sdb.yaml b/docs/guides/singlestore/clustering/singlestore-clustering/examples/sample-sdb.yaml
new file mode 100644
index 0000000000..6724836ede
--- /dev/null
+++ b/docs/guides/singlestore/clustering/singlestore-clustering/examples/sample-sdb.yaml
@@ -0,0 +1,52 @@
+apiVersion: kubedb.com/v1alpha2
+kind: Singlestore
+metadata:
+ name: sample-sdb
+ namespace: demo
+spec:
+ version: "8.7.10"
+ topology:
+ aggregator:
+ replicas: 1
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "0.6"
+ requests:
+ memory: "2Gi"
+ cpu: "0.6"
+ storage:
+ storageClassName: "standard"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
+ leaf:
+ replicas: 2
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "0.6"
+ requests:
+ memory: "2Gi"
+ cpu: "0.6"
+ storage:
+ storageClassName: "standard"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 10Gi
+ licenseSecret:
+ name: license-secret
+ storageType: Durable
+ deletionPolicy: WipeOut
\ No newline at end of file
diff --git a/docs/guides/singlestore/clustering/singlestore-clustering/index.md b/docs/guides/singlestore/clustering/singlestore-clustering/index.md
new file mode 100644
index 0000000000..02a5ebab14
--- /dev/null
+++ b/docs/guides/singlestore/clustering/singlestore-clustering/index.md
@@ -0,0 +1,458 @@
+---
+title: Cluster Guide
+menu:
+ docs_{{ .version }}:
+ identifier: guides-sdb-clustering
+ name: Cluster Guide
+ parent: sdb-clustering
+ weight: 20
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# KubeDB - SingleStore Cluster
+
+This tutorial will show you how to use KubeDB to provision a `singlestore cluster`.
+
+## Before You Begin
+
+Before proceeding:
+
+- Read [singlestore cluster concept](/docs/guides/singlestore/clustering/overview) to learn about SingleStore cluster.
+
+- You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/setup/README.md).
+
+- To keep things isolated, we will use a separate namespace called `demo` throughout this tutorial. Run the following command to prepare your cluster for this tutorial:
+
+ ```bash
+ $ kubectl create ns demo
+ namespace/demo created
+ ```
+
+> Note: The yaml files used in this tutorial are stored in [docs/examples/singlestore](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/singlestore/clustering/singlestore-clustering/examples) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Create SingleStore License Secret
+
+We need a SingleStore license to create a SingleStore database. Ensure that you have acquired a license, then pass it to KubeDB as a secret.
+
+```bash
+$ kubectl create secret generic -n demo license-secret \
+ --from-literal=username=license \
+ --from-literal=password='your-license-set-here'
+secret/license-secret created
+```
+
+## Create a SingleStore database
+
+KubeDB implements a `Singlestore` CRD to define the specification of a SingleStore database. Below is the `Singlestore` object created in this tutorial.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Singlestore
+metadata:
+ name: sample-sdb
+ namespace: demo
+spec:
+ version: "8.7.10"
+ topology:
+ aggregator:
+ replicas: 1
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "0.6"
+ requests:
+ memory: "2Gi"
+ cpu: "0.6"
+ storage:
+ storageClassName: "standard"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
+ leaf:
+ replicas: 2
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "0.6"
+ requests:
+ memory: "2Gi"
+ cpu: "0.6"
+ storage:
+ storageClassName: "standard"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 10Gi
+ licenseSecret:
+ name: license-secret
+ storageType: Durable
+ deletionPolicy: WipeOut
+```
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/singlestore/clustering/singlestore-clustering/examples/sample-sdb.yaml
+singlestore.kubedb.com/sample-sdb created
+```
+Here,
+
+- `spec.version` is the name of the SinglestoreVersion CRD where the docker images are specified. In this tutorial, a SingleStore `8.7.10` database is going to be created.
+- `spec.topology` specifies that the database will run in cluster mode. If this field is nil, it will run in standalone mode.
+- `spec.topology.aggregator.replicas` or `spec.topology.leaf.replicas` specifies the number of replicas that will be used for the aggregator or leaf nodes.
+- `spec.storageType` specifies the type of storage that will be used for SingleStore database. It can be `Durable` or `Ephemeral`. Default value of this field is `Durable`. If `Ephemeral` is used then KubeDB will create SingleStore database using `EmptyDir` volume. In this case, you don't have to specify `spec.storage` field. This is useful for testing purposes.
+- `spec.topology.aggregator.storage` or `spec.topology.leaf.storage` specifies the StorageClass of PVC dynamically allocated to store data for this database. This storage spec will be passed to the PetSet created by KubeDB operator to run database pods. You can specify any StorageClass available in your cluster with appropriate resource requests.
+- `spec.deletionPolicy` gives flexibility whether to `nullify`(reject) the delete operation of `Singlestore` crd or which resources KubeDB should keep or delete when you delete `Singlestore` crd. If admission webhook is enabled, It prevents users from deleting the database as long as the `spec.deletionPolicy` is set to `DoNotTerminate`. Learn details of all `DeletionPolicy` [here](/docs/guides/mysql/concepts/database/index.md#specdeletionpolicy)
+
+> Note: `spec.storage` section is used to create PVC for database pod. It will create PVC with storage size specified in `storage.resources.requests` field. Don't specify limits here. PVC does not get resized automatically.
+
+KubeDB operator watches for `Singlestore` objects using the Kubernetes API. When a `Singlestore` object is created, the KubeDB operator will create a new PetSet and Services with names matching the SingleStore object name. The KubeDB operator will also create a governing service for PetSets, if one is not already present.
+
+```bash
+$ kubectl get petset,pvc,pv,svc -n demo
+NAME AGE
+petset.apps.k8s.appscode.com/sample-sdb-aggregator 16m
+petset.apps.k8s.appscode.com/sample-sdb-leaf 16m
+
+NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
+persistentvolumeclaim/data-sample-sdb-aggregator-0 Bound pvc-a6c9041cba69454a 10Gi RWO linode-block-storage-retain 16m
+persistentvolumeclaim/data-sample-sdb-leaf-0 Bound pvc-674ba189a2f24383 10Gi RWO linode-block-storage-retain 16m
+persistentvolumeclaim/data-sample-sdb-leaf-1 Bound pvc-16e4224adec54d96 10Gi RWO linode-block-storage-retain 16m
+
+NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS VOLUMEATTRIBUTESCLASS REASON AGE
+persistentvolume/pvc-16e4224adec54d96 10Gi RWO Retain Bound demo/data-sample-sdb-leaf-1 linode-block-storage-retain 16m
+persistentvolume/pvc-674ba189a2f24383 10Gi RWO Retain Bound demo/data-sample-sdb-leaf-0 linode-block-storage-retain 16m
+persistentvolume/pvc-a6c9041cba69454a 10Gi RWO Retain Bound demo/data-sample-sdb-aggregator-0 linode-block-storage-retain 16m
+
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+service/sample-sdb ClusterIP 10.128.15.230 3306/TCP,8081/TCP 16m
+service/sample-sdb-pods ClusterIP None 3306/TCP 16m
+
+
+```
+
+KubeDB operator sets the `status.phase` to `Ready` once the database is successfully created. Run the following command to see the modified Singlestore object:
+
+```yaml
+$ kubectl get sdb -n demo sample-sdb -oyaml
+kind: Singlestore
+metadata:
+ annotations:
+ kubectl.kubernetes.io/last-applied-configuration: |
+ {"apiVersion":"kubedb.com/v1alpha2","kind":"Singlestore","metadata":{"annotations":{},"name":"sample-sdb","namespace":"demo"},"spec":{"deletionPolicy":"WipeOut","licenseSecret":{"name":"license-secret"},"storageType":"Durable","topology":{"aggregator":{"podTemplate":{"spec":{"containers":[{"name":"singlestore","resources":{"limits":{"cpu":"0.6","memory":"2Gi"},"requests":{"cpu":"0.6","memory":"2Gi"}}}]}},"replicas":1,"storage":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}}}},"leaf":{"podTemplate":{"spec":{"containers":[{"name":"singlestore","resources":{"limits":{"cpu":"0.6","memory":"2Gi"},"requests":{"cpu":"0.6","memory":"2Gi"}}}]}},"replicas":2,"storage":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"10Gi"}}}}},"version":"8.7.10"}}
+ creationTimestamp: "2024-10-01T09:39:36Z"
+ finalizers:
+ - kubedb.com
+ generation: 2
+ name: sample-sdb
+ namespace: demo
+ resourceVersion: "117016"
+ uid: 22b254e0-d185-413c-888f-ca4c2524e909
+spec:
+ authSecret:
+ name: sample-sdb-root-cred
+ deletionPolicy: WipeOut
+ healthChecker:
+ failureThreshold: 1
+ periodSeconds: 10
+ timeoutSeconds: 10
+ licenseSecret:
+ name: license-secret
+ storageType: Durable
+ topology:
+ aggregator:
+ podTemplate:
+ controller: {}
+ metadata: {}
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ cpu: 600m
+ memory: 2Gi
+ requests:
+ cpu: 600m
+ memory: 2Gi
+ securityContext:
+ allowPrivilegeEscalation: false
+ capabilities:
+ drop:
+ - ALL
+ runAsGroup: 998
+ runAsNonRoot: true
+ runAsUser: 999
+ seccompProfile:
+ type: RuntimeDefault
+ - name: singlestore-coordinator
+ resources:
+ limits:
+ memory: 256Mi
+ requests:
+ cpu: 200m
+ memory: 256Mi
+ securityContext:
+ allowPrivilegeEscalation: false
+ capabilities:
+ drop:
+ - ALL
+ runAsGroup: 998
+ runAsNonRoot: true
+ runAsUser: 999
+ seccompProfile:
+ type: RuntimeDefault
+ initContainers:
+ - name: singlestore-init
+ resources:
+ limits:
+ memory: 512Mi
+ requests:
+ cpu: 200m
+ memory: 512Mi
+ securityContext:
+ allowPrivilegeEscalation: false
+ capabilities:
+ drop:
+ - ALL
+ runAsGroup: 998
+ runAsNonRoot: true
+ runAsUser: 999
+ seccompProfile:
+ type: RuntimeDefault
+ podPlacementPolicy:
+ name: default
+ securityContext:
+ fsGroup: 999
+ replicas: 1
+ storage:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
+ leaf:
+ podTemplate:
+ controller: {}
+ metadata: {}
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ cpu: 600m
+ memory: 2Gi
+ requests:
+ cpu: 600m
+ memory: 2Gi
+ securityContext:
+ allowPrivilegeEscalation: false
+ capabilities:
+ drop:
+ - ALL
+ runAsGroup: 998
+ runAsNonRoot: true
+ runAsUser: 999
+ seccompProfile:
+ type: RuntimeDefault
+ - name: singlestore-coordinator
+ resources:
+ limits:
+ memory: 256Mi
+ requests:
+ cpu: 200m
+ memory: 256Mi
+ securityContext:
+ allowPrivilegeEscalation: false
+ capabilities:
+ drop:
+ - ALL
+ runAsGroup: 998
+ runAsNonRoot: true
+ runAsUser: 999
+ seccompProfile:
+ type: RuntimeDefault
+ initContainers:
+ - name: singlestore-init
+ resources:
+ limits:
+ memory: 512Mi
+ requests:
+ cpu: 200m
+ memory: 512Mi
+ securityContext:
+ allowPrivilegeEscalation: false
+ capabilities:
+ drop:
+ - ALL
+ runAsGroup: 998
+ runAsNonRoot: true
+ runAsUser: 999
+ seccompProfile:
+ type: RuntimeDefault
+ podPlacementPolicy:
+ name: default
+ securityContext:
+ fsGroup: 999
+ replicas: 2
+ storage:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 10Gi
+ version: 8.7.10
+status:
+ conditions:
+ - lastTransitionTime: "2024-10-01T09:39:36Z"
+ message: 'The KubeDB operator has started the provisioning of Singlestore: demo/sample-sdb'
+ observedGeneration: 1
+ reason: DatabaseProvisioningStartedSuccessfully
+ status: "True"
+ type: ProvisioningStarted
+ - lastTransitionTime: "2024-10-01T09:57:51Z"
+ message: All leaf replicas are ready for Singlestore demo/sample-sdb
+ observedGeneration: 2
+ reason: AllReplicasReady
+ status: "True"
+ type: ReplicaReady
+ - lastTransitionTime: "2024-10-01T09:41:04Z"
+ message: database demo/sample-sdb is accepting connection
+ observedGeneration: 2
+ reason: AcceptingConnection
+ status: "True"
+ type: AcceptingConnection
+ - lastTransitionTime: "2024-10-01T09:41:04Z"
+ message: database demo/sample-sdb is ready
+ observedGeneration: 2
+ reason: AllReplicasReady
+ status: "True"
+ type: Ready
+ - lastTransitionTime: "2024-10-01T09:41:05Z"
+ message: 'The Singlestore: demo/sample-sdb is successfully provisioned.'
+ observedGeneration: 2
+ reason: DatabaseSuccessfullyProvisioned
+ status: "True"
+ type: Provisioned
+ phase: Ready
+
+```
+
+## Connect with SingleStore database
+
+KubeDB operator has created a new Secret called `sample-sdb-root-cred` *(format: {singlestore-object-name}-root-cred)* for storing the password for `singlestore` superuser. This secret contains a `username` key which contains the *username* for SingleStore superuser and a `password` key which contains the *password* for SingleStore superuser.
+
+If you want to use an existing secret, please specify it when creating the SingleStore object using `spec.authSecret.name`. While creating this secret manually, make sure it contains the two keys `username` and `password`, and use `root` as the value of `username`. For more details see [here](/docs/guides/mysql/concepts/database/index.md#specdatabasesecret).
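+
+For example, such a secret could be created like this (a sketch; the secret name is illustrative and must match the name referenced in `spec.authSecret.name`):
+
+```bash
+$ kubectl create secret generic -n demo my-sdb-auth \
+    --from-literal=username=root \
+    --from-literal=password='a-strong-password'
+secret/my-sdb-auth created
+```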
+
+Now, we need the `username` and `password` to connect to this database from the `kubectl exec` command. In this example, the `sample-sdb-root-cred` secret holds the username and password.
+
+```bash
+$ kubectl get pod -n demo sample-sdb-aggregator-0 -oyaml | grep podIP
+ podIP: 10.244.0.14
+$ kubectl get secrets -n demo sample-sdb-root-cred -o jsonpath='{.data.\username}' | base64 -d
+ root
+$ kubectl get secrets -n demo sample-sdb-root-cred -o jsonpath='{.data.\password}' | base64 -d
+ J0h_BUdJB8mDO31u
+```
+We will exec into the pod `sample-sdb-aggregator-0` and connect to the database using the username and password:
+
+```bash
+$ kubectl exec -it -n demo sample-sdb-aggregator-0 -- bash
+Defaulting container name to singlestore.
+Use 'kubectl describe pod/sample-sdb-aggregator-0 -n demo' to see all of the containers in this pod.
+
+[memsql@sample-sdb-aggregator-0 /]$ memsql -uroot -p"J0h_BUdJB8mDO31u"
+singlestore-client: [Warning] Using a password on the command line interface can be insecure.
+Welcome to the MySQL monitor. Commands end with ; or \g.
+Your MySQL connection id is 1114
+Server version: 5.7.32 SingleStoreDB source distribution (compatible; MySQL Enterprise & MySQL Commercial)
+
+Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.
+
+Oracle is a registered trademark of Oracle Corporation and/or its
+affiliates. Other names may be trademarks of their respective
+owners.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+singlestore> show databases;
++--------------------+
+| Database |
++--------------------+
+| cluster |
+| information_schema |
+| memsql |
+| singlestore_health |
++--------------------+
+4 rows in set (0.00 sec)
+
+singlestore> CREATE DATABASE playground;
+Query OK, 1 row affected (1.02 sec)
+
+singlestore> CREATE TABLE playground.equipment ( id INT NOT NULL AUTO_INCREMENT, type VARCHAR(50), quant INT, color VARCHAR(25), PRIMARY KEY(id));
+Query OK, 0 rows affected, 1 warning (0.27 sec)
+
+singlestore> SHOW TABLES IN playground;
++----------------------+
+| Tables_in_playground |
++----------------------+
+| equipment |
++----------------------+
+1 row in set (0.00 sec)
+
+singlestore> INSERT INTO playground.equipment (type, quant, color) VALUES ("slide", 2, "blue");
+Query OK, 1 row affected (1.15 sec)
+
+singlestore> SELECT * FROM playground.equipment;
++----+-------+-------+-------+
+| id | type | quant | color |
++----+-------+-------+-------+
+| 1 | slide | 2 | blue |
++----+-------+-------+-------+
+1 row in set (0.14 sec)
+
+singlestore> exit
+Bye
+```
+You can also connect with database management tools like [singlestore-studio](https://docs.singlestore.com/db/v8.5/reference/singlestore-tools-reference/singlestore-studio/)
+
+You can simply access SingleStore Studio by forwarding the primary service port to any of your localhost ports. Alternatively, accessing it through the external IP's 8081 port is also an option.
+
+```bash
+$ kubectl port-forward -n demo service/sample-sdb 8081
+Forwarding from 127.0.0.1:8081 -> 8081
+Forwarding from [::1]:8081 -> 8081
+```
+Let's open your browser and go to http://localhost:8081 (or https://localhost:8081 with TLS), then click on the `Add or Create Cluster` option.
+Then choose `Add Existing Cluster`, click on `next`, and you will get an interface like the one below:
+
+
+
+
+
+After providing all the information, you will see a UI like the image below.
+
+
+
+
+
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl patch -n demo singlestore/sample-sdb -p '{"spec":{"deletionPolicy":"WipeOut"}}' --type="merge"
+kubectl delete -n demo singlestore/sample-sdb
+kubectl delete ns demo
+```
diff --git a/docs/guides/singlestore/concepts/_index.md b/docs/guides/singlestore/concepts/_index.md
new file mode 100644
index 0000000000..1d9314c17b
--- /dev/null
+++ b/docs/guides/singlestore/concepts/_index.md
@@ -0,0 +1,10 @@
+---
+title: SingleStore Concepts
+menu:
+ docs_{{ .version }}:
+ identifier: sdb-concepts-singlestore
+ name: Concepts
+ parent: guides-singlestore
+ weight: 20
+menu_name: docs_{{ .version }}
+---
diff --git a/docs/guides/singlestore/concepts/appbinding.md b/docs/guides/singlestore/concepts/appbinding.md
new file mode 100644
index 0000000000..56aec98790
--- /dev/null
+++ b/docs/guides/singlestore/concepts/appbinding.md
@@ -0,0 +1,152 @@
+---
+title: AppBinding CRD
+menu:
+ docs_{{ .version }}:
+ identifier: sdb-appbinding-concepts
+ name: AppBinding
+ parent: sdb-concepts-singlestore
+ weight: 20
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# AppBinding
+
+## What is AppBinding
+
+An `AppBinding` is a Kubernetes `CustomResourceDefinition`(CRD) which points to an application using either its URL (usually for a non-Kubernetes resident service instance) or a Kubernetes service object (if self-hosted in a Kubernetes cluster), some optional parameters and a credential secret. To learn more about AppBinding and the problems it solves, please read this blog post: [The case for AppBinding](https://appscode.com/blog/post/the-case-for-appbinding).
+
+If you deploy a database using [KubeDB](https://kubedb.com/docs/0.11.0/concepts/), an `AppBinding` object will be created automatically for it. Otherwise, you have to create an `AppBinding` object manually, pointing to your desired database.
+
+KubeDB uses [Stash](https://appscode.com/products/stash/) to perform backup/recovery of databases. Stash needs to know how to connect with a target database and the credentials necessary to access it. This is done via an `AppBinding`.
+
+## AppBinding CRD Specification
+
+Like any official Kubernetes resource, an `AppBinding` has `TypeMeta`, `ObjectMeta` and `Spec` sections. However, unlike other Kubernetes resources, it does not have a `Status` section.
+
+An `AppBinding` object created by `KubeDB` for SingleStore database is shown below,
+
+```yaml
+apiVersion: appcatalog.appscode.com/v1alpha1
+kind: AppBinding
+metadata:
+ annotations:
+ kubectl.kubernetes.io/last-applied-configuration: |
+ {"apiVersion":"kubedb.com/v1alpha2","kind":"Singlestore","metadata":{"annotations":{},"name":"sdb","namespace":"demo"},"spec":{"deletionPolicy":"WipeOut","licenseSecret":{"name":"license-secret"},"storageType":"Durable","topology":{"aggregator":{"podTemplate":{"spec":{"containers":[{"name":"singlestore","resources":{"limits":{"cpu":"600m","memory":"2Gi"},"requests":{"cpu":"600m","memory":"2Gi"}}}]}},"replicas":2,"storage":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}}}},"leaf":{"podTemplate":{"spec":{"containers":[{"name":"singlestore","resources":{"limits":{"cpu":"600m","memory":"2Gi"},"requests":{"cpu":"600m","memory":"2Gi"}}}]}},"replicas":2,"storage":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"10Gi"}}}}},"version":"8.7.10"}}
+ creationTimestamp: "2024-08-15T09:04:57Z"
+ generation: 1
+ labels:
+ app.kubernetes.io/component: database
+ app.kubernetes.io/instance: sdb
+ app.kubernetes.io/managed-by: kubedb.com
+ app.kubernetes.io/name: singlestores.kubedb.com
+ name: sdb
+ namespace: demo
+ ownerReferences:
+ - apiVersion: kubedb.com/v1alpha2
+ blockOwnerDeletion: true
+ controller: true
+ kind: Singlestore
+ name: sdb
+ uid: efeededa-7dc8-4e98-b4e4-0e0603adc32c
+ resourceVersion: "5633763"
+ uid: 77dc8123-52f6-4832-a6e2-478795e487b7
+spec:
+ appRef:
+ apiGroup: kubedb.com
+ kind: Singlestore
+ name: sdb
+ namespace: demo
+ clientConfig:
+ service:
+ name: sdb
+ path: /
+ port: 3306
+ scheme: tcp
+ url: tcp(sdb.demo.svc:3306)/
+ parameters:
+ apiVersion: config.kubedb.com/v1alpha1
+ kind: SinglestoreConfiguration
+ masterAggregator: sdb-aggregator-0.sdb-pods.demo.svc
+ stash:
+ addon:
+ backupTask:
+ name: ""
+ restoreTask:
+ name: ""
+ secret:
+ name: sdb-root-cred
+ type: kubedb.com/singlestore
+ version: 8.7.10
+```
+
+Here, we are going to describe the sections of an `AppBinding` crd.
+
+### AppBinding `Spec`
+
+An `AppBinding` object has the following fields in the `spec` section:
+
+#### spec.type
+
+`spec.type` is an optional field that indicates the type of the app that this `AppBinding` is pointing to. Stash uses this field to resolve the values of `TARGET_APP_TYPE`, `TARGET_APP_GROUP` and `TARGET_APP_RESOURCE` variables of [BackupBlueprint](https://appscode.com/products/stash/latest/concepts/crds/backupblueprint/) object.
+
+This field follows the format `<app group>/<resource kind>`. The above AppBinding is pointing to a `singlestore` resource under the `kubedb.com` group.
+
+Here, the variables are parsed as follows:
+
+| Variable | Usage |
+| --------------------- |--------------------------------------------------------------------------------------------------------------------------------------|
+| `TARGET_APP_GROUP` | Represents the application group where the respective app belongs (i.e: `kubedb.com`). |
+| `TARGET_APP_RESOURCE` | Represents the resource under that application group that this appbinding represents (i.e: `singlestore`). |
+| `TARGET_APP_TYPE` | Represents the complete type of the application. It's simply `TARGET_APP_GROUP/TARGET_APP_RESOURCE` (i.e: `kubedb.com/singlestore`). |
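+
+For the sample AppBinding above, the resolved value of `spec.type` looks like this:
+
+```yaml
+spec:
+  type: kubedb.com/singlestore
+```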
+
+#### spec.secret
+
+`spec.secret` specifies the name of the secret which contains the credentials that are required to access the database. This secret must be in the same namespace as the `AppBinding`.
+
+This secret must contain the following keys:
+
+SingleStore :
+
+| Key | Usage |
+|------------|------------------------------------------------|
+| `username` | Username of the target database. |
+| `password` | Password for the user specified by `username`. |
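+
+If you create the `AppBinding` manually, a secret with these keys can be created like below. This is only a sketch; the secret name and password here are illustrative:
+
+```bash
+kubectl create secret generic sdb-root-cred -n demo \
+  --from-literal=username=root \
+  --from-literal=password='<your-password>'
+```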
+
+#### spec.appRef
+appRef refers to the underlying application. It has 4 fields named `apiGroup`, `kind`, `name` & `namespace`.
+
+#### spec.clientConfig
+
+`spec.clientConfig` defines how to communicate with the target database. You can use either a URL or a Kubernetes service to connect with the database. You don't have to specify both of them.
+
+You can configure following fields in `spec.clientConfig` section:
+
+- **spec.clientConfig.url**
+
+ `spec.clientConfig.url` gives the location of the database, in standard URL form (i.e. `[scheme://]host:port/[path]`). This is particularly useful when the target database is running outside of the Kubernetes cluster. If your database is running inside the cluster, use `spec.clientConfig.service` section instead.
+
+> Note that, attempting to use a user or basic auth (e.g. `user:password@host:port`) is not allowed. Stash will insert them automatically from the respective secret. Fragments ("#...") and query parameters ("?...") are not allowed either.
+
+- **spec.clientConfig.service**
+
+ If you are running the database inside the Kubernetes cluster, you can use Kubernetes service to connect with the database. You have to specify the following fields in `spec.clientConfig.service` section if you manually create an `AppBinding` object.
+
+ - **name :** `name` indicates the name of the service that connects with the target database.
+ - **scheme :** `scheme` specifies the scheme (i.e. http, https) to use to connect with the database.
+ - **port :** `port` specifies the port where the target database is running.
+
+- **spec.clientConfig.insecureSkipTLSVerify**
+
+  `spec.clientConfig.insecureSkipTLSVerify` is used to disable TLS certificate verification while connecting with the database. We strongly discourage disabling TLS verification during backup. You should provide the respective CA bundle through the `spec.clientConfig.caBundle` field instead.
+
+- **spec.clientConfig.caBundle**
+
+ `spec.clientConfig.caBundle` is a PEM encoded CA bundle which will be used to validate the serving certificate of the database.
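+
+Putting it together, a minimal `clientConfig` for a manually created `AppBinding` might look like the sketch below. The values mirror the sample above, and the `caBundle` value is only an illustrative placeholder:
+
+```yaml
+clientConfig:
+  service:
+    name: sdb        # service that points to the database
+    scheme: tcp
+    port: 3306
+    path: /
+  # or, for a database running outside the cluster:
+  # url: tcp(sdb.demo.svc:3306)/
+  caBundle: <base64-encoded-PEM-CA-bundle>   # optional; used to verify the serving certificate
+```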
+
+## Next Steps
+
+- Learn how to use KubeDB to manage various databases [here](/docs/guides/README.md).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).
\ No newline at end of file
diff --git a/docs/guides/singlestore/concepts/autoscaler.md b/docs/guides/singlestore/concepts/autoscaler.md
new file mode 100644
index 0000000000..7cdf268819
--- /dev/null
+++ b/docs/guides/singlestore/concepts/autoscaler.md
@@ -0,0 +1,152 @@
+---
+title: SingleStoreAutoscaler CRD
+menu:
+ docs_{{ .version }}:
+ identifier: sdb-autoscaler-concepts
+ name: SingleStoreAutoscaler
+ parent: sdb-concepts-singlestore
+ weight: 26
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# SingleStoreAutoscaler
+
+## What is SingleStoreAutoscaler
+
+`SingleStoreAutoscaler` is a Kubernetes `Custom Resource Definitions` (CRD). It provides a declarative configuration for autoscaling [SingleStore](https://www.singlestore.com/) compute resources and storage of database components in a Kubernetes native way.
+
+## SingleStoreAutoscaler CRD Specifications
+
+Like any official Kubernetes resource, a `SingleStoreAutoscaler` has `TypeMeta`, `ObjectMeta`, `Spec` and `Status` sections.
+
+Here, some sample `SingleStoreAutoscaler` CRs for autoscaling different components of the database are given below:
+
+**Sample `SingleStoreAutoscaler` for cluster database:**
+
+```yaml
+apiVersion: autoscaling.kubedb.com/v1alpha1
+kind: SinglestoreAutoscaler
+metadata:
+ name: sdb-as-cluster
+ namespace: demo
+spec:
+ databaseRef:
+ name: sdb-sample
+ storage:
+ leaf:
+ trigger: "On"
+ usageThreshold: 30
+ scalingThreshold: 50
+ expansionMode: "Offline"
+ upperBound: "100Gi"
+ aggregator:
+ trigger: "On"
+ usageThreshold: 40
+ scalingThreshold: 50
+ expansionMode: "Offline"
+ upperBound: "100Gi"
+ compute:
+ aggregator:
+ trigger: "On"
+ podLifeTimeThreshold: 5m
+ minAllowed:
+ cpu: 900m
+ memory: 3000Mi
+ maxAllowed:
+ cpu: 2000m
+ memory: 6Gi
+ controlledResources: ["cpu", "memory"]
+ resourceDiffPercentage: 10
+ leaf:
+ trigger: "On"
+ podLifeTimeThreshold: 5m
+ minAllowed:
+ cpu: 900m
+ memory: 3000Mi
+ maxAllowed:
+ cpu: 2000m
+ memory: 6Gi
+ controlledResources: ["cpu", "memory"]
+ resourceDiffPercentage: 10
+```
+
+**Sample `SingleStoreAutoscaler` for standalone database:**
+
+```yaml
+apiVersion: autoscaling.kubedb.com/v1alpha1
+kind: SinglestoreAutoscaler
+metadata:
+ name: sdb-as-standalone
+ namespace: demo
+spec:
+ databaseRef:
+ name: sdb-standalone
+ storage:
+ node:
+ trigger: "On"
+ usageThreshold: 40
+ scalingThreshold: 50
+ expansionMode: "Offline"
+ upperBound: "100Gi"
+ compute:
+ node:
+ trigger: "On"
+ podLifeTimeThreshold: 5m
+ minAllowed:
+ cpu: 900m
+ memory: 3000Mi
+ maxAllowed:
+ cpu: 2000m
+ memory: 6Gi
+ controlledResources: ["cpu", "memory"]
+ resourceDiffPercentage: 10
+```
+
+Here, we are going to describe the various sections of a `SingleStoreAutoscaler` crd.
+
+A `SingleStoreAutoscaler` object has the following fields in the `spec` section.
+
+### spec.databaseRef
+
+`spec.databaseRef` is a required field that points to the [SingleStore](/docs/guides/singlestore/concepts/singlestore.md) object for which the autoscaling will be performed. This field consists of the following sub-field:
+
+- **spec.databaseRef.name :** specifies the name of the [SingleStore](/docs/guides/singlestore/concepts/singlestore.md) object.
+
+### spec.opsRequestOptions
+These are the options to pass to the internally created OpsRequest CR. `opsRequestOptions` has three fields. They have been described in detail [here](/docs/guides/singlestore/concepts/opsrequest.md#specreadinesscriteria).
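+
+For example, such options might be set like the sketch below. This is only an assumption of how the fields are nested; the exact field names are described in the linked OpsRequest concept page:
+
+```yaml
+spec:
+  databaseRef:
+    name: sdb-sample
+  opsRequestOptions:
+    apply: IfReady
+    timeout: 5m
+```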
+
+### spec.compute
+
+`spec.compute` specifies the autoscaling configuration for the compute resources, i.e. cpu and memory, of the database components. This field consists of the following sub-fields:
+
+- `spec.compute.node` indicates the desired compute autoscaling configuration for a standalone SingleStore database (as in the standalone sample above).
+- `spec.compute.aggregator` indicates the desired compute autoscaling configuration for the aggregator nodes of cluster mode.
+- `spec.compute.leaf` indicates the desired compute autoscaling configuration for the leaf nodes of cluster mode.
+
+All of them have the following sub-fields:
+
+- `trigger` indicates if compute autoscaling is enabled for this component of the database. If "On" then compute autoscaling is enabled. If "Off" then compute autoscaling is disabled.
+- `minAllowed` specifies the minimal amount of resources that will be recommended, default is no minimum.
+- `maxAllowed` specifies the maximum amount of resources that will be recommended, default is no maximum.
+- `controlledResources` specifies which type of compute resources (cpu and memory) are allowed for autoscaling. Allowed values are "cpu" and "memory".
+- `containerControlledValues` specifies which resource values should be controlled. Allowed values are "RequestsAndLimits" and "RequestsOnly".
+- `resourceDiffPercentage` specifies the minimum resource difference between the recommended value and the current value, in percentage. If the difference percentage is greater than this value, autoscaling will be triggered.
+- `podLifeTimeThreshold` specifies the minimum lifetime that at least one of the pods must have before autoscaling is triggered.
+
+### spec.storage
+
+`spec.storage` specifies the autoscaling configuration for the storage resources of the database components. This field consists of the following sub-fields:
+
+- `spec.storage.node` indicates the desired storage autoscaling configuration for a standalone SingleStore database (as in the standalone sample above).
+- `spec.storage.leaf` indicates the desired storage autoscaling configuration for the leaf nodes of cluster mode.
+- `spec.storage.aggregator` indicates the desired storage autoscaling configuration for the aggregator nodes of cluster mode.
+
+All of them have the following sub-fields:
+
+- `trigger` indicates if storage autoscaling is enabled for this component of the database. If "On" then storage autoscaling is enabled. If "Off" then storage autoscaling is disabled.
+- `usageThreshold` indicates the usage percentage threshold; if the current storage usage exceeds this threshold, storage autoscaling will be triggered.
+- `scalingThreshold` indicates the percentage of the current storage that will be scaled.
+- `expansionMode` indicates the volume expansion mode.
diff --git a/docs/guides/singlestore/concepts/catalog.md b/docs/guides/singlestore/concepts/catalog.md
new file mode 100644
index 0000000000..b9b143ff4e
--- /dev/null
+++ b/docs/guides/singlestore/concepts/catalog.md
@@ -0,0 +1,105 @@
+---
+title: SingleStoreVersion CRD
+menu:
+ docs_{{ .version }}:
+ identifier: sdb-catalog-concepts
+ name: SingleStoreVersion
+ parent: sdb-concepts-singlestore
+ weight: 15
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# SingleStoreVersion
+
+## What is SingleStoreVersion
+
+`SingleStoreVersion` is a Kubernetes `Custom Resource Definitions` (CRD). It provides a declarative configuration to specify the docker images to be used for [SingleStore](https://www.singlestore.com/) database deployed with KubeDB in a Kubernetes native way.
+
+When you install KubeDB, a `SingleStoreVersion` custom resource will be created automatically for every supported SingleStore version. You have to specify the name of the `SingleStoreVersion` crd in the `spec.version` field of the [SingleStore](/docs/guides/singlestore/concepts/singlestore.md) crd. Then, KubeDB will use the docker images specified in the `SingleStoreVersion` crd to create your expected database.
+
+Using a separate crd for specifying the respective docker images and pod security policy names allows us to modify the images and policies independently of the KubeDB operator. This also allows the users to use a custom image for the database.
+
+## SingleStoreVersion Spec
+
+As with all other Kubernetes objects, a SingleStoreVersion needs `apiVersion`, `kind`, and `metadata` fields. It also needs a `.spec` section.
+
+```yaml
+apiVersion: catalog.kubedb.com/v1alpha1
+kind: SinglestoreVersion
+metadata:
+ name: 8.7.10
+spec:
+ coordinator:
+ image: ghcr.io/kubedb/singlestore-coordinator:v0.3.0
+ db:
+ image: ghcr.io/appscode-images/singlestore-node:alma-8.7.10-95e2357384
+ initContainer:
+ image: ghcr.io/kubedb/singlestore-init:8.7.10-v1
+ securityContext:
+ runAsGroup: 998
+ runAsUser: 999
+ standalone:
+ image: singlestore/cluster-in-a-box:alma-8.7.10-95e2357384-4.1.0-1.17.14
+ updateConstraints:
+ allowlist:
+ - '> 8.7.10, <= 8.7.10'
+ version: 8.7.10
+```
+
+### metadata.name
+
+`metadata.name` is a required field that specifies the name of the `SingleStoreVersion` crd. You have to specify this name in `spec.version` field of [SingleStore](/docs/guides/singlestore/concepts/singlestore.md) crd.
+
+We follow this convention for naming SingleStoreVersion crd:
+
+- Name format: `{Original SingleStore image version}-{modification tag}`
+
+We modify the original SingleStore docker image to support SingleStore clustering and re-tag the image with a modification tag like v1, v2 etc. An image with a higher modification tag will have more features than the images with a lower modification tag. Hence, it is recommended to use the SingleStoreVersion crd with the highest modification tag to enjoy the latest features.
+
+### spec.version
+
+`spec.version` is a required field that specifies the original version of SingleStore database that has been used to build the docker image specified in `spec.db.image` field.
+
+### spec.deprecated
+
+`spec.deprecated` is an optional field that specifies whether the docker images specified here are supported by the current KubeDB operator.
+
+The default value of this field is `false`. If `spec.deprecated` is set to `true`, the KubeDB operator will skip processing this CRD object and will add an event to the CRD object specifying that the DB version is deprecated.
+
+### spec.db.image
+
+`spec.db.image` is a required field that specifies the docker image which will be used by the KubeDB operator, through the PetSet it creates, to run the expected SingleStore database.
+
+### spec.coordinator.image
+
+`spec.coordinator.image` is a required field that specifies the docker image which will be used for the coordinator (sidecar) container of the SingleStore database pods.
+
+### spec.initContainer.image
+
+`spec.initContainer.image` is a required field that specifies the image for init container.
+
+### spec.updateConstraints
+`updateConstraints` specifies the constraints that need to be considered during a version update. Here, `allowlist` contains the versions that are allowed for updating from the current version. An empty `allowlist` indicates that all versions are accepted except the ones in the `denylist`.
+On the other hand, `denylist` contains all the rejected versions for the update request. An empty `denylist` indicates that no version is rejected.
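+
+A sketch of such constraints is given below; the version ranges are illustrative only:
+
+```yaml
+spec:
+  updateConstraints:
+    allowlist:
+      - ">= 8.5.7, <= 8.7.10"   # versions allowed as update targets (illustrative)
+    # denylist:
+    #   - "< 8.5.7"             # versions rejected as update targets (illustrative)
+```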
+
+### spec.podSecurityPolicies.databasePolicyName
+
+`spec.podSecurityPolicies.databasePolicyName` is a required field that specifies the name of the pod security policy required to get the database server pod(s) running. If you want to use a custom pod security policy, you also have to allow it in the KubeDB operator, for example by setting the `additionalPodSecurityPolicies` values while installing/upgrading KubeDB:
+
+```bash
+helm upgrade -i kubedb oci://ghcr.io/appscode-charts/kubedb \
+ --namespace kubedb --create-namespace \
+ --set additionalPodSecurityPolicies[0]=custom-db-policy \
+ --set additionalPodSecurityPolicies[1]=custom-snapshotter-policy \
+ --set-file global.license=/path/to/the/license.txt \
+ --wait --burst-limit=10000 --debug
+```
+
+## Next Steps
+
+- Learn about SingleStore crd [here](/docs/guides/singlestore/concepts/singlestore.md).
+- Deploy your first SingleStore database with KubeDB by following the guide [here](/docs/guides/singlestore/quickstart/quickstart.md).
\ No newline at end of file
diff --git a/docs/guides/singlestore/concepts/opsrequest.md b/docs/guides/singlestore/concepts/opsrequest.md
new file mode 100644
index 0000000000..3a04c5efb4
--- /dev/null
+++ b/docs/guides/singlestore/concepts/opsrequest.md
@@ -0,0 +1,475 @@
+---
+title: SingleStoreOpsRequests CRD
+menu:
+ docs_{{ .version }}:
+ identifier: sdb-opsrequest-concepts
+ name: SingleStoreOpsRequest
+ parent: sdb-concepts-singlestore
+ weight: 25
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# SingleStoreOpsRequest
+
+## What is SingleStoreOpsRequest
+
+`SingleStoreOpsRequest` is a Kubernetes `Custom Resource Definitions` (CRD). It provides a declarative configuration for [SingleStore](https://www.singlestore.com/) administrative operations like database version updating, horizontal scaling, vertical scaling etc. in a Kubernetes native way.
+
+## SingleStoreOpsRequest CRD Specifications
+
+Like any official Kubernetes resource, a `SingleStoreOpsRequest` has `TypeMeta`, `ObjectMeta`, `Spec` and `Status` sections.
+
+Here, some sample `SingleStoreOpsRequest` CRs for different administrative operations are given below:
+
+**Sample `SingleStoreOpsRequest` for updating database:**
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: SinglestoreOpsRequest
+metadata:
+ name: sdb-version-upd
+ namespace: demo
+spec:
+ type: UpdateVersion
+ databaseRef:
+ name: sdb
+ updateVersion:
+ targetVersion: 8.7.10
+ timeout: 5m
+ apply: IfReady
+```
+
+**Sample `SingleStoreOpsRequest` Objects for Horizontal Scaling of different component of the database:**
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: SinglestoreOpsRequest
+metadata:
+ name: sdb-hscale
+ namespace: demo
+spec:
+ type: HorizontalScaling
+ databaseRef:
+ name: sdb
+ horizontalScaling:
+ aggregator: 2
+ leaf: 3
+```
+
+**Sample `SingleStoreOpsRequest` Objects for Vertical Scaling of different component of the database:**
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: SinglestoreOpsRequest
+metadata:
+ name: sdb-scale
+ namespace: demo
+spec:
+ type: VerticalScaling
+ databaseRef:
+ name: sdb-sample
+ verticalScaling:
+ leaf:
+ resources:
+ requests:
+ memory: "2500Mi"
+ cpu: "0.7"
+ limits:
+ memory: "2500Mi"
+ cpu: "0.7"
+ coordinator:
+ resources:
+ requests:
+ memory: "2500Mi"
+ cpu: "0.7"
+ limits:
+ memory: "2500Mi"
+ cpu: "0.7"
+ node:
+ resources:
+ requests:
+ memory: "2500Mi"
+ cpu: "0.7"
+ limits:
+ memory: "2500Mi"
+ cpu: "0.7"
+```
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: SinglestoreOpsRequest
+metadata:
+ name: sdb-scale
+ namespace: demo
+spec:
+ type: VerticalScaling
+ databaseRef:
+ name: sdb-standalone
+ verticalScaling:
+ node:
+ resources:
+ requests:
+ memory: "2500Mi"
+ cpu: "0.7"
+ limits:
+ memory: "2500Mi"
+ cpu: "0.7"
+```
+
+**Sample `SingleStoreOpsRequest` Objects for Reconfiguring different database components:**
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: SinglestoreOpsRequest
+metadata:
+ name: sdbops-reconfigure-config
+ namespace: demo
+spec:
+ type: Reconfigure
+ databaseRef:
+ name: sdb-sample
+ configuration:
+ aggregator:
+ applyConfig:
+ sdb-apply.cnf: |-
+ max_connections = 550
+ leaf:
+ applyConfig:
+ sdb-apply.cnf: |-
+ max_connections = 550
+```
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: SinglestoreOpsRequest
+metadata:
+ name: sdbops-reconfigure-config
+ namespace: demo
+spec:
+ type: Reconfigure
+ databaseRef:
+ name: sdb-standalone
+ configuration:
+ node:
+ applyConfig:
+ sdb-apply.cnf: |-
+ max_connections = 550
+```
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: SinglestoreOpsRequest
+metadata:
+ name: sdbops-reconfigure-config
+ namespace: demo
+spec:
+ type: Reconfigure
+ databaseRef:
+ name: sdb-sample
+ configuration:
+ aggregator:
+ configSecret:
+ name: sdb-new-custom-config
+ leaf:
+ configSecret:
+ name: sdb-new-custom-config
+```
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: SinglestoreOpsRequest
+metadata:
+ name: sdbops-reconfigure-config
+ namespace: demo
+spec:
+ type: Reconfigure
+ databaseRef:
+ name: sdb-standalone
+ configuration:
+ node:
+ configSecret:
+ name: sdb-new-custom-config
+```
+
+**Sample `SingleStoreOpsRequest` Objects for Volume Expansion of different database components:**
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: SinglestoreOpsRequest
+metadata:
+ name: sdb-volume-ops
+ namespace: demo
+spec:
+ type: VolumeExpansion
+ databaseRef:
+ name: sdb-sample
+ volumeExpansion:
+ mode: "Offline"
+ aggregator: 10Gi
+ leaf: 20Gi
+```
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: SinglestoreOpsRequest
+metadata:
+ name: sdb-volume-ops
+ namespace: demo
+spec:
+ type: VolumeExpansion
+ databaseRef:
+ name: sdb-standalone
+ volumeExpansion:
+ mode: "Online"
+ node: 20Gi
+```
+
+**Sample `SingleStoreOpsRequest` Objects for Reconfiguring TLS of the database:**
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: SinglestoreOpsRequest
+metadata:
+ name: sdb-tls-reconfigure
+ namespace: demo
+spec:
+ type: ReconfigureTLS
+ databaseRef:
+ name: sdb-sample
+ tls:
+ issuerRef:
+ name: sdb-issuer
+ kind: Issuer
+ apiGroup: "cert-manager.io"
+ certificates:
+ - alias: client
+ subject:
+ organizations:
+ - singlestore
+ organizationalUnits:
+ - client
+```
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: SinglestoreOpsRequest
+metadata:
+ name: sdb-tls-reconfigure
+ namespace: demo
+spec:
+ type: ReconfigureTLS
+ databaseRef:
+ name: sdb-sample
+ tls:
+ rotateCertificates: true
+```
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: SinglestoreOpsRequest
+metadata:
+ name: sdb-tls-reconfigure
+ namespace: demo
+spec:
+ type: ReconfigureTLS
+ databaseRef:
+ name: sdb-sample
+ tls:
+ remove: true
+```
+
+Here, we are going to describe the various sections of a `SingleStoreOpsRequest` crd.
+
+A `SingleStoreOpsRequest` object has the following fields in the `spec` section.
+
+### spec.databaseRef
+
+`spec.databaseRef` is a required field that points to the [SingleStore](/docs/guides/singlestore/concepts/singlestore.md) object for which the administrative operations will be performed. This field consists of the following sub-field:
+
+- **spec.databaseRef.name :** specifies the name of the [SingleStore](/docs/guides/singlestore/concepts/singlestore.md) object.
+
+### spec.type
+
+`spec.type` specifies the kind of operation that will be applied to the database. Currently, the following types of operations are allowed in `SingleStoreOpsRequest`.
+
+- `Upgrade` / `UpdateVersion`
+- `HorizontalScaling`
+- `VerticalScaling`
+- `VolumeExpansion`
+- `Reconfigure`
+- `ReconfigureTLS`
+- `Restart`
+
+> You can perform only one type of operation on a single `SingleStoreOpsRequest` CR. For example, if you want to update your database and scale up its replica then you have to create two separate `SingleStoreOpsRequest`. At first, you have to create a `SingleStoreOpsRequest` for updating. Once it is completed, then you can create another `SingleStoreOpsRequest` for scaling.
+
+> Note: There is an exception to the above statement. It is possible to specify both `spec.configuration` & `spec.verticalScaling` in an OpsRequest of type `VerticalScaling`.
+
+### spec.updateVersion
+
+If you want to update your SingleStore version, you have to specify the `spec.updateVersion` section that specifies the desired version information. This field consists of the following sub-field:
+
+- `spec.updateVersion.targetVersion` refers to a [SingleStoreVersion](/docs/guides/singlestore/concepts/catalog.md) CR that contains the SingleStore version information where you want to update.
+
+Have a look at the [`updateConstraints`](/docs/guides/singlestore/concepts/catalog.md#specupdateconstraints) of the SinglestoreVersion spec to know which versions are supported for updating from the current version.
+```bash
+kubectl get sdbversion -o=jsonpath='{.spec.updateConstraints}' | jq
+```
+
+> You can only update between SingleStore versions. KubeDB does not support downgrade for SingleStore.
+
+### spec.horizontalScaling
+
+If you want to scale-up or scale-down your SingleStore cluster or different components of it, you have to specify `spec.horizontalScaling` section. This field consists of the following sub-field:
+
+- `spec.horizontalScaling.aggregator.replicas` indicates the desired number of aggregator nodes for cluster mode after scaling.
+- `spec.horizontalScaling.leaf.replicas` indicates the desired number of leaf nodes for cluster mode after scaling.
+
+### spec.verticalScaling
+
+`spec.verticalScaling` is a required field specifying the information of `SingleStore` resources like `cpu`, `memory` etc. that will be scaled. This field consists of the following sub-fields:
+
+- `spec.verticalScaling.node` indicates the desired resources for standalone SingleStore database after scaling.
+- `spec.verticalScaling.aggregator` indicates the desired resources for aggregator node of SingleStore cluster after scaling.
+- `spec.verticalScaling.leaf` indicates the desired resources for leaf nodes of SingleStore cluster after scaling.
+- `spec.verticalScaling.coordinator` indicates the desired resources for the coordinator container.
+
+All of them have the below structure:
+
+```yaml
+requests:
+ memory: "2000Mi"
+ cpu: "0.7"
+limits:
+ memory: "3000Mi"
+ cpu: "0.9"
+```
+
+Here, when you specify the resource request, the scheduler uses this information to decide which node to place the container of the Pod on; and when you specify a resource limit for the container, the `kubelet` enforces that limit so that the running container is not allowed to use more of that resource than the limit you set. You can find more details [here](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/).
+
+### spec.volumeExpansion
+
+> To use the volume expansion feature the storage class must support volume expansion
+
+If you want to expand the volume of your SingleStore cluster or different components of it, you have to specify `spec.volumeExpansion` section. This field consists of the following sub-field:
+
+- `spec.volumeExpansion.mode` specifies the volume expansion mode. Supported values are `Online` & `Offline`. The default is `Online`.
+- `spec.volumeExpansion.node` indicates the desired size for the persistent volume of a standalone SingleStore database.
+- `spec.volumeExpansion.aggregator` indicates the desired size for the persistent volume of aggregator node of cluster.
+- `spec.volumeExpansion.leaf` indicates the desired size for the persistent volume of leaf node of cluster.
+
+All of them refer to [Quantity](https://v1-22.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.22/#quantity-resource-core) types of Kubernetes.
+
+Example usage of this field is given below:
+
+```yaml
+spec:
+ volumeExpansion:
+ aggregator: "20Gi"
+```
+
+This will expand the volume size of all the aggregator nodes to 20 GB.
+
+### spec.configuration
+
+If you want to reconfigure your running SingleStore cluster or different components of it with a new custom configuration, you have to specify the `spec.configuration` section. This field consists of the following sub-fields:
+
+- `spec.configuration.node` indicates the desired new custom configuration for a standalone SingleStore database (as in the standalone samples above).
+- `spec.configuration.aggregator` indicates the desired new custom configuration for the aggregator nodes of cluster mode.
+- `spec.configuration.leaf` indicates the desired new custom configuration for the leaf nodes of cluster mode.
+
+All of them have the following sub-fields:
+
+- `configSecret` points to a secret in the same namespace as the SingleStore resource, which contains the new custom configuration. If any `configSecret` was set before in the database, this secret will replace it.
+- `applyConfig` contains the new custom config as a string which will be merged with the previous configuration. It is a map where the supported key is `sdb-apply.cnf`, and the value represents the corresponding configuration. The KubeDB provisioner operator applies it directly while reconciling. For example:
+
+```yaml
+ applyConfig:
+ sdb-apply.cnf: |-
+ max_connections = 550
+```
+
+- `removeCustomConfig` is a boolean field. Set this field to `true` if you want to remove all the custom configuration from the deployed SingleStore server.
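+
+A sketch of a `Reconfigure` request that drops the existing custom configuration is given below, assuming `removeCustomConfig` is set per component just like `configSecret` and `applyConfig` above:
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: SinglestoreOpsRequest
+metadata:
+  name: sdbops-remove-config
+  namespace: demo
+spec:
+  type: Reconfigure
+  databaseRef:
+    name: sdb-sample
+  configuration:
+    aggregator:
+      removeCustomConfig: true
+    leaf:
+      removeCustomConfig: true
+```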
+
+### spec.tls
+
+If you want to reconfigure the TLS configuration of your database, i.e. add TLS, remove TLS, update the issuer/cluster issuer or certificates, or rotate the certificates, you have to specify the `spec.tls` section. This field consists of the following sub-fields:
+
+- `spec.tls.issuerRef` specifies the issuer name, kind and api group.
+- `spec.tls.certificates` specifies the certificates. You can learn more about this field from [here](/docs/guides/singlestore/concepts/singlestore.md#spectls).
+- `spec.tls.rotateCertificates` specifies that we want to rotate the certificate of this database.
+- `spec.tls.remove` specifies that we want to remove tls from this database.
+
+
+### spec.timeout
+As we internally retry the ops request steps multiple times, this `timeout` field helps the users to specify a timeout for those steps of the ops request (e.g. `5m`, as in the update sample above).
+If a step doesn't finish within the specified timeout, the ops request will result in failure.
+
+### spec.apply
+This field controls the execution of the opsRequest depending on the database state. It has two supported values: `Always` & `IfReady`.
+Use `IfReady` if you want to process the opsRequest only when the database is `Ready`. And use `Always` if you want to process the execution of the opsRequest irrespective of the database state.
+
+
+### SingleStoreOpsRequest `Status`
+
+`.status` describes the current state and progress of a `SingleStoreOpsRequest` operation. It has the following fields:
+
+### status.phase
+
+`status.phase` indicates the overall phase of the operation for this `SingleStoreOpsRequest`. It can have the following values:
+
+| Phase | Meaning |
+|-------------|----------------------------------------------------------------------------------------|
+| Successful | KubeDB has successfully performed the operation requested in the SingleStoreOpsRequest |
+| Progressing | KubeDB has started the execution of the applied SingleStoreOpsRequest |
+| Failed | KubeDB has failed the operation requested in the SingleStoreOpsRequest |
+| Denied | KubeDB has denied the operation requested in the SingleStoreOpsRequest |
+| Skipped | KubeDB has skipped the operation requested in the SingleStoreOpsRequest |
+
+Important: The Ops-manager operator can skip an opsRequest only if its execution has not been started yet and there is a newer opsRequest applied in the cluster with the same `spec.type` as the skipped one.
+
+### status.observedGeneration
+
+`status.observedGeneration` shows the most recent generation observed by the `SingleStoreOpsRequest` controller.
+
+### status.conditions
+
+`status.conditions` is an array that specifies the conditions of different steps of `SingleStoreOpsRequest` processing. Each condition entry has the following fields:
+
+- `type` specifies the type of the condition. SingleStoreOpsRequest has the following types of conditions:
+
+| Type | Meaning |
+|-----------------------------|----------------------------------------------------------------------------|
+| `Progressing` | Specifies that the operation is now in the progressing state |
+| `Successful` | Specifies such a state that the operation on the database was successful. |
+| `HaltDatabase` | Specifies such a state that the database is halted by the operator |
+| `ResumeDatabase` | Specifies such a state that the database is resumed by the operator |
+| `Failed` | Specifies such a state that the operation on the database failed. |
+| `StartingBalancer` | Specifies such a state that the balancer has successfully started |
+| `StoppingBalancer` | Specifies such a state that the balancer has successfully stopped |
+| `UpdatePetSetResources` | Specifies such a state that the PetSet resources has been updated |
+| `UpdateAggregatorResources` | Specifies such a state that the Aggregator resources has been updated |
+| `UpdateLeafResources` | Specifies such a state that the Leaf resources has been updated |
+| `UpdateNodeResources` | Specifies such a state that the node has been updated |
+| `ScaleDownAggregator`       | Specifies the state of the scale down operation of the aggregator nodes     |
+| `ScaleUpAggregator`         | Specifies the state of the scale up operation of the aggregator nodes       |
+| `ScaleUpLeaf`               | Specifies the state of the scale up operation of the leaf nodes             |
+| `ScaleDownleaf`             | Specifies the state of the scale down operation of the leaf nodes           |
+| `VolumeExpansion`           | Specifies the state of the volume expansion operation of the database       |
+| `ReconfigureAggregator`     | Specifies the state of the reconfiguration of the aggregator nodes          |
+| `ReconfigureLeaf`           | Specifies the state of the reconfiguration of the leaf nodes                |
+| `ReconfigureNode`           | Specifies the state of the reconfiguration of the standalone nodes          |
+
+- The `status` field is a string, with possible values `True`, `False`, and `Unknown`.
+ - `status` will be `True` if the current transition succeeded.
+ - `status` will be `False` if the current transition failed.
+ - `status` will be `Unknown` if the current transition was denied.
+- The `message` field is a human-readable message indicating details about the condition.
+- The `reason` field is a unique, one-word, CamelCase reason for the condition's last transition.
+- The `lastTransitionTime` field provides a timestamp for when the operation last transitioned from one state to another.
+- The `observedGeneration` shows the most recent condition transition generation observed by the controller.
\ No newline at end of file
diff --git a/docs/guides/singlestore/concepts/singlestore.md b/docs/guides/singlestore/concepts/singlestore.md
new file mode 100644
index 0000000000..3d1b2fb66a
--- /dev/null
+++ b/docs/guides/singlestore/concepts/singlestore.md
@@ -0,0 +1,322 @@
+---
+title: SingleStore CRD
+menu:
+ docs_{{ .version }}:
+ identifier: sdb-singlestore-concepts
+ name: SingleStore
+ parent: sdb-concepts-singlestore
+ weight: 10
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+# SingleStore
+
+## What is SingleStore
+
+`SingleStore` is a Kubernetes `Custom Resource Definitions` (CRD). It provides declarative configuration for [SingleStore](https://www.singlestore.com/) in a Kubernetes native way. You only need to describe the desired database configuration in a SingleStore object, and the KubeDB operator will create Kubernetes objects in the desired state for you.
+
+## SingleStore Spec
+
+As with all other Kubernetes objects, a SingleStore needs `apiVersion`, `kind`, and `metadata` fields. It also needs a `.spec` section. Below is an example SingleStore object.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Singlestore
+metadata:
+ name: sdb-sample
+ namespace: demo
+spec:
+ version: "8.7.10"
+ topology:
+ aggregator:
+ replicas: 2
+ configSecret:
+ name: sdb-configuration
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "4Gi"
+ cpu: "1000m"
+ requests:
+ memory: "2Gi"
+ cpu: "500m"
+ storage:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
+ leaf:
+ replicas: 3
+ configSecret:
+ name: sdb-configuration
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "5Gi"
+ cpu: "1100m"
+ requests:
+ memory: "2Gi"
+ cpu: "600m"
+ storage:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 40Gi
+ storageType: Durable
+ licenseSecret:
+ name: license-secret
+ authSecret:
+ name: given-secret
+ init:
+ script:
+ configMap:
+ name: sdb-init-script
+ monitor:
+ agent: prometheus.io/operator
+ prometheus:
+ exporter:
+ port: 9104
+ serviceMonitor:
+ labels:
+ release: prometheus
+ interval: 10s
+ deletionPolicy: WipeOut
+ tls:
+ issuerRef:
+ apiGroup: cert-manager.io
+ kind: Issuer
+ name: sdb-issuer
+ certificates:
+ - alias: server
+ subject:
+ organizations:
+ - kubedb:server
+ dnsNames:
+ - localhost
+ ipAddresses:
+ - "127.0.0.1"
+ serviceTemplates:
+ - alias: primary
+ metadata:
+ annotations:
+ passMe: ToService
+ spec:
+ type: NodePort
+ ports:
+ - name: http
+ port: 9200
+```
+
+### spec.version
+
+`spec.version` is a required field specifying the name of the [SinglestoreVersion](/docs/guides/singlestore/concepts/catalog.md) crd where the docker images are specified. Currently, when you install KubeDB, it creates the following `SinglestoreVersion` resources,
+
+- `8.1.32`
+- `8.5.7`, `8.5.30`
+- `8.7.10`
+
+### spec.topology
+
+`spec.topology` is an optional field that enables you to specify the clustering mode.
+
+- `aggregator` and `leaf` are optional fields that configure cluster mode and contain the following fields:
+  - `replicas` specifies the number of `aggregator` or `leaf` nodes in cluster mode.
+  - `configSecret` is an optional field that points to a Secret used to hold custom SingleStore configuration.
+  - `podTemplate` provides a template for the database pods. KubeDB operator will pass the information provided in `podTemplate` to the PetSet created for the SingleStore database. KubeDB accepts the following fields to set in `podTemplate`:
+ - metadata:
+ - annotations (pod's annotation)
+ - controller:
+ - annotations (petset's annotation)
+ - spec:
+ - initContainers
+ - imagePullSecrets
+ - resources
+ - containers
+ - nodeSelector
+ - serviceAccountName
+ - securityContext
+ - tolerations
+ - imagePullSecrets
+ - podPlacementPolicy
+ - volumes
+ - If you set `spec.storageType` to `Durable`, then `storage` is a required field that specifies the StorageClass of PVCs dynamically allocated to store data for the database. This storage spec will be passed to the PetSet created by KubeDB operator to run database pods. You can specify any StorageClass available in your cluster with appropriate resource requests.
+ - `storage.storageClassName` is the name of the StorageClass used to provision PVCs. PVCs don’t necessarily have to request a class. A PVC with its storageClassName set equal to "" is always interpreted to be requesting a PV with no class, so it can only be bound to PVs with no class (no annotation or one set equal to ""). A PVC with no storageClassName is not quite the same and is treated differently by the cluster depending on whether the DefaultStorageClass admission plugin is turned on.
+ - `storage.accessModes` uses the same conventions as Kubernetes PVCs when requesting storage with specific access modes.
+ - `storage.resources` can be used to request specific quantities of storage. This follows the same resource model used by PVCs.
+ To learn how to configure `storage`, please visit the links below:
+ - https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims
+
+### spec.storageType
+
+`spec.storageType` is an optional field that specifies the type of storage to use for the database. It can be either `Durable` or `Ephemeral`. The default value of this field is `Durable`. If `Ephemeral` is used, then KubeDB will create the SingleStore database using an [emptyDir](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir) volume.
+
+### spec.licenseSecret
+
+`spec.licenseSecret` is a mandatory field that points to a secret used to pass the SingleStore license.
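+
+Such a secret can be created from your license like below (also shown in the configuration guide):
+
+```bash
+kubectl create secret generic -n demo license-secret \
+  --from-literal=username=license \
+  --from-literal=password='your-license-set-here'
+```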
+
+### spec.authSecret
+
+`spec.authSecret` is an optional field that points to a Secret used to hold credentials for `singlestore` root user. If not set, the KubeDB operator creates a new Secret `{singlestore-object-name}-cred` for storing the password for `singlestore` root user for each SingleStore object. If you want to use an existing secret please specify that when creating the SingleStore object using `spec.authSecret.name`.
+
+This secret contains a `user` key and a `password` key which contain the username and password respectively for the `singlestore` root user. Here, the value of the `user` key is fixed to be `root`.
+
+Secrets provided by users are not managed by KubeDB, and therefore, won't be modified or garbage collected by the KubeDB operator (version 0.13.0 and higher).
+
+Example:
+
+```bash
+$ kubectl create secret generic sdb-cred -n demo \
+--from-literal=user=root \
+--from-literal=password=6q8u_2jMOW-OOZXk
+secret "sdb-cred" created
+```
+
+```yaml
+apiVersion: v1
+data:
+ password: NnE4dV8yak1PVy1PT1pYaw==
+ user: cm9vdA==
+kind: Secret
+metadata:
+ name: sdb-cred
+ namespace: demo
+type: Opaque
+```
+
+#### Initialize via Script
+
+To initialize a SingleStore database using a script (shell script, sql script, etc.), set the `spec.init.script` section when creating a SingleStore object. It will execute files alphabetically with extensions `.sh`, `.sql` and `.sql.gz` that are found in the mounted volume. The scripts inside child folders will be skipped. The `script` field must have the following information:
+
+- [VolumeSource](https://kubernetes.io/docs/concepts/storage/volumes/#types-of-volumes): Where your script is loaded from.
+
+Below is an example showing how a script from a configMap can be used to initialize a SingleStore database.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Singlestore
+metadata:
+ name: sdb
+ namespace: demo
+spec:
+ version: 8.7.10
+ init:
+ script:
+ configMap:
+ name: sdb-init-script
+ licenseSecret:
+ name: license-secret
+```
+
+In the above example, the KubeDB operator will launch a Job to execute all the scripts of `sdb-init-script` in alphabetical order once the PetSet pods are running. For a more detailed tutorial on how to initialize from a script, please visit [here](/docs/guides/mysql/initialization/index.md).
+
+### spec.monitor
+
+SingleStore managed by KubeDB can be monitored with builtin-Prometheus and Prometheus operator. To learn more,
+
+- [Monitor SingleStore with builtin Prometheus](/docs/guides/singlestore/monitoring/builtin-prometheus/index.md)
+- [Monitor SingleStore with Prometheus operator](/docs/guides/singlestore/monitoring/prometheus-operator/index.md)
+
+### spec.tls
+
+`spec.tls` specifies the TLS/SSL configurations for the SingleStore.
+
+The following fields are configurable in the `spec.tls` section:
+
+- `issuerRef` is a reference to the `Issuer` or `ClusterIssuer` CR of [cert-manager](https://cert-manager.io/docs/concepts/issuer/) that will be used by `KubeDB` to generate necessary certificates.
+
+ - `apiGroup` is the group name of the resource being referenced. The value for `Issuer` or `ClusterIssuer` is "cert-manager.io" (cert-manager v0.12.0 and later).
+ - `kind` is the type of resource being referenced. KubeDB supports both `Issuer` and `ClusterIssuer` as values for this field.
+ - `name` is the name of the resource (`Issuer` or `ClusterIssuer`) being referenced.
+
+- `certificates` (optional) are a list of certificates used to configure the server and/or client certificate. It has the following fields:
+
+ - `alias` represents the identifier of the certificate. It has the following possible value:
+ - `server` is used for server certificate identification.
+ - `client` is used for client certificate identification.
+ - `metrics-exporter` is used for metrics exporter certificate identification.
+ - `secretName` (optional) specifies the k8s secret name that holds the certificates.
+  >This field is optional. If the user does not specify this field, the default secret name will be created in the following format: `<database-name>-<cert-alias>-cert`.
+ - `subject` (optional) specifies an `X.509` distinguished name. It has the following possible field,
+ - `organizations` (optional) are the list of different organization names to be used on the Certificate.
+ - `organizationalUnits` (optional) are the list of different organization unit name to be used on the Certificate.
+ - `countries` (optional) are the list of country names to be used on the Certificate.
+ - `localities` (optional) are the list of locality names to be used on the Certificate.
+ - `provinces` (optional) are the list of province names to be used on the Certificate.
+ - `streetAddresses` (optional) are the list of a street address to be used on the Certificate.
+ - `postalCodes` (optional) are the list of postal code to be used on the Certificate.
+ - `serialNumber` (optional) is a serial number to be used on the Certificate.
+  You can find more details [here](https://golang.org/pkg/crypto/x509/pkix/#Name)
+
+ - `duration` (optional) is the period during which the certificate is valid.
+ - `renewBefore` (optional) is a specifiable time before expiration duration.
+ - `dnsNames` (optional) is a list of subject alt names to be used in the Certificate.
+ - `ipAddresses` (optional) is a list of IP addresses to be used in the Certificate.
+ - `uriSANs` (optional) is a list of URI Subject Alternative Names to be set in the Certificate.
+ - `emailSANs` (optional) is a list of email Subject Alternative Names to be set in the Certificate.
+
+### spec.serviceTemplates
+
+You can also provide a template for the services created by the KubeDB operator for the SingleStore database through `spec.serviceTemplates`. This will allow you to set the type and other properties of the services.
+
+KubeDB allows the following fields to be set in `spec.serviceTemplates`:
+- `alias` represents the identifier of the service. It has the following possible value:
+ - `stats` is used for the exporter service identification.
+- metadata:
+ - labels
+ - annotations
+- spec:
+ - type
+ - ports
+ - clusterIP
+ - externalIPs
+ - loadBalancerIP
+ - loadBalancerSourceRanges
+ - externalTrafficPolicy
+ - healthCheckNodePort
+ - sessionAffinityConfig
+
+See [here](https://github.com/kmodules/offshoot-api/blob/kubernetes-1.21.1/api/v1/types.go#L237) to understand these fields in detail.
+
+### spec.deletionPolicy
+
+`deletionPolicy` gives flexibility whether to `nullify` (reject) the delete operation of the `singlestore` crd or which resources KubeDB should keep or delete when you delete the `singlestore` crd. KubeDB provides the following four deletion policies:
+
+- DoNotTerminate
+- WipeOut
+- Halt
+- Delete
+
+When `deletionPolicy` is `DoNotTerminate`, KubeDB takes advantage of `ValidationWebhook` feature in Kubernetes 1.9.0 or later clusters to implement `DoNotTerminate` feature. If admission webhook is enabled, `DoNotTerminate` prevents users from deleting the database as long as the `spec.deletionPolicy` is set to `DoNotTerminate`.
+
+The following table shows what KubeDB does when you delete the SingleStore crd for different deletion policies,
+
+| Behavior | DoNotTerminate | Halt | Delete | WipeOut |
+|---------------------------| :------------: | :------: | :------: | :------: |
+| 1. Block Delete operation | ✓ | ✗ | ✗ | ✗ |
+| 2. Delete PetSet | ✗ | ✓ | ✓ | ✓ |
+| 3. Delete Services | ✗ | ✓ | ✓ | ✓ |
+| 4. Delete PVCs | ✗ | ✗ | ✓ | ✓ |
+| 5. Delete Secrets | ✗ | ✗ | ✗ | ✓ |
+| 6. Delete Snapshots | ✗ | ✗ | ✗ | ✓ |
+
+If you don't specify `spec.deletionPolicy`, KubeDB uses the `Delete` deletion policy by default.
+
+### spec.healthChecker
+It defines the attributes for the health checker.
+- `spec.healthChecker.periodSeconds` specifies how often to perform the health check.
+- `spec.healthChecker.timeoutSeconds` specifies the number of seconds after which the probe times out.
+- `spec.healthChecker.failureThreshold` specifies minimum consecutive failures for the healthChecker to be considered failed.
+- `spec.healthChecker.disableWriteCheck` specifies whether to disable the writeCheck or not.
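+
+A minimal sketch is given below; the values are illustrative, not recommendations:
+
+```yaml
+spec:
+  healthChecker:
+    periodSeconds: 15
+    timeoutSeconds: 10
+    failureThreshold: 2
+    disableWriteCheck: false
+```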
+
+## Next Steps
+
+- Learn how to use KubeDB to run a SingleStore database [here](/docs/guides/singlestore/README.md).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).
\ No newline at end of file
diff --git a/docs/guides/singlestore/configuration/_index.md b/docs/guides/singlestore/configuration/_index.md
new file mode 100755
index 0000000000..6580eb5067
--- /dev/null
+++ b/docs/guides/singlestore/configuration/_index.md
@@ -0,0 +1,10 @@
+---
+title: Run SingleStore with Custom Configuration
+menu:
+ docs_{{ .version }}:
+ identifier: guides-sdb-configuration
+ name: Custom Configuration
+ parent: guides-singlestore
+ weight: 30
+menu_name: docs_{{ .version }}
+---
diff --git a/docs/guides/singlestore/configuration/config-file/index.md b/docs/guides/singlestore/configuration/config-file/index.md
new file mode 100644
index 0000000000..6106d9ef4c
--- /dev/null
+++ b/docs/guides/singlestore/configuration/config-file/index.md
@@ -0,0 +1,247 @@
+---
+title: Run SingleStore with Custom Configuration
+menu:
+ docs_{{ .version }}:
+ identifier: guides-sdb-configuration-using-config-file
+ name: Config File
+ parent: guides-sdb-configuration
+ weight: 10
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Using Custom Configuration File
+
+KubeDB supports providing custom configuration for SingleStore. This tutorial will show you how to use KubeDB to run a SingleStore database with custom configuration.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/setup/README.md).
+
+- To keep things isolated, this tutorial uses a separate namespace called `demo` throughout this tutorial.
+
+ ```bash
+ $ kubectl create ns demo
+ namespace/demo created
+
+ $ kubectl get ns demo
+ NAME STATUS AGE
+ demo Active 5s
+ ```
+
+> Note: YAML files used in this tutorial are stored in [docs/guides/singlestore/configuration/config-file/yamls](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/singlestore/configuration/config-file/yamls) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Overview
+
+SingleStore allows configuring the database via a configuration file. The default configuration for SingleStore can be found in the `/var/lib/memsql/instance/memsql.cnf` file. When SingleStore starts, it will look for custom configuration files in the `/etc/memsql/conf.d` directory. If a configuration file exists, the SingleStore instance will use the combined startup settings from both `/var/lib/memsql/instance/memsql.cnf` and the `*.cnf` files in the `/etc/memsql/conf.d` directory. This custom configuration will overwrite the existing default one. To know more about configuring SingleStore, see [here](https://docs.singlestore.com/db/v8.7/reference/configuration-reference/cluster-config-files/singlestore-server-config-files/).
+
+At first, you have to create a config file with the `.cnf` extension containing your desired configuration. Then you have to put this file into a Kubernetes secret. You need to specify this secret in the `spec.configSecret` section when creating the SingleStore CRD for `Standalone` mode. Additionally, you can configure your `aggregator` and `leaf` nodes separately by providing separate configSecrets, or use the same one, in the `spec.topology.aggregator.configSecret` and `spec.topology.leaf.configSecret` sections when creating the SingleStore CRD for `Cluster` mode. KubeDB will mount this secret into the `/etc/memsql/conf.d` directory of the database pod.
+
+In this tutorial, we will configure [max_connections](https://docs.singlestore.com/db/v8.7/reference/configuration-reference/engine-variables/list-of-engine-variables/#in-depth-variable-definitions) and [read_buffer_size](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_read_buffer_size) via a custom config file. We will use a secret to hold the custom configuration.
+
+## Create SingleStore License Secret
+
+We need a SingleStore license to create a SingleStore database. So, ensure that you have acquired a license, and then simply pass it to KubeDB as a secret.
+
+```bash
+$ kubectl create secret generic -n demo license-secret \
+ --from-literal=username=license \
+ --from-literal=password='your-license-set-here'
+secret/license-secret created
+```
+
+## Custom Configuration
+
+At first, let's create `sdb-config.cnf` file setting `max_connections` and `read_buffer_size` parameters.
+
+```bash
+cat <<EOF > sdb-config.cnf
+[server]
+max_connections = 250
+read_buffer_size = 122880
+EOF
+
+$ cat sdb-config.cnf
+[server]
+max_connections = 250
+read_buffer_size = 122880
+```
+
+Now, create a secret with this configuration file.
+
+```bash
+$ kubectl create secret generic -n demo sdb-configuration --from-file=./sdb-config.cnf
+secret/sdb-configuration created
+```
+
+Verify the secret has the configuration file.
+
+```yaml
+$ kubectl get secret -n demo sdb-configuration -o yaml
+apiVersion: v1
+data:
+ sdb-config.cnf: W3NlcnZlcl0KbWF4X2Nvbm5lY3Rpb25zID0gMjUwCnJlYWRfYnVmZmVyX3NpemUgPSAxMjI4ODAK
+kind: Secret
+metadata:
+ creationTimestamp: "2024-10-02T12:54:35Z"
+ name: sdb-configuration
+ namespace: demo
+ resourceVersion: "99627"
+ uid: c2621d8e-ebca-4300-af05-0180512ce031
+type: Opaque
+
+
+```
+
+Now, create SingleStore crd specifying `spec.topology.aggregator.configSecret` and `spec.topology.leaf.configSecret` field.
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/singlestore/configuration/config-file/yamls/sdb-custom.yaml
+singlestore.kubedb.com/custom-sdb created
+```
+
+Below is the YAML for the SingleStore crd we just created.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Singlestore
+metadata:
+ name: custom-sdb
+ namespace: demo
+spec:
+ version: "8.7.10"
+ topology:
+ aggregator:
+ replicas: 2
+ configSecret:
+ name: sdb-configuration
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "600m"
+ requests:
+ memory: "2Gi"
+ cpu: "600m"
+ storage:
+ storageClassName: "standard"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
+ leaf:
+ replicas: 2
+ configSecret:
+ name: sdb-configuration
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "600m"
+ requests:
+ memory: "2Gi"
+ cpu: "600m"
+ storage:
+ storageClassName: "standard"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 10Gi
+ licenseSecret:
+ name: license-secret
+ storageType: Durable
+ deletionPolicy: WipeOut
+```
+
+Now, wait a few minutes. The KubeDB operator will create the necessary PVCs, PetSets, services, secrets etc.
+
+Check that the PetSet's pods are running
+
+```bash
+$ kubectl get pod -n demo
+NAME READY STATUS RESTARTS AGE
+custom-sdb-aggregator-0 2/2 Running 0 94s
+custom-sdb-aggregator-1 2/2 Running 0 88s
+custom-sdb-leaf-0 2/2 Running 0 91s
+custom-sdb-leaf-1 2/2 Running 0 86s
+
+$ kubectl get sdb -n demo
+NAME TYPE VERSION STATUS AGE
+custom-sdb kubedb.com/v1alpha2 8.7.10 Ready 4m29s
+
+```
+
+We can see the database is in `Ready` phase, so it can accept connections.
+
+Now, we will check if the database has started with the custom configuration we have provided.
+
+> Read the comment written for the following commands. They contain the instructions and explanations of the commands.
+
+```bash
+# Connecting to the database
+$ kubectl exec -it -n demo custom-sdb-aggregator-0 -- bash
+Defaulted container "singlestore" out of: singlestore, singlestore-coordinator, singlestore-init (init)
+[memsql@custom-sdb-aggregator-0 /]$ memsql -uroot -p$ROOT_PASSWORD
+singlestore-client: [Warning] Using a password on the command line interface can be insecure.
+Welcome to the MySQL monitor. Commands end with ; or \g.
+Your MySQL connection id is 208
+Server version: 5.7.32 SingleStoreDB source distribution (compatible; MySQL Enterprise & MySQL Commercial)
+
+Copyright (c) 2000, 2022, Oracle and/or its affiliates.
+
+Oracle is a registered trademark of Oracle Corporation and/or its
+affiliates. Other names may be trademarks of their respective
+owners.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+# value of `max_connections` is same as provided
+singlestore> show variables like 'max_connections';
++-----------------+-------+
+| Variable_name | Value |
++-----------------+-------+
+| max_connections | 250 |
++-----------------+-------+
+1 row in set (0.00 sec)
+
+# value of `read_buffer_size` is same as provided
+singlestore> show variables like 'read_buffer_size';
++------------------+--------+
+| Variable_name | Value |
++------------------+--------+
+| read_buffer_size | 122880 |
++------------------+--------+
+1 row in set (0.00 sec)
+
+singlestore> exit
+Bye
+
+```
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl patch -n demo sdb/custom-sdb -p '{"spec":{"deletionPolicy":"WipeOut"}}' --type="merge"
+kubectl delete -n demo sdb/custom-sdb
+kubectl delete ns demo
+```
+
+If you would like to uninstall KubeDB operator, please follow the steps [here](/docs/setup/README.md).
+
+## Next Steps
+- [Quickstart SingleStore](/docs/guides/singlestore/quickstart/quickstart.md) with KubeDB Operator.
+- Detail concepts of [singlestore object](/docs/guides/singlestore/concepts/singlestore.md).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).
diff --git a/docs/guides/singlestore/configuration/config-file/yamls/sdb-config.cnf b/docs/guides/singlestore/configuration/config-file/yamls/sdb-config.cnf
new file mode 100644
index 0000000000..f2adc327aa
--- /dev/null
+++ b/docs/guides/singlestore/configuration/config-file/yamls/sdb-config.cnf
@@ -0,0 +1,3 @@
+[server]
+max_connections = 250
+read_buffer_size = 122880
diff --git a/docs/guides/singlestore/configuration/config-file/yamls/sdb-custom.yaml b/docs/guides/singlestore/configuration/config-file/yamls/sdb-custom.yaml
new file mode 100644
index 0000000000..dee95b8ba4
--- /dev/null
+++ b/docs/guides/singlestore/configuration/config-file/yamls/sdb-custom.yaml
@@ -0,0 +1,57 @@
+apiVersion: kubedb.com/v1alpha2
+kind: Singlestore
+metadata:
+ name: custom-sdb
+ namespace: demo
+spec:
+ version: "8.7.10"
+ topology:
+ aggregator:
+ replicas: 2
+ configSecret:
+ name: sdb-configuration
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "600m"
+ requests:
+ memory: "2Gi"
+ cpu: "600m"
+ storage:
+ storageClassName: "standard"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
+ leaf:
+ replicas: 2
+ configSecret:
+ name: sdb-configuration
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "600m"
+ requests:
+ memory: "2Gi"
+ cpu: "600m"
+ storage:
+ storageClassName: "standard"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 10Gi
+ licenseSecret:
+ name: license-secret
+ storageType: Durable
+ deletionPolicy: WipeOut
+
diff --git a/docs/guides/singlestore/configuration/podtemplating/index.md b/docs/guides/singlestore/configuration/podtemplating/index.md
new file mode 100644
index 0000000000..66fd5c8111
--- /dev/null
+++ b/docs/guides/singlestore/configuration/podtemplating/index.md
@@ -0,0 +1,712 @@
+---
+title: Run SingleStore with Custom PodTemplate
+menu:
+ docs_{{ .version }}:
+ identifier: guides-sdb-configuration-using-podtemplate
+ name: Customize PodTemplate
+ parent: guides-sdb-configuration
+ weight: 15
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Run SingleStore with Custom PodTemplate
+
+KubeDB supports providing custom configuration for SingleStore via [PodTemplate](/docs/guides/singlestore/concepts/singlestore.md#spec.topology). This tutorial will show you how to use KubeDB to run a SingleStore database with custom configuration using PodTemplate.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/setup/README.md).
+
+- To keep things isolated, this tutorial uses a separate namespace called `demo` throughout this tutorial.
+
+ ```bash
+ $ kubectl create ns demo
+ namespace/demo created
+ ```
+
+> Note: YAML files used in this tutorial are stored in [docs/guides/singlestore/configuration/podtemplating/yamls](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/singlestore/configuration/podtemplating/yamls) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Overview
+
+KubeDB allows providing a template for the `leaf` and `aggregator` pods through `spec.topology.aggregator.podTemplate` and `spec.topology.leaf.podTemplate`. KubeDB operator will pass the information provided in `spec.topology.aggregator.podTemplate` and `spec.topology.leaf.podTemplate` to the `aggregator` and `leaf` PetSets created for the SingleStore database.
+
+KubeDB accepts the following fields to set in `spec.podTemplate`:
+
+- metadata:
+ - annotations (pod's annotation)
+ - labels (pod's labels)
+- controller:
+ - annotations (statefulset's annotation)
+ - labels (statefulset's labels)
+- spec:
+ - volumes
+ - initContainers
+ - containers
+ - imagePullSecrets
+ - nodeSelector
+ - affinity
+ - serviceAccountName
+ - schedulerName
+ - tolerations
+ - priorityClassName
+ - priority
+ - securityContext
+ - livenessProbe
+ - readinessProbe
+ - lifecycle
+
+Read about the fields in detail in the [PodTemplate concept](/docs/guides/singlestore/concepts/singlestore.md#spectopology).
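+
+For instance, a minimal `podTemplate` stanza that combines a few of these fields might look like the sketch below. All values here are illustrative only; replace them with whatever fits your environment.
+
+```yaml
+podTemplate:
+  metadata:
+    labels:
+      team: data-platform            # illustrative pod label
+    annotations:
+      example.com/owner: dba-team    # illustrative pod annotation
+  spec:
+    nodeSelector:
+      disktype: ssd                  # schedule pods only on nodes carrying this label
+    tolerations:
+      - key: "key1"
+        operator: "Equal"
+        value: "node1"
+        effect: "NoSchedule"
+```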
+
+## Create SingleStore License Secret
+
+We need a SingleStore license to create a SingleStore database. So, ensure that you have acquired a license, then simply pass the license to KubeDB as a secret.
+
+```bash
+$ kubectl create secret generic -n demo license-secret \
+ --from-literal=username=license \
+ --from-literal=password='your-license-set-here'
+secret/license-secret created
+```
+
+## CRD Configuration
+
+Below is the YAML for the SingleStore created in this example. Here, [`spec.topology.aggregator/leaf.podTemplate.spec.args`](/docs/guides/singlestore/concepts/singlestore.md#spectopology) provides extra arguments to the main container.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Singlestore
+metadata:
+ name: sdb-misc-config
+ namespace: demo
+spec:
+ version: "8.7.10"
+ topology:
+ aggregator:
+ replicas: 1
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "600m"
+ requests:
+ memory: "2Gi"
+ cpu: "600m"
+ args:
+ - --character-set-server=utf8mb4
+ storage:
+ storageClassName: "standard"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
+ leaf:
+ replicas: 2
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "600m"
+ requests:
+ memory: "2Gi"
+ cpu: "600m"
+ args:
+ - --character-set-server=utf8mb4
+ storage:
+ storageClassName: "standard"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 10Gi
+ licenseSecret:
+ name: license-secret
+ storageType: Durable
+ deletionPolicy: WipeOut
+```
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/singlestore/configuration/podtemplating/yamls/sdb-misc-config.yaml
+singlestore.kubedb.com/sdb-misc-config created
+```
+
+Now, wait a few minutes. KubeDB operator will create the necessary PVCs, petsets, services, secrets, etc. If everything goes well, we will see that a pod with the name `sdb-misc-config-aggregator-0` has been created.
+
+Check that the petset's pods are running:
+
+```bash
+$ kubectl get pod -n demo
+NAME READY STATUS RESTARTS AGE
+sdb-misc-config-aggregator-0 2/2 Running 0 4m51s
+sdb-misc-config-leaf-0 2/2 Running 0 4m48s
+sdb-misc-config-leaf-1 2/2 Running 0 4m30s
+```
+
+Now, we will check if the database has started with the custom configuration we have provided.
+
+```bash
+$ kubectl exec -it -n demo sdb-misc-config-aggregator-0 -- bash
+Defaulted container "singlestore" out of: singlestore, singlestore-coordinator, singlestore-init (init)
+[memsql@sdb-misc-config-aggregator-0 /]$ memsql -uroot -p$ROOT_PASSWORD
+singlestore-client: [Warning] Using a password on the command line interface can be insecure.
+Welcome to the MySQL monitor. Commands end with ; or \g.
+Your MySQL connection id is 311
+Server version: 5.7.32 SingleStoreDB source distribution (compatible; MySQL Enterprise & MySQL Commercial)
+
+Copyright (c) 2000, 2022, Oracle and/or its affiliates.
+
+Oracle is a registered trademark of Oracle Corporation and/or its
+affiliates. Other names may be trademarks of their respective
+owners.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+singlestore> SHOW VARIABLES LIKE 'char%';
++--------------------------+------------------------------------------------------+
+| Variable_name | Value |
++--------------------------+------------------------------------------------------+
+| character_set_client | utf8mb4 |
+| character_set_connection | utf8mb4 |
+| character_set_database | utf8mb4 |
+| character_set_filesystem | binary |
+| character_set_results | utf8mb4 |
+| character_set_server | utf8mb4 |
+| character_set_system | utf8 |
+| character_sets_dir | /opt/memsql-server-8.7.10-95e2357384/share/charsets/ |
++--------------------------+------------------------------------------------------+
+8 rows in set (0.00 sec)
+
+singlestore> exit
+Bye
+
+```
+
+Here we can see that the `character_set_server` value is `utf8mb4`.
+
+## Custom Sidecar Containers
+
+In this example, we will add an extra sidecar container to our SingleStore cluster. The configuration below runs a SingleStore instance alongside a simple Nginx sidecar container, which can be used for HTTP requests, logging, or as a reverse proxy. Adjust the configuration as needed to fit your application's architecture.
+
+First, we are going to create a sample ConfigMap for the Nginx configuration. Here is the YAML of the ConfigMap:
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: nginx-config-map
+ namespace: demo
+data:
+ default.conf: |
+ server {
+ listen 80;
+ location / {
+ proxy_pass http://localhost:9000;
+ }
+ }
+```
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/singlestore/configuration/podtemplating/yamls/nginx-config-map.yaml
+configmap/nginx-config-map created
+```
+
+Now we will deploy our SingleStore with the custom sidecar container. Here is the YAML of the SingleStore object:
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Singlestore
+metadata:
+ name: sdb-custom-sidecar
+ namespace: demo
+spec:
+ version: "8.7.10"
+ topology:
+ aggregator:
+ replicas: 1
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "600m"
+ requests:
+ memory: "2Gi"
+ cpu: "600m"
+ - name: sidecar
+ image: nginx:alpine
+ ports:
+ - containerPort: 80
+ volumeMounts:
+ - name: nginx-config
+ mountPath: /etc/nginx/conf.d
+ volumes:
+ - name: nginx-config
+ configMap:
+ name: nginx-config-map
+ storage:
+ storageClassName: "longhorn"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
+ leaf:
+ replicas: 2
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "600m"
+ requests:
+ memory: "2Gi"
+ cpu: "600m"
+ storage:
+ storageClassName: "longhorn"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 10Gi
+ licenseSecret:
+ name: license-secret
+ storageType: Durable
+ deletionPolicy: WipeOut
+```
+
+Here,
+
+- Primary Container: The main singlestore container runs the SingleStore database, configured with specific resource limits and requests.
+
+- Sidecar Container: The sidecar container runs Nginx, a lightweight web server. It's configured to listen on port 80 and is intended to proxy requests to the SingleStore database.
+
+- Volume Mounts: The sidecar container mounts a volume for Nginx configuration from a ConfigMap, which allows you to customize Nginx's behavior.
+
+- Volumes: A volume is defined to link the ConfigMap nginx-config-map to the Nginx configuration directory.
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/singlestore/configuration/podtemplating/yamls/sdb-custom-sidecar.yaml
+singlestore.kubedb.com/sdb-custom-sidecar created
+```
+
+Now, wait a few minutes. KubeDB operator will create the necessary petsets, services, secrets, etc. If everything goes well, we will see that the pods have been created.
+
+Check that the petset's pods are running:
+
+```bash
+$ kubectl get pods -n demo
+NAME READY STATUS RESTARTS AGE
+sdb-custom-sidecar-aggregator-0 3/3 Running 0 3m17s
+sdb-custom-sidecar-leaf-0 2/2 Running 0 3m14s
+sdb-custom-sidecar-leaf-1 2/2 Running 0 2m59s
+```
+
+Now check the logs of sidecar container,
+
+```bash
+$ kubectl logs -f -n demo sdb-custom-sidecar-aggregator-0 -c sidecar
+/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
+/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
+/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
+10-listen-on-ipv6-by-default.sh: info: can not modify /etc/nginx/conf.d/default.conf (read-only file system?)
+/docker-entrypoint.sh: Sourcing /docker-entrypoint.d/15-local-resolvers.envsh
+/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
+/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
+/docker-entrypoint.sh: Configuration complete; ready for start up
+2024/10/29 07:43:11 [notice] 1#1: using the "epoll" event method
+2024/10/29 07:43:11 [notice] 1#1: nginx/1.27.2
+2024/10/29 07:43:11 [notice] 1#1: built by gcc 13.2.1 20240309 (Alpine 13.2.1_git20240309)
+2024/10/29 07:43:11 [notice] 1#1: OS: Linux 6.8.0-47-generic
+2024/10/29 07:43:11 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
+2024/10/29 07:43:11 [notice] 1#1: start worker processes
+2024/10/29 07:43:11 [notice] 1#1: start worker process 21
+2024/10/29 07:43:11 [notice] 1#1: start worker process 22
+2024/10/29 07:43:11 [notice] 1#1: start worker process 23
+2024/10/29 07:43:11 [notice] 1#1: start worker process 24
+2024/10/29 07:43:11 [notice] 1#1: start worker process 25
+2024/10/29 07:43:11 [notice] 1#1: start worker process 26
+2024/10/29 07:43:11 [notice] 1#1: start worker process 27
+2024/10/29 07:43:11 [notice] 1#1: start worker process 28
+2024/10/29 07:43:11 [notice] 1#1: start worker process 29
+2024/10/29 07:43:11 [notice] 1#1: start worker process 30
+2024/10/29 07:43:11 [notice] 1#1: start worker process 31
+2024/10/29 07:43:11 [notice] 1#1: start worker process 32
+```
+So, we have successfully deployed a sidecar container in a KubeDB-managed SingleStore cluster.
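+
+If you want to poke at the sidecar directly, one quick way (a sketch, not part of the official guide) is to port-forward the Nginx port and send a request through it. The local port `8080` below is arbitrary, and the response you get back depends on whatever is listening on port `9000` inside the pod, since that is where the sample Nginx config proxies to.
+
+```bash
+# forward a local port to the Nginx sidecar's port 80
+$ kubectl port-forward -n demo pod/sdb-custom-sidecar-aggregator-0 8080:80
+
+# in another terminal, send a request through the sidecar proxy
+$ curl -i http://localhost:8080/
+```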
+
+## Using Node Selector
+
+In this example, we will use a [node selector](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/) to schedule our SingleStore pod to a specific node. Applying `nodeSelector` to the Pod involves several steps. We first need to assign a label to some node, which will later be used by the `nodeSelector`. Let's find what nodes exist in your cluster. To get the names of these nodes, you can run:
+
+```bash
+$ kubectl get nodes --show-labels
+NAME STATUS ROLES AGE VERSION LABELS
+lke212553-307295-339173d10000 Ready 36m v1.30.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=g6-dedicated-4,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=ap-south,kubernetes.io/arch=amd64,kubernetes.io/hostname=lke212553-307295-339173d10000,kubernetes.io/os=linux,lke.linode.com/pool-id=307295,node.k8s.linode.com/host-uuid=618158120a299c6fd37f00d01d355ca18794c467,node.kubernetes.io/instance-type=g6-dedicated-4,topology.kubernetes.io/region=ap-south,topology.linode.com/region=ap-south
+lke212553-307295-5541798e0000 Ready 36m v1.30.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=g6-dedicated-4,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=ap-south,kubernetes.io/arch=amd64,kubernetes.io/hostname=lke212553-307295-5541798e0000,kubernetes.io/os=linux,lke.linode.com/pool-id=307295,node.k8s.linode.com/host-uuid=75cfe3dbbb0380f1727efc53f5192897485e95d5,node.kubernetes.io/instance-type=g6-dedicated-4,topology.kubernetes.io/region=ap-south,topology.linode.com/region=ap-south
+lke212553-307295-5b53c5520000 Ready 36m v1.30.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=g6-dedicated-4,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=ap-south,kubernetes.io/arch=amd64,kubernetes.io/hostname=lke212553-307295-5b53c5520000,kubernetes.io/os=linux,lke.linode.com/pool-id=307295,node.k8s.linode.com/host-uuid=792bac078d7ce0e548163b9423416d7d8c88b08f,node.kubernetes.io/instance-type=g6-dedicated-4,topology.kubernetes.io/region=ap-south,topology.linode.com/region=ap-south
+```
+As you see, we have three nodes in the cluster: lke212553-307295-339173d10000, lke212553-307295-5541798e0000, and lke212553-307295-5b53c5520000.
+
+Next, select a node to which you want to add a label. For example, let's say we want to add a new label with the key `disktype` and value `ssd` to the `lke212553-307295-5541798e0000` node, which is a node with SSD storage. To do so, run:
+```bash
+$ kubectl label nodes lke212553-307295-5541798e0000 disktype=ssd
+node/lke212553-307295-5541798e0000 labeled
+```
+As you noticed, the command above follows the format `kubectl label nodes <node-name> <label-key>=<label-value>`.
+Finally, let’s verify that the new label was added by running:
+```bash
+ $ kubectl get nodes --show-labels
+NAME STATUS ROLES AGE VERSION LABELS
+lke212553-307295-339173d10000 Ready 41m v1.30.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=g6-dedicated-4,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=ap-south,kubernetes.io/arch=amd64,kubernetes.io/hostname=lke212553-307295-339173d10000,kubernetes.io/os=linux,lke.linode.com/pool-id=307295,node.k8s.linode.com/host-uuid=618158120a299c6fd37f00d01d355ca18794c467,node.kubernetes.io/instance-type=g6-dedicated-4,topology.kubernetes.io/region=ap-south,topology.linode.com/region=ap-south
+lke212553-307295-5541798e0000 Ready 41m v1.30.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=g6-dedicated-4,beta.kubernetes.io/os=linux,disktype=ssd,failure-domain.beta.kubernetes.io/region=ap-south,kubernetes.io/arch=amd64,kubernetes.io/hostname=lke212553-307295-5541798e0000,kubernetes.io/os=linux,lke.linode.com/pool-id=307295,node.k8s.linode.com/host-uuid=75cfe3dbbb0380f1727efc53f5192897485e95d5,node.kubernetes.io/instance-type=g6-dedicated-4,topology.kubernetes.io/region=ap-south,topology.linode.com/region=ap-south
+lke212553-307295-5b53c5520000 Ready 41m v1.30.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=g6-dedicated-4,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=ap-south,kubernetes.io/arch=amd64,kubernetes.io/hostname=lke212553-307295-5b53c5520000,kubernetes.io/os=linux,lke.linode.com/pool-id=307295,node.k8s.linode.com/host-uuid=792bac078d7ce0e548163b9423416d7d8c88b08f,node.kubernetes.io/instance-type=g6-dedicated-4,topology.kubernetes.io/region=ap-south,topology.linode.com/region=ap-south
+```
+As you see, the `lke212553-307295-5541798e0000` node now has a new label `disktype=ssd`. To see all labels attached to the node, you can also run:
+```bash
+$ kubectl describe node "lke212553-307295-5541798e0000"
+Name: lke212553-307295-5541798e0000
+Roles:
+Labels: beta.kubernetes.io/arch=amd64
+ beta.kubernetes.io/instance-type=g6-dedicated-4
+ beta.kubernetes.io/os=linux
+ disktype=ssd
+ failure-domain.beta.kubernetes.io/region=ap-south
+ kubernetes.io/arch=amd64
+ kubernetes.io/hostname=lke212553-307295-5541798e0000
+ kubernetes.io/os=linux
+ lke.linode.com/pool-id=307295
+ node.k8s.linode.com/host-uuid=75cfe3dbbb0380f1727efc53f5192897485e95d5
+ node.kubernetes.io/instance-type=g6-dedicated-4
+ topology.kubernetes.io/region=ap-south
+ topology.linode.com/region=ap-south
+```
+Along with the `disktype=ssd` label we’ve just added, you can see other labels such as `beta.kubernetes.io/arch` or `kubernetes.io/hostname`. These are all default labels attached to Kubernetes nodes.
+
+Now let's create a SingleStore object that uses this new label as `nodeSelector`. Below is the YAML we are going to apply:
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Singlestore
+metadata:
+ name: sdb-node-selector
+ namespace: demo
+spec:
+ version: "8.7.10"
+ podTemplate:
+ spec:
+ nodeSelector:
+ disktype: ssd
+ deletionPolicy: WipeOut
+ licenseSecret:
+ name: license-secret
+ storage:
+ storageClassName: "longhorn"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 10Gi
+ storageType: Durable
+```
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/singlestore/configuration/podtemplating/yamls/sdb-node-selector.yaml
+singlestore.kubedb.com/sdb-node-selector created
+```
+Now, wait a few minutes. KubeDB operator will create the necessary petset, services, secrets, etc. If everything goes well, we will see that a pod with the name `sdb-node-selector-0` has been created.
+
+Check that the petset's pod is running
+
+```bash
+$ kubectl get pods -n demo
+NAME READY STATUS RESTARTS AGE
+sdb-node-selector-0 1/1 Running 0 60s
+```
+As we can see, the pod is running. You can verify which node it was assigned to by running `kubectl get pods -n demo sdb-node-selector-0 -o wide` and looking at the "NODE" column.
+```bash
+$ kubectl get pods -n demo sdb-node-selector-0 -o wide
+NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
+sdb-node-selector-0 1/1 Running 0 3m19s 10.2.1.7 lke212553-307295-5541798e0000
+```
+We can successfully verify that our pod was scheduled to our desired node.
+
+## Using Taints and Tolerations
+
+In this example, we will use [Taints and Tolerations](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) to schedule our SingleStore pod to a specific node and also prevent it from being scheduled to other nodes. Applying taints and tolerations to the Pod involves several steps. Let's find what nodes exist in your cluster. To get the names of these nodes, you can run:
+
+```bash
+$ kubectl get nodes --show-labels
+NAME STATUS ROLES AGE VERSION LABELS
+lke212553-307295-339173d10000 Ready 36m v1.30.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=g6-dedicated-4,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=ap-south,kubernetes.io/arch=amd64,kubernetes.io/hostname=lke212553-307295-339173d10000,kubernetes.io/os=linux,lke.linode.com/pool-id=307295,node.k8s.linode.com/host-uuid=618158120a299c6fd37f00d01d355ca18794c467,node.kubernetes.io/instance-type=g6-dedicated-4,topology.kubernetes.io/region=ap-south,topology.linode.com/region=ap-south
+lke212553-307295-5541798e0000 Ready 36m v1.30.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=g6-dedicated-4,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=ap-south,kubernetes.io/arch=amd64,kubernetes.io/hostname=lke212553-307295-5541798e0000,kubernetes.io/os=linux,lke.linode.com/pool-id=307295,node.k8s.linode.com/host-uuid=75cfe3dbbb0380f1727efc53f5192897485e95d5,node.kubernetes.io/instance-type=g6-dedicated-4,topology.kubernetes.io/region=ap-south,topology.linode.com/region=ap-south
+lke212553-307295-5b53c5520000 Ready 36m v1.30.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=g6-dedicated-4,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=ap-south,kubernetes.io/arch=amd64,kubernetes.io/hostname=lke212553-307295-5b53c5520000,kubernetes.io/os=linux,lke.linode.com/pool-id=307295,node.k8s.linode.com/host-uuid=792bac078d7ce0e548163b9423416d7d8c88b08f,node.kubernetes.io/instance-type=g6-dedicated-4,topology.kubernetes.io/region=ap-south,topology.linode.com/region=ap-south
+```
+As you see, we have three nodes in the cluster: lke212553-307295-339173d10000, lke212553-307295-5541798e0000, and lke212553-307295-5b53c5520000.
+
+Next, we are going to taint these nodes.
+```bash
+$ kubectl taint nodes lke212553-307295-339173d10000 key1=node1:NoSchedule
+node/lke212553-307295-339173d10000 tainted
+
+$ kubectl taint nodes lke212553-307295-5541798e0000 key1=node2:NoSchedule
+node/lke212553-307295-5541798e0000 tainted
+
+$ kubectl taint nodes lke212553-307295-5b53c5520000 key1=node3:NoSchedule
+node/lke212553-307295-5b53c5520000 tainted
+```
+Let's see our tainted nodes here,
+```bash
+$ kubectl get nodes -o json | jq -r '.items[] | select(.spec.taints != null) | .metadata.name, .spec.taints'
+lke212553-307295-339173d10000
+[
+ {
+ "effect": "NoSchedule",
+ "key": "key1",
+ "value": "node1"
+ }
+]
+lke212553-307295-5541798e0000
+[
+ {
+ "effect": "NoSchedule",
+ "key": "key1",
+ "value": "node2"
+ }
+]
+lke212553-307295-5b53c5520000
+[
+ {
+ "effect": "NoSchedule",
+ "key": "key1",
+ "value": "node3"
+ }
+]
+```
+We can see that our taints were successfully assigned. Now let's try to create a SingleStore without proper tolerations. Here is the YAML of the SingleStore object we are going to create:
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Singlestore
+metadata:
+ name: sdb-without-tolerations
+ namespace: demo
+spec:
+ deletionPolicy: WipeOut
+ licenseSecret:
+ name: license-secret
+ storage:
+ storageClassName: "longhorn"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 10Gi
+ storageType: Durable
+ version: 8.7.10
+```
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/singlestore/configuration/podtemplating/yamls/sdb-without-tolerations.yaml
+singlestore.kubedb.com/sdb-without-tolerations created
+```
+Now, wait a few minutes. KubeDB operator will create the necessary petset, services, secrets, etc. If everything goes well, we will see that a pod with the name `sdb-without-tolerations-0` has been created.
+
+Check whether the petset's pod is running:
+```bash
+$ kubectl get pods -n demo
+NAME READY STATUS RESTARTS AGE
+sdb-without-tolerations-0 0/1 Pending 0 3m35s
+```
+Here we can see that the pod is not running. So let's describe the pod:
+```bash
+$ kubectl describe pods -n demo sdb-without-tolerations-0
+Name: sdb-without-tolerations-0
+Namespace: demo
+Priority: 0
+Service Account: sdb-without-tolerations
+Node: ashraful/192.168.0.227
+Start Time: Tue, 29 Oct 2024 15:44:22 +0600
+Labels: app.kubernetes.io/component=database
+ app.kubernetes.io/instance=sdb-without-tolerations
+ app.kubernetes.io/managed-by=kubedb.com
+ app.kubernetes.io/name=singlestores.kubedb.com
+ apps.kubernetes.io/pod-index=0
+ controller-revision-hash=sdb-without-tolerations-6449dc959b
+ kubedb.com/petset=standalone
+ statefulset.kubernetes.io/pod-name=sdb-without-tolerations-0
+Annotations:
+Status: Running
+IP: 10.42.0.122
+IPs:
+ IP: 10.42.0.122
+Controlled By: PetSet/sdb-without-tolerations
+Init Containers:
+ singlestore-init:
+ Container ID: containerd://382a8cca4103e609c0a763f65db11e89ca38fe4b982dd6f03c18eb33c083998c
+ Image: ghcr.io/kubedb/singlestore-init:8.7.10-v1@sha256:7f8a60b45c9a402c5a3de56a266e06a70db1feeff1c28a506e485e60afc7f5fa
+ Image ID: ghcr.io/kubedb/singlestore-init@sha256:7f8a60b45c9a402c5a3de56a266e06a70db1feeff1c28a506e485e60afc7f5fa
+ Port:
+ Host Port:
+ SeccompProfile: RuntimeDefault
+ State: Terminated
+ Reason: Completed
+ Exit Code: 0
+ Started: Tue, 29 Oct 2024 15:44:31 +0600
+ Finished: Tue, 29 Oct 2024 15:44:31 +0600
+ Ready: True
+ Restart Count: 0
+ Limits:
+ memory: 512Mi
+ Requests:
+ cpu: 200m
+ memory: 512Mi
+ Environment:
+ Mounts:
+ /scripts from init-scripts (rw)
+ /var/lib/memsql from data (rw)
+ /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-htm2z (ro)
+Containers:
+ singlestore:
+ Container ID: containerd://b52ae6c34300ea23b60ce91fbbc6a01a1fd71bb7a3de6fea97d9a726ca280e55
+ Image: singlestore/cluster-in-a-box:alma-8.7.10-95e2357384-4.1.0-1.17.14@sha256:6b1b66b57e11814815a43114ab28db407428662af4c7d1c666c14a3f53c5289f
+ Image ID: docker.io/singlestore/cluster-in-a-box@sha256:6b1b66b57e11814815a43114ab28db407428662af4c7d1c666c14a3f53c5289f
+ Ports: 3306/TCP, 8081/TCP
+ Host Ports: 0/TCP, 0/TCP
+ SeccompProfile: RuntimeDefault
+ Args:
+ /scripts/standalone-run.sh
+ State: Running
+ Started: Tue, 29 Oct 2024 15:44:32 +0600
+ Ready: True
+ Restart Count: 0
+ Limits:
+ memory: 2Gi
+ Requests:
+ cpu: 500m
+ memory: 2Gi
+ Environment:
+ ROOT_USERNAME: Optional: false
+ ROOT_PASSWORD: Optional: false
+ SINGLESTORE_LICENSE: Optional: false
+ LICENSE_KEY: Optional: false
+ HOST_IP: (v1:status.hostIP)
+ Mounts:
+ /scripts from init-scripts (rw)
+ /var/lib/memsql from data (rw)
+ /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-htm2z (ro)
+Conditions:
+ Type Status
+ PodReadyToStartContainers True
+ Initialized True
+ Ready True
+ ContainersReady True
+ PodScheduled True
+Volumes:
+ data:
+ Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
+ ClaimName: data-sdb-without-tolerations-0
+ ReadOnly: false
+ init-scripts:
+ Type: EmptyDir (a temporary directory that shares a pod's lifetime)
+ Medium:
+ SizeLimit:
+ kube-api-access-htm2z:
+ Type: Projected (a volume that contains injected data from multiple sources)
+ TokenExpirationSeconds: 3607
+ ConfigMapName: kube-root-ca.crt
+ ConfigMapOptional:
+ DownwardAPI: true
+QoS Class: Burstable
+Node-Selectors:
+Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
+ node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
+Topology Spread Constraints: kubernetes.io/hostname:ScheduleAnyway when max skew 1 is exceeded for selector app.kubernetes.io/component=database,app.kubernetes.io/instance=sdb-without-tolerations,app.kubernetes.io/managed-by=kubedb.com,app.kubernetes.io/name=singlestores.kubedb.com,kubedb.com/petset=standalone
+ topology.kubernetes.io/zone:ScheduleAnyway when max skew 1 is exceeded for selector app.kubernetes.io/component=database,app.kubernetes.io/instance=sdb-without-tolerations,app.kubernetes.io/managed-by=kubedb.com,app.kubernetes.io/name=singlestores.kubedb.com,kubedb.com/petset=standalone
+Events:
+ Type Reason Age From Message
+ ---- ------ ---- ---- -------
+ Warning FailedScheduling 5m20s default-scheduler 0/3 nodes are available: 1 node(s) had untolerated taint {key1: node1}, 1 node(s) had untolerated taint {key1: node2}, 1 node(s) had untolerated taint {key1: node3}. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.
+ Warning FailedScheduling 11s default-scheduler 0/3 nodes are available: 1 node(s) had untolerated taint {key1: node1}, 1 node(s) had untolerated taint {key1: node2}, 1 node(s) had untolerated taint {key1: node3}. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.
+ Normal NotTriggerScaleUp 13s (x31 over 5m15s) cluster-autoscaler pod didn't trigger scale-up:
+```
+Here we can see that the pod has no tolerations for the tainted nodes, and because of that, the pod could not be scheduled.
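+
+A quick way to confirm this from the command line is to print the pod's tolerations and compare them against the taints we listed earlier. The `jsonpath` query below is a small sketch; it will only show the default `not-ready`/`unreachable` tolerations seen in the describe output above, none of which match the `key1` taints.
+
+```bash
+$ kubectl get pod -n demo sdb-without-tolerations-0 -o jsonpath='{.spec.tolerations}'
+```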
+
+So, let's add proper tolerations and create another SingleStore. Here is the YAML we are going to apply:
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Singlestore
+metadata:
+ name: sdb-with-tolerations
+ namespace: demo
+spec:
+ podTemplate:
+ spec:
+ tolerations:
+ - key: "key1"
+ operator: "Equal"
+ value: "node1"
+ effect: "NoSchedule"
+ deletionPolicy: WipeOut
+ licenseSecret:
+ name: license-secret
+ storage:
+ storageClassName: "longhorn"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 10Gi
+ storageType: Durable
+ version: 8.7.10
+```
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/singlestore/configuration/podtemplating/yamls/sdb-with-tolerations.yaml
+singlestore.kubedb.com/sdb-with-tolerations created
+```
+Now, wait a few minutes. KubeDB operator will create the necessary petset, services, secrets, etc. If everything goes well, we will see that a pod with the name `sdb-with-tolerations-0` has been created.
+
+Check that the petset's pod is running
+
+```bash
+$ kubectl get pods -n demo
+NAME READY STATUS RESTARTS AGE
+sdb-with-tolerations-0 1/1 Running 0 2m
+```
+As we can see, the pod is running. You can verify which node it was assigned to by running `kubectl get pods -n demo sdb-with-tolerations-0 -o wide` and looking at the "NODE" column.
+```bash
+$ kubectl get pods -n demo sdb-with-tolerations-0 -o wide
+NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
+sdb-with-tolerations-0 1/1 Running 0 3m49s 10.2.0.8 lke212553-307295-339173d10000
+```
+We can successfully verify that our pod was scheduled to the node for which it has a matching toleration.
+
+## Cleaning up
+
+To cleanup the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete singlestore -n demo sdb-misc-config sdb-custom-sidecar sdb-node-selector sdb-with-tolerations sdb-without-tolerations
+
+kubectl delete ns demo
+```
+
+If you would like to uninstall KubeDB operator, please follow the steps [here](/docs/setup/README.md).
+
+## Next Steps
+
+- [Quickstart SingleStore](/docs/guides/singlestore/quickstart/quickstart.md) with KubeDB Operator.
+- Initialize [SingleStore with Script](/docs/guides/singlestore/initialization).
+- Detail concepts of [SingleStore object](/docs/guides/singlestore/concepts/singlestore.md).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).
diff --git a/docs/guides/singlestore/configuration/podtemplating/yamls/nginx-config-map.yaml b/docs/guides/singlestore/configuration/podtemplating/yamls/nginx-config-map.yaml
new file mode 100644
index 0000000000..aa854d0815
--- /dev/null
+++ b/docs/guides/singlestore/configuration/podtemplating/yamls/nginx-config-map.yaml
@@ -0,0 +1,13 @@
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: nginx-config-map
+ namespace: demo
+data:
+ default.conf: |
+ server {
+ listen 80;
+ location / {
+ proxy_pass http://localhost:9000;
+ }
+ }
\ No newline at end of file
diff --git a/docs/guides/singlestore/configuration/podtemplating/yamls/sdb-custom-sidecar.yaml b/docs/guides/singlestore/configuration/podtemplating/yamls/sdb-custom-sidecar.yaml
new file mode 100644
index 0000000000..ea3ee78e0e
--- /dev/null
+++ b/docs/guides/singlestore/configuration/podtemplating/yamls/sdb-custom-sidecar.yaml
@@ -0,0 +1,63 @@
+apiVersion: kubedb.com/v1alpha2
+kind: Singlestore
+metadata:
+ name: sdb-custom-sidecar
+ namespace: demo
+spec:
+ version: "8.7.10"
+ topology:
+ aggregator:
+ replicas: 1
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "600m"
+ requests:
+ memory: "2Gi"
+ cpu: "600m"
+ - name: sidecar
+ image: nginx:alpine
+ ports:
+ - containerPort: 80
+ volumeMounts:
+ - name: nginx-config
+ mountPath: /etc/nginx/conf.d
+ volumes:
+ - name: nginx-config
+ configMap:
+ name: nginx-config-map
+ storage:
+ storageClassName: "longhorn"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
+ leaf:
+ replicas: 2
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "600m"
+ requests:
+ memory: "2Gi"
+ cpu: "600m"
+ storage:
+ storageClassName: "longhorn"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 10Gi
+ licenseSecret:
+ name: license-secret
+ storageType: Durable
+ deletionPolicy: WipeOut
\ No newline at end of file
diff --git a/docs/guides/singlestore/configuration/podtemplating/yamls/sdb-misc-config.yaml b/docs/guides/singlestore/configuration/podtemplating/yamls/sdb-misc-config.yaml
new file mode 100644
index 0000000000..afbe5678ca
--- /dev/null
+++ b/docs/guides/singlestore/configuration/podtemplating/yamls/sdb-misc-config.yaml
@@ -0,0 +1,57 @@
+apiVersion: kubedb.com/v1alpha2
+kind: Singlestore
+metadata:
+ name: sdb-misc-config
+ namespace: demo
+spec:
+ version: "8.7.10"
+ topology:
+ aggregator:
+ replicas: 1
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "600m"
+ requests:
+ memory: "2Gi"
+ cpu: "600m"
+ args:
+ - --character-set-server=utf8mb4
+ storage:
+ storageClassName: "standard"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
+ leaf:
+ replicas: 2
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "600m"
+ requests:
+ memory: "2Gi"
+ cpu: "600m"
+ args:
+ - --character-set-server=utf8mb4
+ storage:
+ storageClassName: "standard"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 10Gi
+ licenseSecret:
+ name: license-secret
+ storageType: Durable
+ deletionPolicy: WipeOut
+
diff --git a/docs/guides/singlestore/configuration/podtemplating/yamls/sdb-node-selector.yaml b/docs/guides/singlestore/configuration/podtemplating/yamls/sdb-node-selector.yaml
new file mode 100644
index 0000000000..27c460006e
--- /dev/null
+++ b/docs/guides/singlestore/configuration/podtemplating/yamls/sdb-node-selector.yaml
@@ -0,0 +1,22 @@
+apiVersion: kubedb.com/v1alpha2
+kind: Singlestore
+metadata:
+ name: sdb-node-selector
+ namespace: demo
+spec:
+ version: "8.7.10"
+ podTemplate:
+ spec:
+ nodeSelector:
+ disktype: ssd
+ deletionPolicy: WipeOut
+ licenseSecret:
+ name: license-secret
+ storage:
+ storageClassName: "longhorn"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 10Gi
+ storageType: Durable
\ No newline at end of file
diff --git a/docs/guides/singlestore/configuration/podtemplating/yamls/sdb-with-tolerations.yaml b/docs/guides/singlestore/configuration/podtemplating/yamls/sdb-with-tolerations.yaml
new file mode 100644
index 0000000000..2e5fac4866
--- /dev/null
+++ b/docs/guides/singlestore/configuration/podtemplating/yamls/sdb-with-tolerations.yaml
@@ -0,0 +1,25 @@
+apiVersion: kubedb.com/v1alpha2
+kind: Singlestore
+metadata:
+ name: sdb-with-tolerations
+ namespace: demo
+spec:
+ podTemplate:
+ spec:
+ tolerations:
+ - key: "key1"
+ operator: "Equal"
+ value: "node1"
+ effect: "NoSchedule"
+ deletionPolicy: WipeOut
+ licenseSecret:
+ name: license-secret
+ storage:
+ storageClassName: "longhorn"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 10Gi
+ storageType: Durable
+ version: 8.7.10
\ No newline at end of file
diff --git a/docs/guides/singlestore/configuration/podtemplating/yamls/sdb-without-tolerations.yaml b/docs/guides/singlestore/configuration/podtemplating/yamls/sdb-without-tolerations.yaml
new file mode 100644
index 0000000000..3e55318404
--- /dev/null
+++ b/docs/guides/singlestore/configuration/podtemplating/yamls/sdb-without-tolerations.yaml
@@ -0,0 +1,18 @@
+apiVersion: kubedb.com/v1alpha2
+kind: Singlestore
+metadata:
+ name: sdb-without-tolerations
+ namespace: demo
+spec:
+ deletionPolicy: WipeOut
+ licenseSecret:
+ name: license-secret
+ storage:
+ storageClassName: "longhorn"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 10Gi
+ storageType: Durable
+ version: 8.7.10
\ No newline at end of file
diff --git a/docs/guides/singlestore/initialization/_index.md b/docs/guides/singlestore/initialization/_index.md
new file mode 100755
index 0000000000..6d2bb64fa7
--- /dev/null
+++ b/docs/guides/singlestore/initialization/_index.md
@@ -0,0 +1,10 @@
+---
+title: SingleStore Initialization
+menu:
+ docs_{{ .version }}:
+ identifier: guides-sdb-initialization
+ name: Initialization
+ parent: guides-singlestore
+ weight: 41
+menu_name: docs_{{ .version }}
+---
diff --git a/docs/guides/singlestore/initialization/using-script/example/demo-1.yaml b/docs/guides/singlestore/initialization/using-script/example/demo-1.yaml
new file mode 100644
index 0000000000..cce1b506ae
--- /dev/null
+++ b/docs/guides/singlestore/initialization/using-script/example/demo-1.yaml
@@ -0,0 +1,56 @@
+apiVersion: kubedb.com/v1alpha2
+kind: Singlestore
+metadata:
+ name: sdb-sample
+ namespace: demo
+spec:
+ version: "8.7.10"
+ init:
+ script:
+ configMap:
+ name: sdb-init-script
+ topology:
+ aggregator:
+ replicas: 2
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "600m"
+ requests:
+ memory: "2Gi"
+ cpu: "600m"
+ storage:
+ storageClassName: "standard"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
+ leaf:
+ replicas: 2
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "600m"
+ requests:
+ memory: "2Gi"
+ cpu: "600m"
+ storage:
+ storageClassName: "standard"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 10Gi
+ licenseSecret:
+ name: license-secret
+ storageType: Durable
+ deletionPolicy: WipeOut
\ No newline at end of file
diff --git a/docs/guides/singlestore/initialization/using-script/index.md b/docs/guides/singlestore/initialization/using-script/index.md
new file mode 100644
index 0000000000..b5bef8142b
--- /dev/null
+++ b/docs/guides/singlestore/initialization/using-script/index.md
@@ -0,0 +1,406 @@
+---
+title: Initialize SingleStore using Script
+menu:
+ docs_{{ .version }}:
+ identifier: guides-sdb-initialization-usingscript
+ name: Using Script
+ parent: guides-sdb-initialization
+ weight: 10
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Initialize SingleStore using Script
+
+This tutorial will show you how to use KubeDB to initialize a SingleStore database with \*.sql, \*.sh and/or \*.sql.gz scripts.
+In this tutorial we will use a .sql script stored in the GitHub repository [kubedb/singlestore-init-scripts](https://github.com/kubedb/singlestore-init-scripts).
+
+> Note: The yaml files used in this tutorial are stored in the [docs/guides/singlestore/initialization/using-script/example](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/singlestore/initialization/using-script/example) folder in the GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- To keep things isolated, this tutorial uses a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+## Prepare Initialization Scripts
+
+SingleStore supports initialization with `.sh`, `.sql` and `.sql.gz` files. In this tutorial, we will use the `init.sql` script from the [singlestore-init-scripts](https://github.com/kubedb/singlestore-init-scripts) git repository to create a table named `kubedb_write_check` in the `kubedb_test` database.
+
+We will use a ConfigMap as the script source. You can use any Kubernetes supported [volume](https://kubernetes.io/docs/concepts/storage/volumes) as the script source.
+
+At first, we will create a ConfigMap from the `init.sql` file. Then, we will provide this ConfigMap as the script source in `spec.init.script` of the SingleStore crd spec.
+
+Let's create a ConfigMap with initialization script,
+
+```bash
+$ kubectl create configmap -n demo sdb-init-script \
+--from-literal=init.sql="$(curl -fsSL https://github.com/kubedb/singlestore-init-scripts/raw/master/init.sql)"
+configmap/sdb-init-script created
+```
+
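+If you want to double-check what went into the ConfigMap before wiring it into the Singlestore object, you can inspect it directly. The commands below are just a sanity check; their output is omitted here. Note the escaped dot in the `jsonpath` expression, which is needed because the key name itself contains a dot.
+
+```bash
+# list the keys and metadata of the ConfigMap
+$ kubectl describe configmap -n demo sdb-init-script
+
+# print the script content stored under the init.sql key
+$ kubectl get configmap -n demo sdb-init-script -o jsonpath='{.data.init\.sql}'
+```
+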
+## Create SingleStore License Secret
+
+We need a SingleStore license to create a SingleStore database. So, ensure that you have acquired a license, then simply pass the license to KubeDB as a secret.
+
+```bash
+$ kubectl create secret generic -n demo license-secret \
+ --from-literal=username=license \
+ --from-literal=password='your-license-set-here'
+secret/license-secret created
+```
+
+## Create a SingleStore database with Init-Script
+
+Below is the `SingleStore` object created in this tutorial.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Singlestore
+metadata:
+ name: sdb-sample
+ namespace: demo
+spec:
+ version: "8.7.10"
+ init:
+ script:
+ configMap:
+ name: sdb-init-script
+ topology:
+ aggregator:
+ replicas: 2
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "600m"
+ requests:
+ memory: "2Gi"
+ cpu: "600m"
+ storage:
+ storageClassName: "standard"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
+ leaf:
+ replicas: 2
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "600m"
+ requests:
+ memory: "2Gi"
+ cpu: "600m"
+ storage:
+ storageClassName: "standard"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 10Gi
+ licenseSecret:
+ name: license-secret
+ storageType: Durable
+ deletionPolicy: WipeOut
+```
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/singlestore/initialization/using-script/example/demo-1.yaml
+singlestore.kubedb.com/sdb-sample created
+```
+
+Here,
+
+- `spec.init.script` specifies a script source used to initialize the database before the database server starts. The scripts will be executed alphabetically. In this tutorial, a sample .sql script from the git repository `https://github.com/kubedb/singlestore-init-scripts.git` is used to create a test database. You can use other [volume sources](https://kubernetes.io/docs/concepts/storage/volumes/#types-of-volumes) instead of `ConfigMap`. The \*.sql, \*.sql.gz and/or \*.sh scripts that are stored inside the root folder will be executed alphabetically. The scripts inside child folders will be skipped.
+
+KubeDB operator watches for `SingleStore` objects using the Kubernetes API. When a `SingleStore` object is created, KubeDB operator will create a new PetSet and a Service with the matching `SingleStore` object name. KubeDB operator will also create a governing service for PetSets with the name `kubedb`, if one is not already present. No SingleStore specific RBAC roles are required for [RBAC enabled clusters](/docs/setup/README.md#using-yaml).
+
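+Before inspecting the full object below, you can quickly list the workload objects the operator created for this database by selecting on the instance label it attaches to every resource. This is just a verification sketch; adjust the resource kinds to whatever you want to check, and note that it assumes the PetSet CRD (installed with KubeDB) is available in the cluster.
+
+```bash
+$ kubectl get petset,svc,secret -n demo -l app.kubernetes.io/instance=sdb-sample
+```
+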
+```yaml
+$ kubectl get sdb -n demo sdb-sample -oyaml
+apiVersion: kubedb.com/v1alpha2
+kind: Singlestore
+metadata:
+ annotations:
+ kubectl.kubernetes.io/last-applied-configuration: |
+ {"apiVersion":"kubedb.com/v1alpha2","kind":"Singlestore","metadata":{"annotations":{},"name":"sdb-sample","namespace":"demo"},"spec":{"deletionPolicy":"WipeOut","init":{"script":{"configMap":{"name":"sdb-init-script"}}},"licenseSecret":{"name":"license-secret"},"storageType":"Durable","topology":{"aggregator":{"podTemplate":{"spec":{"containers":[{"name":"singlestore","resources":{"limits":{"cpu":"600m","memory":"2Gi"},"requests":{"cpu":"600m","memory":"2Gi"}}}]}},"replicas":2,"storage":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"standard"}},"leaf":{"podTemplate":{"spec":{"containers":[{"name":"singlestore","resources":{"limits":{"cpu":"600m","memory":"2Gi"},"requests":{"cpu":"600m","memory":"2Gi"}}}]}},"replicas":2,"storage":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"10Gi"}},"storageClassName":"standard"}}},"version":"8.7.10"}}
+ creationTimestamp: "2024-10-03T07:00:56Z"
+ finalizers:
+ - kubedb.com
+ generation: 3
+ name: sdb-sample
+ namespace: demo
+ resourceVersion: "124012"
+ uid: ccfe9d0e-6f13-4187-b652-4e157a21568e
+spec:
+ authSecret:
+ name: sdb-sample-root-cred
+ deletionPolicy: WipeOut
+ healthChecker:
+ failureThreshold: 1
+ periodSeconds: 10
+ timeoutSeconds: 10
+ init:
+ script:
+ configMap:
+ name: sdb-init-script
+ licenseSecret:
+ name: license-secret
+ storageType: Durable
+ topology:
+ aggregator:
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ cpu: 600m
+ memory: 2Gi
+ requests:
+ cpu: 600m
+ memory: 2Gi
+ securityContext:
+ allowPrivilegeEscalation: false
+ capabilities:
+ drop:
+ - ALL
+ runAsGroup: 998
+ runAsNonRoot: true
+ runAsUser: 999
+ seccompProfile:
+ type: RuntimeDefault
+ - name: singlestore-coordinator
+ resources:
+ limits:
+ memory: 256Mi
+ requests:
+ cpu: 200m
+ memory: 256Mi
+ securityContext:
+ allowPrivilegeEscalation: false
+ capabilities:
+ drop:
+ - ALL
+ runAsGroup: 998
+ runAsNonRoot: true
+ runAsUser: 999
+ seccompProfile:
+ type: RuntimeDefault
+ initContainers:
+ - name: singlestore-init
+ resources:
+ limits:
+ memory: 512Mi
+ requests:
+ cpu: 200m
+ memory: 512Mi
+ securityContext:
+ allowPrivilegeEscalation: false
+ capabilities:
+ drop:
+ - ALL
+ runAsGroup: 998
+ runAsNonRoot: true
+ runAsUser: 999
+ seccompProfile:
+ type: RuntimeDefault
+ podPlacementPolicy:
+ name: default
+ securityContext:
+ fsGroup: 999
+ replicas: 2
+ storage:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
+ storageClassName: standard
+ leaf:
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ cpu: 600m
+ memory: 2Gi
+ requests:
+ cpu: 600m
+ memory: 2Gi
+ securityContext:
+ allowPrivilegeEscalation: false
+ capabilities:
+ drop:
+ - ALL
+ runAsGroup: 998
+ runAsNonRoot: true
+ runAsUser: 999
+ seccompProfile:
+ type: RuntimeDefault
+ - name: singlestore-coordinator
+ resources:
+ limits:
+ memory: 256Mi
+ requests:
+ cpu: 200m
+ memory: 256Mi
+ securityContext:
+ allowPrivilegeEscalation: false
+ capabilities:
+ drop:
+ - ALL
+ runAsGroup: 998
+ runAsNonRoot: true
+ runAsUser: 999
+ seccompProfile:
+ type: RuntimeDefault
+ initContainers:
+ - name: singlestore-init
+ resources:
+ limits:
+ memory: 512Mi
+ requests:
+ cpu: 200m
+ memory: 512Mi
+ securityContext:
+ allowPrivilegeEscalation: false
+ capabilities:
+ drop:
+ - ALL
+ runAsGroup: 998
+ runAsNonRoot: true
+ runAsUser: 999
+ seccompProfile:
+ type: RuntimeDefault
+ podPlacementPolicy:
+ name: default
+ securityContext:
+ fsGroup: 999
+ replicas: 2
+ storage:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 10Gi
+ storageClassName: standard
+ version: 8.7.10
+status:
+ conditions:
+ - lastTransitionTime: "2024-10-03T07:01:02Z"
+ message: 'The KubeDB operator has started the provisioning of Singlestore: demo/sdb-sample'
+ observedGeneration: 3
+ reason: DatabaseProvisioningStartedSuccessfully
+ status: "True"
+ type: ProvisioningStarted
+ - lastTransitionTime: "2024-10-03T07:11:23Z"
+ message: All leaf replicas are ready for Singlestore demo/sdb-sample
+ observedGeneration: 3
+ reason: AllReplicasReady
+ status: "True"
+ type: ReplicaReady
+ - lastTransitionTime: "2024-10-03T07:02:13Z"
+ message: database demo/sdb-sample is accepting connection
+ observedGeneration: 3
+ reason: AcceptingConnection
+ status: "True"
+ type: AcceptingConnection
+ - lastTransitionTime: "2024-10-03T07:02:13Z"
+ message: database demo/sdb-sample is ready
+ observedGeneration: 3
+ reason: AllReplicasReady
+ status: "True"
+ type: Ready
+ - lastTransitionTime: "2024-10-03T07:02:14Z"
+ message: 'The Singlestore: demo/sdb-sample is successfully provisioned.'
+ observedGeneration: 3
+ reason: DatabaseSuccessfullyProvisioned
+ status: "True"
+ type: Provisioned
+ phase: Ready
+```
+
+KubeDB operator sets the `status.phase` to `Ready` once the database is successfully created.
+
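+If you are scripting around this, a couple of convenience commands along these lines can be used to check or wait for that phase (the `--timeout` value is arbitrary, and the jsonpath-based `kubectl wait` requires a reasonably recent kubectl, roughly v1.23 or newer):
+
+```bash
+# print the current phase
+$ kubectl get sdb -n demo sdb-sample -o jsonpath='{.status.phase}{"\n"}'
+
+# block until the phase becomes Ready
+$ kubectl wait sdb/sdb-sample -n demo --for=jsonpath='{.status.phase}'=Ready --timeout=10m
+```
+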
+Now, we will connect to this database and check the data inserted by the initialization script.
+
+```bash
+# Connecting to the database
+$ kubectl exec -it -n demo sdb-sample-aggregator-0 -- bash
+Defaulted container "singlestore" out of: singlestore, singlestore-coordinator, singlestore-init (init)
+[memsql@sdb-sample-aggregator-0 /]$ memsql -uroot -p$ROOT_PASSWORD
+singlestore-client: [Warning] Using a password on the command line interface can be insecure.
+Welcome to the MySQL monitor. Commands end with ; or \g.
+Your MySQL connection id is 144
+Server version: 5.7.32 SingleStoreDB source distribution (compatible; MySQL Enterprise & MySQL Commercial)
+
+Copyright (c) 2000, 2022, Oracle and/or its affiliates.
+
+Oracle is a registered trademark of Oracle Corporation and/or its
+affiliates. Other names may be trademarks of their respective
+owners.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+singlestore> show databases;
++--------------------+
+| Database |
++--------------------+
+| cluster |
+| information_schema |
+| kubedb_test |
+| memsql |
+| singlestore_health |
++--------------------+
+5 rows in set (0.00 sec)
+
+singlestore> use kubedb_test;
+Reading table information for completion of table and column names
+You can turn off this feature to get a quicker startup with -A
+
+Database changed
+
+# Showing the rows inserted into the `kubedb_write_check` table
+singlestore> select * from kubedb_write_check;
++----+-------+
+| id | name |
++----+-------+
+| 3 | name3 |
+| 1 | name1 |
+| 2 | name2 |
++----+-------+
+3 rows in set (0.02 sec)
+
+singlestore> exit
+Bye
+
+
+```
+
+## Cleaning up
+
+To cleanup the Kubernetes resources created by this tutorial, run:
+
+```bash
+$ kubectl delete sdb -n demo sdb-sample
+singlestore.kubedb.com "sdb-sample" deleted
+$ kubectl delete ns demo
+namespace "demo" deleted
+```
diff --git a/docs/guides/singlestore/monitoring/_index.md b/docs/guides/singlestore/monitoring/_index.md
new file mode 100644
index 0000000000..1053b439d3
--- /dev/null
+++ b/docs/guides/singlestore/monitoring/_index.md
@@ -0,0 +1,10 @@
+---
+title: SingleStore Monitoring
+menu:
+ docs_{{ .version }}:
+ identifier: guides-sdb-monitoring
+ name: Monitoring
+ parent: guides-singlestore
+ weight: 50
+menu_name: docs_{{ .version }}
+---
diff --git a/docs/guides/singlestore/monitoring/builtin-prometheus/images/sdb-builtin-prom-target.png b/docs/guides/singlestore/monitoring/builtin-prometheus/images/sdb-builtin-prom-target.png
new file mode 100644
index 0000000000..d574dd37aa
Binary files /dev/null and b/docs/guides/singlestore/monitoring/builtin-prometheus/images/sdb-builtin-prom-target.png differ
diff --git a/docs/guides/singlestore/monitoring/builtin-prometheus/index.md b/docs/guides/singlestore/monitoring/builtin-prometheus/index.md
new file mode 100644
index 0000000000..da5a7440fa
--- /dev/null
+++ b/docs/guides/singlestore/monitoring/builtin-prometheus/index.md
@@ -0,0 +1,402 @@
+---
+title: Monitor SingleStore using Builtin Prometheus Discovery
+menu:
+ docs_{{ .version }}:
+ identifier: guides-sdb-monitoring-builtin-prometheus
+ name: Builtin Prometheus
+ parent: guides-sdb-monitoring
+ weight: 20
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Monitoring SingleStore with builtin Prometheus
+
+This tutorial will show you how to monitor a SingleStore database using the builtin [Prometheus](https://github.com/prometheus/prometheus) scraper.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Install KubeDB operator in your cluster following the steps [here](/docs/setup/README.md).
+
+- If you are not familiar with how to configure Prometheus to scrape metrics from various Kubernetes resources, please read the tutorial from [here](https://github.com/appscode/third-party-tools/tree/master/monitoring/prometheus/builtin).
+
+- To learn how Prometheus monitoring works with KubeDB in general, please visit [here](/docs/guides/singlestore/monitoring/overview/index.md).
+
+- To keep Prometheus resources isolated, we are going to use a separate namespace called `monitoring` to deploy the respective monitoring resources. We are going to deploy the database in the `demo` namespace.
+
+ ```bash
+ $ kubectl create ns monitoring
+ namespace/monitoring created
+
+ $ kubectl create ns demo
+ namespace/demo created
+ ```
+
+> Note: YAML files used in this tutorial are stored in [docs/guides/singlestore/monitoring/builtin-prometheus/yamls](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/singlestore/monitoring/builtin-prometheus/yamls) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Deploy SingleStore with Monitoring Enabled
+
+At first, let's deploy a SingleStore database with monitoring enabled. Below is the SingleStore object that we are going to create.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Singlestore
+metadata:
+ name: builtin-prom-sdb
+ namespace: demo
+spec:
+ version: "8.7.10"
+ topology:
+ aggregator:
+ replicas: 2
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "600m"
+ requests:
+ memory: "2Gi"
+ cpu: "600m"
+ storage:
+ storageClassName: "standard"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
+ leaf:
+ replicas: 2
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "600m"
+ requests:
+ memory: "2Gi"
+ cpu: "600m"
+ storage:
+ storageClassName: "standard"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 10Gi
+ licenseSecret:
+ name: license-secret
+ storageType: Durable
+ deletionPolicy: WipeOut
+ monitor:
+ agent: prometheus.io/builtin
+```
+
+Here,
+
+- `spec.monitor.agent: prometheus.io/builtin` specifies that we are going to monitor this server using the builtin Prometheus scraper.
+
+Let's create the SingleStore crd we have shown above.
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/singlestore/monitoring/builtin-prometheus/yamls/builtin-prom-singlestore.yaml
+singlestore.kubedb.com/builtin-prom-sdb created
+```
+
+Now, wait for the database to go into the `Ready` state.
+
+```bash
+$ watch -n 3 kubectl get singlestore -n demo builtin-prom-sdb
+
+NAME TYPE VERSION STATUS AGE
+builtin-prom-sdb kubedb.com/v1alpha2 8.7.10 Ready 9m5s
+
+```
+
+KubeDB will create a separate stats service with the name `{SingleStore crd name}-stats` for monitoring purposes.
+
+```bash
+$ kubectl get svc -n demo --selector="app.kubernetes.io/instance=builtin-prom-sdb"
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+builtin-prom-sdb ClusterIP 10.128.102.243 3306/TCP,8081/TCP 14m
+builtin-prom-sdb-pods ClusterIP None 3306/TCP 14m
+builtin-prom-sdb-stats ClusterIP 10.128.218.225 9104/TCP 14m
+
+```
+
+Here, the `builtin-prom-sdb-stats` service has been created for monitoring purposes. Let's describe the service:
+
+```bash
+$ kubectl describe svc -n demo builtin-prom-sdb-stats
+Name: builtin-prom-sdb-stats
+Namespace: demo
+Labels: app.kubernetes.io/component=database
+ app.kubernetes.io/instance=builtin-prom-sdb
+ app.kubernetes.io/managed-by=kubedb.com
+ app.kubernetes.io/name=singlestores.kubedb.com
+ kubedb.com/role=stats
+Annotations: monitoring.appscode.com/agent: prometheus.io/builtin
+ prometheus.io/path: /metrics
+ prometheus.io/port: 9104
+ prometheus.io/scrape: true
+Selector: app.kubernetes.io/instance=builtin-prom-sdb,app.kubernetes.io/managed-by=kubedb.com,app.kubernetes.io/name=singlestores.kubedb.com
+Type: ClusterIP
+IP Family Policy: SingleStack
+IP Families: IPv4
+IP: 10.128.218.225
+IPs: 10.128.218.225
+Port: metrics 9104/TCP
+TargetPort: metrics/TCP
+Endpoints: 10.2.1.142:9104,10.2.1.143:9104
+Session Affinity: None
+Events:                   <none>
+```
+
+You can see that the service contains the following annotations.
+
+```bash
+prometheus.io/path: /metrics
+prometheus.io/port: 9104
+prometheus.io/scrape: true
+```
+
+The Prometheus server will discover the service endpoint using these specifications and will scrape metrics from the exporter.
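+
+If you want to spot-check the exporter before wiring up Prometheus, you can port-forward the stats service and fetch the metrics endpoint directly. This is only an optional sanity check (it assumes `curl` is available on your workstation); the port `9104` and the path `/metrics` come from the annotations shown above.
+
+```bash
+# forward the stats service port to localhost (run in a separate terminal)
+$ kubectl port-forward -n demo svc/builtin-prom-sdb-stats 9104:9104
+
+# fetch a few raw metrics from the exporter endpoint
+$ curl -s http://localhost:9104/metrics | head
+```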
+
+## Configure Prometheus Server
+
+Now, we have to configure a Prometheus scraping job to scrape the metrics using this service. We are going to configure scraping job similar to this [kubernetes-service-endpoints](https://github.com/appscode/third-party-tools/tree/master/monitoring/prometheus/builtin#kubernetes-service-endpoints) job that scrapes metrics from endpoints of a service.
+
+Let's configure a Prometheus scraping job to collect metrics from this service.
+
+```yaml
+- job_name: 'kubedb-databases'
+ honor_labels: true
+ scheme: http
+ kubernetes_sd_configs:
+ - role: endpoints
+ # by default Prometheus server select all Kubernetes services as possible target.
+ # relabel_config is used to filter only desired endpoints
+ relabel_configs:
+ # keep only those services that has "prometheus.io/scrape","prometheus.io/path" and "prometheus.io/port" annotations
+ - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape, __meta_kubernetes_service_annotation_prometheus_io_port]
+ separator: ;
+ regex: true;(.*)
+ action: keep
+ # currently KubeDB supported databases uses only "http" scheme to export metrics. so, drop any service that uses "https" scheme.
+ - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
+ action: drop
+ regex: https
+ # only keep the stats services created by KubeDB for monitoring purpose which has "-stats" suffix
+ - source_labels: [__meta_kubernetes_service_name]
+ separator: ;
+ regex: (.*-stats)
+ action: keep
+ # service created by KubeDB will have "app.kubernetes.io/name" and "app.kubernetes.io/instance" annotations. keep only those services that have these annotations.
+ - source_labels: [__meta_kubernetes_service_label_app_kubernetes_io_name]
+ separator: ;
+ regex: (.*)
+ action: keep
+ # read the metric path from "prometheus.io/path: " annotation
+ - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
+ action: replace
+ target_label: __metrics_path__
+ regex: (.+)
+ # read the port from "prometheus.io/port: " annotation and update scraping address accordingly
+ - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
+ action: replace
+ target_label: __address__
+ regex: ([^:]+)(?::\d+)?;(\d+)
+ replacement: $1:$2
+ # add service namespace as label to the scraped metrics
+ - source_labels: [__meta_kubernetes_namespace]
+ separator: ;
+ regex: (.*)
+ target_label: namespace
+ replacement: $1
+ action: replace
+ # add service name as a label to the scraped metrics
+ - source_labels: [__meta_kubernetes_service_name]
+ separator: ;
+ regex: (.*)
+ target_label: service
+ replacement: $1
+ action: replace
+ # add stats service's labels to the scraped metrics
+ - action: labelmap
+ regex: __meta_kubernetes_service_label_(.+)
+```
+
+### Configure Existing Prometheus Server
+
+If you already have a Prometheus server running, you have to add the above scraping job to the `ConfigMap` used to configure the Prometheus server. Then, you have to restart it for the updated configuration to take effect.
+
+>If you don't use a persistent volume for Prometheus storage, you will lose your previously scraped data on restart.
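+
+As a sketch, assuming your existing Prometheus server runs as a Deployment named `prometheus` in the `monitoring` namespace and reads its configuration from a ConfigMap named `prometheus-config` (adjust these names to your setup), the update could look like this:
+
+```bash
+# add the scraping job above under the scrape_configs section
+$ kubectl edit configmap -n monitoring prometheus-config
+
+# restart the Prometheus pods so they reload the updated configuration
+$ kubectl rollout restart deployment -n monitoring prometheus
+```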
+
+### Deploy New Prometheus Server
+
+If you don't have any existing Prometheus server running, you have to deploy one. In this section, we are going to deploy a Prometheus server in `monitoring` namespace to collect metrics using this stats service.
+
+**Create ConfigMap:**
+
+At first, create a ConfigMap with the scraping configuration. Below is the YAML of the ConfigMap that we are going to create in this tutorial.
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: prometheus-config
+ labels:
+ app: prometheus-demo
+ namespace: monitoring
+data:
+ prometheus.yml: |-
+ global:
+ scrape_interval: 5s
+ evaluation_interval: 5s
+ scrape_configs:
+ - job_name: 'kubedb-databases'
+ honor_labels: true
+ scheme: http
+ kubernetes_sd_configs:
+ - role: endpoints
+ # by default Prometheus server select all Kubernetes services as possible target.
+ # relabel_config is used to filter only desired endpoints
+ relabel_configs:
+      # keep only those services that has "prometheus.io/scrape","prometheus.io/path" and "prometheus.io/port" annotations
+ - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape, __meta_kubernetes_service_annotation_prometheus_io_port]
+ separator: ;
+ regex: true;(.*)
+ action: keep
+ # currently KubeDB supported databases uses only "http" scheme to export metrics. so, drop any service that uses "https" scheme.
+ - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
+ action: drop
+ regex: https
+ # only keep the stats services created by KubeDB for monitoring purpose which has "-stats" suffix
+ - source_labels: [__meta_kubernetes_service_name]
+ separator: ;
+ regex: (.*-stats)
+ action: keep
+ # service created by KubeDB will have "app.kubernetes.io/name" and "app.kubernetes.io/instance" annotations. keep only those services that have these annotations.
+ - source_labels: [__meta_kubernetes_service_label_app_kubernetes_io_name]
+ separator: ;
+ regex: (.*)
+ action: keep
+ # read the metric path from "prometheus.io/path: " annotation
+ - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
+ action: replace
+ target_label: __metrics_path__
+ regex: (.+)
+ # read the port from "prometheus.io/port: " annotation and update scraping address accordingly
+ - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
+ action: replace
+ target_label: __address__
+ regex: ([^:]+)(?::\d+)?;(\d+)
+ replacement: $1:$2
+ # add service namespace as label to the scraped metrics
+ - source_labels: [__meta_kubernetes_namespace]
+ separator: ;
+ regex: (.*)
+ target_label: namespace
+ replacement: $1
+ action: replace
+ # add service name as a label to the scraped metrics
+ - source_labels: [__meta_kubernetes_service_name]
+ separator: ;
+ regex: (.*)
+ target_label: service
+ replacement: $1
+ action: replace
+ # add stats service's labels to the scraped metrics
+ - action: labelmap
+ regex: __meta_kubernetes_service_label_(.+)
+```
+
+Let's create above `ConfigMap`,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/singlestore/monitoring/builtin-prometheus/yamls/prom-config.yaml
+configmap/prometheus-config created
+```
+
+**Create RBAC:**
+
+If you are using an RBAC enabled cluster, you have to give the necessary RBAC permissions to Prometheus. Let's create the necessary RBAC resources for Prometheus,
+
+```bash
+$ kubectl apply -f https://github.com/appscode/third-party-tools/raw/master/monitoring/prometheus/builtin/artifacts/rbac.yaml
+clusterrole.rbac.authorization.k8s.io/prometheus created
+serviceaccount/prometheus created
+clusterrolebinding.rbac.authorization.k8s.io/prometheus created
+```
+
+>YAML for the RBAC resources created above can be found [here](https://github.com/appscode/third-party-tools/blob/master/monitoring/prometheus/builtin/artifacts/rbac.yaml).
+
+**Deploy Prometheus:**
+
+Now, we are ready to deploy the Prometheus server. We are going to use the following [deployment](https://github.com/appscode/third-party-tools/blob/master/monitoring/prometheus/builtin/artifacts/deployment.yaml) to deploy it.
+
+Let's deploy the Prometheus server.
+
+```bash
+$ kubectl apply -f https://github.com/appscode/third-party-tools/raw/master/monitoring/prometheus/builtin/artifacts/deployment.yaml
+deployment.apps/prometheus created
+```
+
+### Verify Monitoring Metrics
+
+Prometheus server is listening to port `9090`. We are going to use [port forwarding](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) to access Prometheus dashboard.
+
+At first, let's check if the Prometheus pod is in `Running` state.
+
+```bash
+$ kubectl get pod -n monitoring -l=app=prometheus
+NAME READY STATUS RESTARTS AGE
+prometheus-8568c86d86-95zhn 1/1 Running 0 77s
+```
+
+Now, run the following command on a separate terminal to forward port 9090 of the `prometheus-8568c86d86-95zhn` pod,
+
+```bash
+$ kubectl port-forward -n monitoring prometheus-8568c86d86-95zhn 9090
+Forwarding from 127.0.0.1:9090 -> 9090
+Forwarding from [::1]:9090 -> 9090
+```
+
+Now, we can access the dashboard at `localhost:9090`. Open [http://localhost:9090](http://localhost:9090) in your browser. You should see the endpoint of `builtin-prom-sdb-stats` service as one of the targets.
+
+
+
+
+
+Check the labels marked with the red rectangle. These labels confirm that the metrics are coming from the `SingleStore` database `builtin-prom-sdb` through the stats service `builtin-prom-sdb-stats`.
+
+Now, you can view the collected metrics and create a graph from the homepage of this Prometheus dashboard. You can also use this Prometheus server as a data source for [Grafana](https://grafana.com/) and create beautiful dashboards with the collected metrics.
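+
+Instead of using the browser, you can also query the Prometheus HTTP API to confirm that the stats service has been discovered and is healthy. This is just an alternative check that reuses the port-forward above and assumes `jq` is installed on your workstation.
+
+```bash
+$ curl -s http://localhost:9090/api/v1/targets \
+    | jq -r '.data.activeTargets[] | select(.labels.service=="builtin-prom-sdb-stats") | .health'
+# should print "up" once the target has been scraped successfully
+```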
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run the following commands:
+
+```bash
+kubectl delete -n demo sdb/builtin-prom-sdb
+
+kubectl delete -n monitoring deployment.apps/prometheus
+
+kubectl delete -n monitoring clusterrole.rbac.authorization.k8s.io/prometheus
+kubectl delete -n monitoring serviceaccount/prometheus
+kubectl delete -n monitoring clusterrolebinding.rbac.authorization.k8s.io/prometheus
+
+kubectl delete ns demo
+kubectl delete ns monitoring
+```
+
+## Next Steps
+
+- Monitor your SingleStore database with KubeDB using [`out-of-the-box` Prometheus operator](/docs/guides/singlestore/monitoring/prometheus-operator/index.md).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).
diff --git a/docs/guides/singlestore/monitoring/builtin-prometheus/yamls/builtin-prom-singlestore.yaml b/docs/guides/singlestore/monitoring/builtin-prometheus/yamls/builtin-prom-singlestore.yaml
new file mode 100644
index 0000000000..2891cd6fa6
--- /dev/null
+++ b/docs/guides/singlestore/monitoring/builtin-prometheus/yamls/builtin-prom-singlestore.yaml
@@ -0,0 +1,54 @@
+apiVersion: kubedb.com/v1alpha2
+kind: Singlestore
+metadata:
+ name: builtin-prom-sdb
+ namespace: demo
+spec:
+ version: "8.7.10"
+ topology:
+ aggregator:
+ replicas: 2
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "600m"
+ requests:
+ memory: "2Gi"
+ cpu: "600m"
+ storage:
+ storageClassName: "standard"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
+ leaf:
+ replicas: 2
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "600m"
+ requests:
+ memory: "2Gi"
+ cpu: "600m"
+ storage:
+ storageClassName: "standard"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 10Gi
+ licenseSecret:
+ name: license-secret
+ storageType: Durable
+ deletionPolicy: WipeOut
+ monitor:
+ agent: prometheus.io/builtin
diff --git a/docs/guides/singlestore/monitoring/builtin-prometheus/yamls/prom-config.yaml b/docs/guides/singlestore/monitoring/builtin-prometheus/yamls/prom-config.yaml
new file mode 100644
index 0000000000..45aee6317a
--- /dev/null
+++ b/docs/guides/singlestore/monitoring/builtin-prometheus/yamls/prom-config.yaml
@@ -0,0 +1,68 @@
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: prometheus-config
+ labels:
+ app: prometheus-demo
+ namespace: monitoring
+data:
+ prometheus.yml: |-
+ global:
+ scrape_interval: 5s
+ evaluation_interval: 5s
+ scrape_configs:
+ - job_name: 'kubedb-databases'
+ honor_labels: true
+ scheme: http
+ kubernetes_sd_configs:
+ - role: endpoints
+ # by default Prometheus server select all Kubernetes services as possible target.
+ # relabel_config is used to filter only desired endpoints
+ relabel_configs:
+      # keep only those services that has "prometheus.io/scrape","prometheus.io/path" and "prometheus.io/port" annotations
+ - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape, __meta_kubernetes_service_annotation_prometheus_io_port]
+ separator: ;
+ regex: true;(.*)
+ action: keep
+ # currently KubeDB supported databases uses only "http" scheme to export metrics. so, drop any service that uses "https" scheme.
+ - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
+ action: drop
+ regex: https
+ # only keep the stats services created by KubeDB for monitoring purpose which has "-stats" suffix
+ - source_labels: [__meta_kubernetes_service_name]
+ separator: ;
+ regex: (.*-stats)
+ action: keep
+ # service created by KubeDB will have "app.kubernetes.io/name" and "app.kubernetes.io/instance" annotations. keep only those services that have these annotations.
+ - source_labels: [__meta_kubernetes_service_label_app_kubernetes_io_name]
+ separator: ;
+ regex: (.*)
+ action: keep
+ # read the metric path from "prometheus.io/path: " annotation
+ - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
+ action: replace
+ target_label: __metrics_path__
+ regex: (.+)
+ # read the port from "prometheus.io/port: " annotation and update scraping address accordingly
+ - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
+ action: replace
+ target_label: __address__
+ regex: ([^:]+)(?::\d+)?;(\d+)
+ replacement: $1:$2
+ # add service namespace as label to the scraped metrics
+ - source_labels: [__meta_kubernetes_namespace]
+ separator: ;
+ regex: (.*)
+ target_label: namespace
+ replacement: $1
+ action: replace
+ # add service name as a label to the scraped metrics
+ - source_labels: [__meta_kubernetes_service_name]
+ separator: ;
+ regex: (.*)
+ target_label: service
+ replacement: $1
+ action: replace
+ # add stats service's labels to the scraped metrics
+ - action: labelmap
+ regex: __meta_kubernetes_service_label_(.+)
diff --git a/docs/guides/singlestore/monitoring/overview/images/database-monitoring-overview.svg b/docs/guides/singlestore/monitoring/overview/images/database-monitoring-overview.svg
new file mode 100644
index 0000000000..395eefb334
--- /dev/null
+++ b/docs/guides/singlestore/monitoring/overview/images/database-monitoring-overview.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/docs/guides/singlestore/monitoring/overview/index.md b/docs/guides/singlestore/monitoring/overview/index.md
new file mode 100644
index 0000000000..7cb18f352e
--- /dev/null
+++ b/docs/guides/singlestore/monitoring/overview/index.md
@@ -0,0 +1,122 @@
+---
+title: SingleStore Monitoring Overview
+description: SingleStore Monitoring Overview
+menu:
+ docs_{{ .version }}:
+ identifier: guides-sdb-monitoring-overview
+ name: Overview
+ parent: guides-sdb-monitoring
+ weight: 10
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Monitoring SingleStore with KubeDB
+
+KubeDB has native support for monitoring via [Prometheus](https://prometheus.io/). You can use builtin [Prometheus](https://github.com/prometheus/prometheus) scraper or [Prometheus operator](https://github.com/prometheus-operator/prometheus-operator) to monitor KubeDB managed databases. This tutorial will show you how database monitoring works with KubeDB and how to configure Database crd to enable monitoring.
+
+## Overview
+
+KubeDB uses Prometheus [exporter](https://prometheus.io/docs/instrumenting/exporters/#databases) images to export metrics for the respective databases. However, with SingleStore, you can obtain metrics without using an exporter image by configuring monitoring using the `memsql-admin` binary. We have integrated this configuration into our operator, supporting both TLS and non-TLS setups. To enable monitoring, you simply need to specify it in your SingleStore YAML file. The following diagram illustrates the logical flow of database monitoring with KubeDB.
+
+
+
+
+
+When a user creates a database crd with the `spec.monitor` section configured, the KubeDB operator provisions the respective database and injects an exporter image as a sidecar of the database pod. It also creates a dedicated stats service with name `{database-crd-name}-stats` for monitoring. The Prometheus server can scrape metrics using this stats service.
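+
+For example, once such a database is provisioned, you can list every KubeDB-managed stats service in the cluster by its `kubedb.com/role=stats` label. This is a quick way to see which databases currently expose metrics.
+
+```bash
+$ kubectl get svc --all-namespaces -l kubedb.com/role=stats
+```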
+
+## Configure Monitoring
+
+In order to enable monitoring for a database, you have to configure the `spec.monitor` section. KubeDB provides the following options to configure the `spec.monitor` section:
+
+| Field | Type | Uses |
+| -------------------------------------------------- | ---------- | ---------------------------------------------------------------------------------------------------------------------------------------------- |
+| `spec.monitor.agent` | `Required` | Type of the monitoring agent that will be used to monitor this database. It can be `prometheus.io/builtin` or `prometheus.io/operator`. |
+| `spec.monitor.prometheus.exporter.port` | `Optional` | Port number where the exporter side car will serve metrics. |
+| `spec.monitor.prometheus.exporter.args` | `Optional` | Arguments to pass to the exporter sidecar. |
+| `spec.monitor.prometheus.exporter.env` | `Optional` | List of environment variables to set in the exporter sidecar container. |
+| `spec.monitor.prometheus.exporter.resources` | `Optional` | Resources required by exporter sidecar container. |
+| `spec.monitor.prometheus.exporter.securityContext` | `Optional` | Security options the exporter should run with. |
+| `spec.monitor.prometheus.serviceMonitor.labels` | `Optional` | Labels for `ServiceMonitor` crd. |
+| `spec.monitor.prometheus.serviceMonitor.interval` | `Optional` | Interval at which metrics should be scraped. |
+
+## Sample Configuration
+
+A sample YAML for a SingleStore crd with the `spec.monitor` section configured to enable monitoring with [Prometheus operator](https://github.com/prometheus-operator/prometheus-operator) is shown below.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Singlestore
+metadata:
+ name: prom-operator-sdb
+ namespace: demo
+spec:
+ version: "8.7.10"
+ topology:
+ aggregator:
+ replicas: 2
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "600m"
+ requests:
+ memory: "2Gi"
+ cpu: "600m"
+ storage:
+ storageClassName: "standard"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
+ leaf:
+ replicas: 2
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "600m"
+ requests:
+ memory: "2Gi"
+ cpu: "600m"
+ storage:
+ storageClassName: "standard"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 10Gi
+ licenseSecret:
+ name: license-secret
+ storageType: Durable
+ deletionPolicy: WipeOut
+ monitor:
+ agent: prometheus.io/operator
+ prometheus:
+ serviceMonitor:
+ labels:
+ release: prometheus
+ interval: 10s
+```
+
+Here, we have specified that we are going to monitor this server using Prometheus operator through `spec.monitor.agent: prometheus.io/operator`. KubeDB will create a `ServiceMonitor` crd for this database, and this `ServiceMonitor` will carry the `release: prometheus` label so that the Prometheus server can select it.
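+
+Once the database is deployed, you can verify that the expected `ServiceMonitor` exists and carries that label with a simple label selector query, for example:
+
+```bash
+$ kubectl get servicemonitor --all-namespaces -l release=prometheus
+```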
+
+## Next Steps
+
+- Learn how to monitor Elasticsearch database with KubeDB using [builtin-Prometheus](/docs/guides/elasticsearch/monitoring/using-builtin-prometheus.md) and using [Prometheus operator](/docs/guides/elasticsearch/monitoring/using-prometheus-operator.md).
+- Learn how to monitor PostgreSQL database with KubeDB using [builtin-Prometheus](/docs/guides/postgres/monitoring/using-builtin-prometheus.md) and using [Prometheus operator](/docs/guides/postgres/monitoring/using-prometheus-operator.md).
+- Learn how to monitor MySQL database with KubeDB using [builtin-Prometheus](/docs/guides/mysql/monitoring/builtin-prometheus/index.md) and using [Prometheus operator](/docs/guides/mysql/monitoring/prometheus-operator/index.md).
+- Learn how to monitor MongoDB database with KubeDB using [builtin-Prometheus](/docs/guides/mongodb/monitoring/using-builtin-prometheus.md) and using [Prometheus operator](/docs/guides/mongodb/monitoring/using-prometheus-operator.md).
+- Learn how to monitor Redis server with KubeDB using [builtin-Prometheus](/docs/guides/redis/monitoring/using-builtin-prometheus.md) and using [Prometheus operator](/docs/guides/redis/monitoring/using-prometheus-operator.md).
+- Learn how to monitor Memcached server with KubeDB using [builtin-Prometheus](/docs/guides/memcached/monitoring/using-builtin-prometheus.md) and using [Prometheus operator](/docs/guides/memcached/monitoring/using-prometheus-operator.md).
diff --git a/docs/guides/singlestore/monitoring/prometheus-operator/images/prom-operator-sdb-target.png b/docs/guides/singlestore/monitoring/prometheus-operator/images/prom-operator-sdb-target.png
new file mode 100644
index 0000000000..d3c6416ec9
Binary files /dev/null and b/docs/guides/singlestore/monitoring/prometheus-operator/images/prom-operator-sdb-target.png differ
diff --git a/docs/guides/singlestore/monitoring/prometheus-operator/images/prometheus-operator.png b/docs/guides/singlestore/monitoring/prometheus-operator/images/prometheus-operator.png
new file mode 100644
index 0000000000..a9719d7f15
Binary files /dev/null and b/docs/guides/singlestore/monitoring/prometheus-operator/images/prometheus-operator.png differ
diff --git a/docs/guides/singlestore/monitoring/prometheus-operator/index.md b/docs/guides/singlestore/monitoring/prometheus-operator/index.md
new file mode 100644
index 0000000000..e951d4fbea
--- /dev/null
+++ b/docs/guides/singlestore/monitoring/prometheus-operator/index.md
@@ -0,0 +1,361 @@
+---
+title: Monitor SingleStore using Prometheus Operator
+menu:
+ docs_{{ .version }}:
+ identifier: guides-sdb-monitoring-prometheus-operator
+ name: Prometheus Operator
+ parent: guides-sdb-monitoring
+ weight: 15
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Monitoring SingleStore Using Prometheus operator
+
+[Prometheus operator](https://github.com/prometheus-operator/prometheus-operator) provides simple and Kubernetes native way to deploy and configure Prometheus server. This tutorial will show you how to use Prometheus operator to monitor SingleStore database deployed with KubeDB.
+
+The following diagram shows how KubeDB Provisioner operator monitor `SingleStore` using Prometheus Operator. Open the image in a new tab to see the enlarged version.
+
+
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- To learn how Prometheus monitoring works with KubeDB in general, please visit [here](/docs/guides/singlestore/monitoring/overview/index.md).
+
+- To keep database resources isolated, we are going to use a separate namespace called `demo` throughout this tutorial. Run the following command to prepare your cluster:
+
+ ```bash
+ $ kubectl create ns demo
+ namespace/demo created
+ ```
+
+- We need a [Prometheus operator](https://github.com/prometheus-operator/prometheus-operator) instance running. If you don't already have a running instance, deploy one following the docs from [here](https://github.com/appscode/third-party-tools/blob/master/monitoring/prometheus/operator/README.md).
+
+- If you don't already have a Prometheus server running, deploy one following the tutorial from [here](https://github.com/appscode/third-party-tools/blob/master/monitoring/prometheus/operator/README.md#deploy-prometheus-server).
+
+> Note: YAML files used in this tutorial are stored in [docs/guides/singlestore/monitoring/prometheus-operator/yamls](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/singlestore/monitoring/prometheus-operator/yamls) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Find out required labels for ServiceMonitor
+
+We need to know the labels used by a `Prometheus` crd to select `ServiceMonitor` objects. We are going to provide these labels in the `spec.monitor.prometheus.serviceMonitor.labels` field of the SingleStore crd so that KubeDB creates the `ServiceMonitor` object accordingly.
+
+At first, let's find out the available Prometheus server in our cluster.
+
+```bash
+$ kubectl get prometheus --all-namespaces
+NAMESPACE NAME VERSION REPLICAS AGE
+default prometheus 1 2m19s
+```
+
+> If you don't have any Prometheus server running in your cluster, deploy one following the guide specified in **Before You Begin** section.
+
+Now, let's view the YAML of the available Prometheus server `prometheus` in `default` namespace.
+
+```yaml
+$ kubectl get prometheus -n default prometheus -o yaml
+apiVersion: monitoring.coreos.com/v1
+kind: Prometheus
+metadata:
+ annotations:
+ kubectl.kubernetes.io/last-applied-configuration: |
+ {"apiVersion":"monitoring.coreos.com/v1","kind":"Prometheus","metadata":{"annotations":{},"labels":{"prometheus":"prometheus"},"name":"prometheus","namespace":"default"},"spec":{"replicas":1,"resources":{"requests":{"memory":"400Mi"}},"serviceAccountName":"prometheus","serviceMonitorNamespaceSelector":{"matchLabels":{"prometheus":"prometheus"}},"serviceMonitorSelector":{"matchLabels":{"release":"prometheus"}}}}
+ creationTimestamp: "2020-08-25T04:02:07Z"
+ generation: 1
+ labels:
+ prometheus: prometheus
+ ...
+ manager: kubectl
+ operation: Update
+ time: "2020-08-25T04:02:07Z"
+ name: prometheus
+ namespace: default
+ resourceVersion: "2087"
+ selfLink: /apis/monitoring.coreos.com/v1/namespaces/default/prometheuses/prometheus
+ uid: 972a50cb-b751-418b-b2bc-e0ecc9232730
+spec:
+ replicas: 1
+ resources:
+ requests:
+ memory: 400Mi
+ serviceAccountName: prometheus
+ serviceMonitorNamespaceSelector:
+ matchLabels:
+ prometheus: prometheus
+ serviceMonitorSelector:
+ matchLabels:
+ release: prometheus
+```
+
+- The `spec.serviceMonitorSelector` field specifies which `ServiceMonitors` should be included. The above label `release: prometheus` is used by this selector. So, we are going to use this label in the `spec.monitor.prometheus.serviceMonitor.labels` field of the SingleStore crd.
+- The `spec.serviceMonitorNamespaceSelector` field specifies that `ServiceMonitors` can be selected from namespaces other than the Prometheus namespace using a namespace selector. The above label `prometheus: prometheus` is used to select the namespace where the `ServiceMonitor` is created.
+
+### Add Label to database namespace
+
+KubeDB creates a `ServiceMonitor` in the database namespace `demo`. We need to add a label to the `demo` namespace. Prometheus will select this namespace by using its `spec.serviceMonitorNamespaceSelector` field.
+
+Let's add the label `prometheus: prometheus` to the `demo` namespace,
+
+```bash
+$ kubectl patch namespace demo -p '{"metadata":{"labels": {"prometheus":"prometheus"}}}'
+namespace/demo patched
+```
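+
+You can confirm that the label is in place and that it matches the namespace selector of the Prometheus server with the following commands (the jsonpath expression assumes the Prometheus object shown above):
+
+```bash
+$ kubectl get ns demo --show-labels
+
+$ kubectl get prometheus -n default prometheus \
+    -o jsonpath='{.spec.serviceMonitorNamespaceSelector.matchLabels}{"\n"}'
+```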
+
+## Create SingleStore License Secret
+
+We need a SingleStore license to create a SingleStore database. So, ensure that you have acquired a license and then simply pass it to KubeDB as a secret.
+
+```bash
+$ kubectl create secret generic -n demo license-secret \
+ --from-literal=username=license \
+ --from-literal=password='your-license-set-here'
+secret/license-secret created
+```
+
+## Deploy SingleStore with Monitoring Enabled
+
+At first, let's deploy a SingleStore database with monitoring enabled. Below is the SingleStore object that we are going to create.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Singlestore
+metadata:
+ name: prom-operator-sdb
+ namespace: demo
+spec:
+ version: "8.7.10"
+ topology:
+ aggregator:
+ replicas: 2
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "600m"
+ requests:
+ memory: "2Gi"
+ cpu: "600m"
+ storage:
+ storageClassName: "standard"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
+ leaf:
+ replicas: 2
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "600m"
+ requests:
+ memory: "2Gi"
+ cpu: "600m"
+ storage:
+ storageClassName: "standard"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 10Gi
+ licenseSecret:
+ name: license-secret
+ storageType: Durable
+ deletionPolicy: WipeOut
+ monitor:
+ agent: prometheus.io/operator
+ prometheus:
+ serviceMonitor:
+ labels:
+ release: prometheus
+ interval: 10s
+```
+
+Here,
+
+- `monitor.agent: prometheus.io/operator` indicates that we are going to monitor this server using Prometheus operator.
+
+- `monitor.prometheus.serviceMonitor.labels` specifies that KubeDB should create the `ServiceMonitor` with these labels.
+
+- `monitor.prometheus.serviceMonitor.interval` indicates that the Prometheus server should scrape metrics from this database at a 10 second interval.
+
+Let's create the SingleStore object that we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/singlestore/monitoring/prometheus-operator/yamls/prom-operator-singlestore.yaml
+singlestore.kubedb.com/prom-operator-sdb created
+```
+
+Now, wait for the database to go into `Ready` state.
+
+```bash
+$ watch -n 3 kubectl get singlestore -n demo prom-operator-sdb
+
+NAME TYPE VERSION STATUS AGE
+prom-operator-sdb kubedb.com/v1alpha2 8.7.10 Ready 10m
+
+```
+
+KubeDB will create a separate stats service with name `{SingleStore crd name}-stats` for monitoring purpose.
+
+```bash
+$ kubectl get svc -n demo --selector="app.kubernetes.io/instance=prom-operator-sdb"
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+prom-operator-sdb         ClusterIP   10.128.249.124   <none>        3306/TCP,8081/TCP   12m
+prom-operator-sdb-pods    ClusterIP   None             <none>        3306/TCP            12m
+prom-operator-sdb-stats   ClusterIP   10.128.25.236    <none>        9104/TCP            12m
+
+```
+
+Here, `prom-operator-sdb-stats` service has been created for monitoring purpose.
+
+Let's describe this stats service.
+
+```yaml
+$ kubectl describe svc -n demo prom-operator-sdb-stats
+Name: prom-operator-sdb-stats
+Namespace: demo
+Labels: app.kubernetes.io/component=database
+ app.kubernetes.io/instance=prom-operator-sdb
+ app.kubernetes.io/managed-by=kubedb.com
+ app.kubernetes.io/name=singlestores.kubedb.com
+ kubedb.com/role=stats
+Annotations: monitoring.appscode.com/agent: prometheus.io/operator
+Selector: app.kubernetes.io/instance=prom-operator-sdb,app.kubernetes.io/managed-by=kubedb.com,app.kubernetes.io/name=singlestores.kubedb.com
+Type: ClusterIP
+IP Family Policy: SingleStack
+IP Families: IPv4
+IP: 10.128.25.236
+IPs: 10.128.25.236
+Port: metrics 9104/TCP
+TargetPort: metrics/TCP
+Endpoints: 10.2.1.140:9104,10.2.1.141:9104
+Session Affinity: None
+Events:                   <none>
+
+```
+
+Notice the `Labels` and `Port` fields. The `ServiceMonitor` will use this information to target its endpoints.
+
+KubeDB will also create a `ServiceMonitor` crd in the `demo` namespace that selects the endpoints of the `prom-operator-sdb-stats` service. Verify that the `ServiceMonitor` crd has been created.
+
+```bash
+$ kubectl get servicemonitor -n demo
+NAME AGE
+prom-operator-sdb-stats 32m
+
+```
+
+Let's verify that the `ServiceMonitor` has the label that we had specified in `spec.monitor` section of SingleStore crd.
+
+```yaml
+$ kubectl get servicemonitor -n demo prom-operator-sdb-stats -oyaml
+apiVersion: monitoring.coreos.com/v1
+kind: ServiceMonitor
+metadata:
+ creationTimestamp: "2024-10-01T05:37:40Z"
+ generation: 1
+ labels:
+ app.kubernetes.io/component: database
+ app.kubernetes.io/instance: prom-operator-sdb
+ app.kubernetes.io/managed-by: kubedb.com
+ app.kubernetes.io/name: singlestores.kubedb.com
+ release: prometheus
+ name: prom-operator-sdb-stats
+ namespace: demo
+ ownerReferences:
+ - apiVersion: v1
+ blockOwnerDeletion: true
+ controller: true
+ kind: Service
+ name: prom-operator-sdb-stats
+ uid: 33802913-be0f-49ea-ac81-cf0136ed9fbc
+ resourceVersion: "98648"
+ uid: f26855f0-5f0e-45a6-8bf2-531d2a370377
+spec:
+ endpoints:
+ - honorLabels: true
+ interval: 10s
+ path: /metrics
+ port: metrics
+ namespaceSelector:
+ matchNames:
+ - demo
+ selector:
+ matchLabels:
+ app.kubernetes.io/component: database
+ app.kubernetes.io/instance: prom-operator-sdb
+ app.kubernetes.io/managed-by: kubedb.com
+ app.kubernetes.io/name: singlestores.kubedb.com
+ kubedb.com/role: stats
+```
+
+Notice that the `ServiceMonitor` has label `release: prometheus` that we had specified in SingleStore crd.
+
+Also notice that the `ServiceMonitor` has a selector that matches the labels we have seen in the `prom-operator-sdb-stats` service. It also targets the `metrics` port that we have seen in the stats service.
+
+## Verify Monitoring Metrics
+
+At first, let's find out the respective Prometheus pod for `prometheus` Prometheus server.
+
+```bash
+$ kubectl get pod -n default -l=app=prometheus
+NAME READY STATUS RESTARTS AGE
+prometheus-prometheus-0 3/3 Running 1 121m
+```
+
+Prometheus server is listening to port `9090` of `prometheus-prometheus-0` pod. We are going to use [port forwarding](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) to access Prometheus dashboard.
+
+Run the following command on a separate terminal to forward port 9090 of the `prometheus-prometheus-0` pod,
+
+```bash
+$ kubectl port-forward -n default prometheus-prometheus-0 9090
+Forwarding from 127.0.0.1:9090 -> 9090
+Forwarding from [::1]:9090 -> 9090
+```
+
+Now, we can access the dashboard at `localhost:9090`. Open [http://localhost:9090](http://localhost:9090) in your browser. You should see the `metrics` endpoint of the `prom-operator-sdb-stats` service as one of the targets.
+
+
+
+
+
+Check the `endpoint` and `service` labels marked by the red rectangle. They verify that the target is our expected database. Now, you can view the collected metrics and create a graph from the homepage of this Prometheus dashboard. You can also use this Prometheus server as a data source for [Grafana](https://grafana.com/) and create beautiful dashboards with the collected metrics.
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run the following commands:
+
+```bash
+# cleanup database
+kubectl delete -n demo sdb/prom-operator-sdb
+
+# cleanup Prometheus resources if exist
+kubectl delete -f https://raw.githubusercontent.com/appscode/third-party-tools/master/monitoring/prometheus/coreos-operator/artifacts/prometheus.yaml
+kubectl delete -f https://raw.githubusercontent.com/appscode/third-party-tools/master/monitoring/prometheus/coreos-operator/artifacts/prometheus-rbac.yaml
+
+# cleanup Prometheus operator resources if exist
+kubectl delete -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/release-0.41/bundle.yaml
+
+# delete namespace
+kubectl delete ns demo
+```
+
+## Next Steps
+
+- Monitor your SingleStore database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/guides/singlestore/monitoring/builtin-prometheus/index.md).
+- Detail concepts of [SingleStore object](/docs/guides/singlestore/concepts/singlestore.md).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).
diff --git a/docs/guides/singlestore/monitoring/prometheus-operator/yamls/prom-operator-singlestore.yaml b/docs/guides/singlestore/monitoring/prometheus-operator/yamls/prom-operator-singlestore.yaml
new file mode 100644
index 0000000000..370ddb2016
--- /dev/null
+++ b/docs/guides/singlestore/monitoring/prometheus-operator/yamls/prom-operator-singlestore.yaml
@@ -0,0 +1,59 @@
+apiVersion: kubedb.com/v1alpha2
+kind: Singlestore
+metadata:
+ name: prom-operator-sdb
+ namespace: demo
+spec:
+ version: "8.7.10"
+ topology:
+ aggregator:
+ replicas: 2
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "600m"
+ requests:
+ memory: "2Gi"
+ cpu: "600m"
+ storage:
+ storageClassName: "standard"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
+ leaf:
+ replicas: 2
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "600m"
+ requests:
+ memory: "2Gi"
+ cpu: "600m"
+ storage:
+ storageClassName: "standard"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 10Gi
+ licenseSecret:
+ name: license-secret
+ storageType: Durable
+ deletionPolicy: WipeOut
+ monitor:
+ agent: prometheus.io/operator
+ prometheus:
+ serviceMonitor:
+ labels:
+ release: prometheus
+ interval: 10s
\ No newline at end of file
diff --git a/docs/guides/singlestore/quickstart/quickstart.md b/docs/guides/singlestore/quickstart/quickstart.md
index f2a1724e85..9cc52e7183 100644
--- a/docs/guides/singlestore/quickstart/quickstart.md
+++ b/docs/guides/singlestore/quickstart/quickstart.md
@@ -47,10 +47,12 @@ This tutorial will show you how to use KubeDB to run a SingleStore database.
When you have installed KubeDB, it has created `SinglestoreVersion` crd for all supported SingleStore versions. Check it by using the `kubectl get singlestoreversions` command. You can also use `sdbv` shorthand instead of `singlestoreversions`.
```bash
-$ kubectl get singlestoreversions
+$ kubectl get singlestoreversions.catalog.kubedb.com
NAME VERSION DB_IMAGE DEPRECATED AGE
-8.1.32 8.1.32 ghcr.io/appscode-images/singlestore-node:alma-8.1.32-e3d3cde6da 72m
-8.5.7 8.5.7 ghcr.io/appscode-images/singlestore-node:alma-8.5.7-bf633c1a54 72m
+8.1.32 8.1.32 ghcr.io/appscode-images/singlestore-node:alma-8.1.32-e3d3cde6da 2d1h
+8.5.30 8.5.30 ghcr.io/appscode-images/singlestore-node:alma-8.5.30-4f46ab16a5 2d1h
+8.5.7 8.5.7 ghcr.io/appscode-images/singlestore-node:alma-8.5.7-bf633c1a54 2d1h
+8.7.10 8.7.10 ghcr.io/appscode-images/singlestore-node:alma-8.7.10-95e2357384 2d1h
```
## Create SingleStore License Secret
diff --git a/docs/guides/singlestore/reconfigure-tls/_index.md b/docs/guides/singlestore/reconfigure-tls/_index.md
new file mode 100644
index 0000000000..3a49b8c81c
--- /dev/null
+++ b/docs/guides/singlestore/reconfigure-tls/_index.md
@@ -0,0 +1,10 @@
+---
+title: Reconfigure SingleStore TLS/SSL
+menu:
+ docs_{{ .version }}:
+ identifier: guides-sdb-reconfigure-tls
+ name: Reconfigure TLS/SSL
+ parent: guides-singlestore
+ weight: 46
+menu_name: docs_{{ .version }}
+---
diff --git a/docs/guides/singlestore/reconfigure-tls/cluster/examples/issuer.yaml b/docs/guides/singlestore/reconfigure-tls/cluster/examples/issuer.yaml
new file mode 100644
index 0000000000..8ffb97a846
--- /dev/null
+++ b/docs/guides/singlestore/reconfigure-tls/cluster/examples/issuer.yaml
@@ -0,0 +1,8 @@
+apiVersion: cert-manager.io/v1
+kind: Issuer
+metadata:
+ name: sdb-issuer
+ namespace: demo
+spec:
+ ca:
+ secretName: sdb-ca
\ No newline at end of file
diff --git a/docs/guides/singlestore/reconfigure-tls/cluster/examples/sample-sdb.yaml b/docs/guides/singlestore/reconfigure-tls/cluster/examples/sample-sdb.yaml
new file mode 100644
index 0000000000..f654417867
--- /dev/null
+++ b/docs/guides/singlestore/reconfigure-tls/cluster/examples/sample-sdb.yaml
@@ -0,0 +1,51 @@
+apiVersion: kubedb.com/v1alpha2
+kind: Singlestore
+metadata:
+ name: sample-sdb
+ namespace: demo
+spec:
+ version: "8.7.10"
+ topology:
+ aggregator:
+ replicas: 2
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "700m"
+ requests:
+ memory: "2Gi"
+ cpu: "700m"
+ storage:
+ storageClassName: "standard"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
+ leaf:
+ replicas: 1
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "700m"
+ requests:
+ memory: "2Gi"
+ cpu: "700m"
+ storage:
+ storageClassName: "standard"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 10Gi
+ licenseSecret:
+ name: license-secret
+ deletionPolicy: WipeOut
\ No newline at end of file
diff --git a/docs/guides/singlestore/reconfigure-tls/cluster/examples/sdbops-add-tls.yaml b/docs/guides/singlestore/reconfigure-tls/cluster/examples/sdbops-add-tls.yaml
new file mode 100644
index 0000000000..4f8ad58246
--- /dev/null
+++ b/docs/guides/singlestore/reconfigure-tls/cluster/examples/sdbops-add-tls.yaml
@@ -0,0 +1,21 @@
+apiVersion: ops.kubedb.com/v1alpha1
+kind: SinglestoreOpsRequest
+metadata:
+ name: sdbops-add-tls
+ namespace: demo
+spec:
+ type: ReconfigureTLS
+ databaseRef:
+ name: sample-sdb
+ tls:
+ issuerRef:
+ name: sdb-issuer
+ kind: Issuer
+ apiGroup: "cert-manager.io"
+ certificates:
+ - alias: client
+ subject:
+ organizations:
+ - singlestore
+ organizationalUnits:
+ - client
\ No newline at end of file
diff --git a/docs/guides/singlestore/reconfigure-tls/cluster/examples/sdbops-remove-tls.yaml b/docs/guides/singlestore/reconfigure-tls/cluster/examples/sdbops-remove-tls.yaml
new file mode 100644
index 0000000000..4643dbfe37
--- /dev/null
+++ b/docs/guides/singlestore/reconfigure-tls/cluster/examples/sdbops-remove-tls.yaml
@@ -0,0 +1,11 @@
+apiVersion: ops.kubedb.com/v1alpha1
+kind: SinglestoreOpsRequest
+metadata:
+ name: sdbops-remove-tls
+ namespace: demo
+spec:
+ type: ReconfigureTLS
+ databaseRef:
+ name: sample-sdb
+ tls:
+ remove: true
\ No newline at end of file
diff --git a/docs/guides/singlestore/reconfigure-tls/cluster/examples/sdbops-rotate-tls.yaml b/docs/guides/singlestore/reconfigure-tls/cluster/examples/sdbops-rotate-tls.yaml
new file mode 100644
index 0000000000..503b4040d6
--- /dev/null
+++ b/docs/guides/singlestore/reconfigure-tls/cluster/examples/sdbops-rotate-tls.yaml
@@ -0,0 +1,11 @@
+apiVersion: ops.kubedb.com/v1alpha1
+kind: SinglestoreOpsRequest
+metadata:
+ name: sdbops-rotate-tls
+ namespace: demo
+spec:
+ type: ReconfigureTLS
+ databaseRef:
+ name: sample-sdb
+ tls:
+ rotateCertificates: true
\ No newline at end of file
diff --git a/docs/guides/singlestore/reconfigure-tls/cluster/examples/sdbops-update-tls.yaml b/docs/guides/singlestore/reconfigure-tls/cluster/examples/sdbops-update-tls.yaml
new file mode 100644
index 0000000000..549fbd7480
--- /dev/null
+++ b/docs/guides/singlestore/reconfigure-tls/cluster/examples/sdbops-update-tls.yaml
@@ -0,0 +1,17 @@
+apiVersion: ops.kubedb.com/v1alpha1
+kind: SinglestoreOpsRequest
+metadata:
+ name: sdbops-update-tls
+ namespace: demo
+spec:
+ type: ReconfigureTLS
+ databaseRef:
+ name: sample-sdb
+ tls:
+ certificates:
+ - alias: server
+ subject:
+ organizations:
+ - kubedb:server
+ emailAddresses:
+ - "kubedb@appscode.com"
\ No newline at end of file
diff --git a/docs/guides/singlestore/reconfigure-tls/cluster/index.md b/docs/guides/singlestore/reconfigure-tls/cluster/index.md
new file mode 100644
index 0000000000..37cc93eb74
--- /dev/null
+++ b/docs/guides/singlestore/reconfigure-tls/cluster/index.md
@@ -0,0 +1,657 @@
+---
+title: Reconfigure SingleStore TLS/SSL Encryption
+menu:
+ docs_{{ .version }}:
+ identifier: guides-sdb-reconfigure-tls-cluster
+ name: Reconfigure TLS/SSL Encryption
+ parent: guides-sdb-reconfigure-tls
+ weight: 20
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Reconfigure SingleStore TLS/SSL (Transport Encryption)
+
+KubeDB supports reconfiguring, i.e. adding, removing, updating, and rotating the TLS/SSL certificates of an existing SingleStore database via a SingleStoreOpsRequest. This tutorial will show you how to use KubeDB to reconfigure TLS/SSL encryption.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes Cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Install [`cert-manger`](https://cert-manager.io/docs/installation/) v1.6.0 or later to your cluster to manage your SSL/TLS certificates.
+
+- Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/setup/README.md).
+
+- To keep things isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+ ```bash
+ $ kubectl create ns demo
+ namespace/demo created
+ ```
+
+## Add TLS to a SingleStore Cluster
+
+Here, we are going to create a SingleStore database without TLS and then reconfigure the database to use TLS.
+> **Note:** The steps for reconfiguring TLS of a SingleStore `Standalone` are the same as those for a SingleStore `Cluster`.
+
+### Create SingleStore License Secret
+
+We need a SingleStore license to create a SingleStore database. So, ensure that you have acquired a license and then simply pass it to KubeDB as a secret.
+
+```bash
+$ kubectl create secret generic -n demo license-secret \
+ --from-literal=username=license \
+ --from-literal=password='your-license-set-here'
+secret/license-secret created
+```
+
+### Deploy SingleStore without TLS
+
+In this section, we are going to deploy a SingleStore Cluster database without TLS. In the next few sections we will reconfigure TLS using `SingleStoreOpsRequest` CRD. Below is the YAML of the `SingleStore` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Singlestore
+metadata:
+ name: sample-sdb
+ namespace: demo
+spec:
+ version: "8.7.10"
+ topology:
+ aggregator:
+ replicas: 2
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "700m"
+ requests:
+ memory: "2Gi"
+ cpu: "700m"
+ storage:
+ storageClassName: "standard"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
+ leaf:
+ replicas: 1
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "700m"
+ requests:
+ memory: "2Gi"
+ cpu: "700m"
+ storage:
+ storageClassName: "standard"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 10Gi
+ licenseSecret:
+ name: license-secret
+ deletionPolicy: WipeOut
+```
+
+Let's create the `SingleStore` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/singlestore/reconfigure-tls/cluster/examples/sample-sdb.yaml
+singlestore.kubedb.com/sample-sdb created
+```
+
+Now, wait until `sample-sdb` has status `Ready`, i.e.,
+
+```bash
+$ kubectl get sdb -n demo
+NAME TYPE VERSION STATUS AGE
+sample-sdb kubedb.com/v1alpha2 8.7.10 Ready 38m
+
+```
+
+```bash
+$ kubectl exec -it -n demo sample-sdb-aggregator-0 -- bash
+Defaulted container "singlestore" out of: singlestore, singlestore-coordinator, singlestore-init (init)
+[memsql@sample-sdb-aggregator-0 /]$ memsql -uroot -p$ROOT_PASSWORD
+singlestore-client: [Warning] Using a password on the command line interface can be insecure.
+Welcome to the MySQL monitor. Commands end with ; or \g.
+Your MySQL connection id is 1188
+Server version: 5.7.32 SingleStoreDB source distribution (compatible; MySQL Enterprise & MySQL Commercial)
+
+Copyright (c) 2000, 2022, Oracle and/or its affiliates.
+
+Oracle is a registered trademark of Oracle Corporation and/or its
+affiliates. Other names may be trademarks of their respective
+owners.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+singlestore> show variables like '%ssl%';
++---------------------------------+------------+
+| Variable_name | Value |
++---------------------------------+------------+
+| default_user_require_ssl | OFF |
+| exporter_ssl_ca | |
+| exporter_ssl_capath | |
+| exporter_ssl_cert | |
+| exporter_ssl_key | |
+| exporter_ssl_key_passphrase | [redacted] |
+| have_openssl | OFF |
+| have_ssl | OFF |
+| jwks_ssl_ca_certificate | |
+| node_replication_ssl_only | OFF |
+| openssl_version | 805306480 |
+| processlist_rpc_json_max_size | 2048 |
+| ssl_ca | |
+| ssl_capath | |
+| ssl_cert | |
+| ssl_cipher | |
+| ssl_fips_mode | OFF |
+| ssl_key | |
+| ssl_key_passphrase | [redacted] |
+| ssl_last_reload_attempt_time | |
+| ssl_last_successful_reload_time | |
++---------------------------------+------------+
+21 rows in set (0.00 sec)
+```
+
+We can verify from the above output that TLS is disabled for this database.
+
+### Create Issuer/ClusterIssuer
+
+Now, we are going to create an example `Issuer` that will be used throughout this tutorial. Alternatively, you can follow this [cert-manager tutorial](https://cert-manager.io/docs/configuration/ca/) to create your own `Issuer`. By following the below steps, we are going to create our desired issuer,
+
+- Start off by generating our ca-certificates using openssl,
+
+```bash
+$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ./ca.key -out ./ca.crt -subj "/CN=memsql/O=kubedb"
+Generating a RSA private key
+...........................................................................+++++
+........................................................................................................+++++
+writing new private key to './ca.key'
+```
+
+- Create a secret using the certificate files we have just generated,
+
+```bash
+kubectl create secret tls sdb-ca \
+ --cert=ca.crt \
+ --key=ca.key \
+ --namespace=demo
+secret/sdb-ca created
+```
+
+Now, we are going to create an `Issuer` using the `sdb-ca` secret that holds the ca-certificate we have just created. Below is the YAML of the `Issuer` cr that we are going to create,
+
+```yaml
+apiVersion: cert-manager.io/v1
+kind: Issuer
+metadata:
+ name: sdb-issuer
+ namespace: demo
+spec:
+ ca:
+ secretName: sdb-ca
+```
+
+Let’s create the `Issuer` cr we have shown above,
+
+```bash
+kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/singlestore/reconfigure-tls/cluster/examples/issuer.yaml
+issuer.cert-manager.io/sdb-issuer created
+```
+
+### Create SingleStoreOpsRequest
+
+In order to add TLS to the database, we have to create a `SingleStoreOpsRequest` CRO with our created issuer. Below is the YAML of the `SingleStoreOpsRequest` CRO that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: SinglestoreOpsRequest
+metadata:
+ name: sdbops-add-tls
+ namespace: demo
+spec:
+ type: ReconfigureTLS
+ databaseRef:
+ name: sample-sdb
+ tls:
+ issuerRef:
+ name: sdb-issuer
+ kind: Issuer
+ apiGroup: "cert-manager.io"
+ certificates:
+ - alias: client
+ subject:
+ organizations:
+ - singlestore
+ organizationalUnits:
+ - client
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing reconfigure TLS operation on `sample-sdb` database.
+- `spec.type` specifies that we are performing `ReconfigureTLS` on our database.
+- `spec.tls.issuerRef` specifies the issuer name, kind and api group.
+- `spec.tls.certificates` specifies the certificates. You can learn more about this field from [here](/docs/guides/singlestore/concepts/singlestore.md#spectls).
+
+Let's create the `SingleStoreOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/singlestore/reconfigure-tls/cluster/examples/sdbops-add-tls.yaml
+singlestoreopsrequest.ops.kubedb.com/sdbops-add-tls created
+
+```
+
+#### Verify TLS Enabled Successfully
+
+Let's wait for `SingleStoreOpsRequest` to be `Successful`. Run the following command to watch `SingleStoreOpsRequest` CRO,
+
+```bash
+$ kubectl get singlestoreopsrequest -n demo
+NAME TYPE STATUS AGE
+singlestoreopsrequest.ops.kubedb.com/sdbops-add-tls ReconfigureTLS Successful 2m45s
+
+```
+
+We can see from the above output that the `SingleStoreOpsRequest` has succeeded.
+
+Now, we are going to connect to the database to verify that the `SingleStore` server has been configured with TLS/SSL encryption.
+
+Let's exec into the pod to verify TLS/SSL configuration,
+
+```bash
+$ kubectl exec -it -n demo sample-sdb-aggregator-0 -- bash
+Defaulted container "singlestore" out of: singlestore, singlestore-coordinator, singlestore-init (init)
+[memsql@sample-sdb-aggregator-0 /]$ ls etc/memsql/certs/
+ca.crt client.crt client.key server.crt server.key
+[memsql@sample-sdb-aggregator-0 /]$
+[memsql@sample-sdb-aggregator-0 /]$ memsql -uroot -p$ROOT_PASSWORD
+singlestore-client: [Warning] Using a password on the command line interface can be insecure.
+Welcome to the MySQL monitor. Commands end with ; or \g.
+Your MySQL connection id is 90
+Server version: 5.7.32 SingleStoreDB source distribution (compatible; MySQL Enterprise & MySQL Commercial)
+
+Copyright (c) 2000, 2022, Oracle and/or its affiliates.
+
+Oracle is a registered trademark of Oracle Corporation and/or its
+affiliates. Other names may be trademarks of their respective
+owners.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+singlestore> show variables like '%ssl%';
++---------------------------------+------------------------------+
+| Variable_name | Value |
++---------------------------------+------------------------------+
+| default_user_require_ssl | OFF |
+| exporter_ssl_ca | |
+| exporter_ssl_capath | |
+| exporter_ssl_cert | |
+| exporter_ssl_key | |
+| exporter_ssl_key_passphrase | [redacted] |
+| have_openssl | ON |
+| have_ssl | ON |
+| jwks_ssl_ca_certificate | |
+| node_replication_ssl_only | OFF |
+| openssl_version | 805306480 |
+| processlist_rpc_json_max_size | 2048 |
+| ssl_ca | /etc/memsql/certs/ca.crt |
+| ssl_capath | |
+| ssl_cert | /etc/memsql/certs/server.crt |
+| ssl_cipher | |
+| ssl_fips_mode | OFF |
+| ssl_key | /etc/memsql/certs/server.key |
+| ssl_key_passphrase | [redacted] |
+| ssl_last_reload_attempt_time | |
+| ssl_last_successful_reload_time | |
++---------------------------------+------------------------------+
+21 rows in set (0.00 sec)
+```
+
+We can see from the above output that `have_ssl` is set to `ON`. So, TLS has been enabled successfully for this database.
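+
+As an additional check, you can perform a TLS handshake against the SingleStore port from inside the pod using `openssl s_client`. This is only a sketch; it assumes the OpenSSL binary available in the container supports the MySQL STARTTLS mode (`-starttls mysql`, available in OpenSSL 1.1.1 and later).
+
+```bash
+$ kubectl exec -it -n demo sample-sdb-aggregator-0 -c singlestore -- \
+    openssl s_client -connect localhost:3306 -starttls mysql -CAfile /etc/memsql/certs/ca.crt
+```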
+
+## Rotate Certificate
+
+Now we are going to rotate the certificate of this database. First let's check the current expiration date of the certificate.
+
+```bash
+$ kubectl exec -it -n demo sample-sdb-aggregator-0 -- bash
+Defaulted container "singlestore" out of: singlestore, singlestore-coordinator, singlestore-init (init)
+[memsql@sample-sdb-aggregator-0 /]$ openssl x509 -in /etc/memsql/certs/server.crt -inform PEM -enddate -nameopt RFC2253 -noout
+notAfter=Jan 6 06:56:55 2025 GMT
+
+```
+
+So, the certificate will expire on `Jan 6 06:56:55 2025 GMT`.
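+
+You can also read the expiration time from the cert-manager `Certificate` status instead of inspecting the file inside the pod. Assuming the server certificate is named `sample-sdb-server-cert` (as shown later in this guide):
+
+```bash
+$ kubectl get certificate -n demo sample-sdb-server-cert -o jsonpath='{.status.notAfter}{"\n"}'
+```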
+
+### Create SingleStoreOpsRequest
+
+Now we are going to rotate this certificate using a SingleStoreOpsRequest. Below is the YAML of the ops request that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: SinglestoreOpsRequest
+metadata:
+ name: sdbops-rotate-tls
+ namespace: demo
+spec:
+ type: ReconfigureTLS
+ databaseRef:
+ name: sample-sdb
+ tls:
+ rotateCertificates: true
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing reconfigure TLS operation on `sample-sdb` database.
+- `spec.type` specifies that we are performing `ReconfigureTLS` on our database.
+- `spec.tls.rotateCertificates` specifies that we want to rotate the certificate of this database.
+
+Let's create the `SingleStoreOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/singlestore/reconfigure-tls/cluster/examples/sdbops-rotate-tls.yaml
+singlestoreopsrequest.ops.kubedb.com/sdbops-rotate-tls created
+```
+
+#### Verify Certificate Rotated Successfully
+
+Let's wait for `SingleStoreOpsRequest` to be `Successful`. Run the following command to watch `SingleStoreOpsRequest` CRO,
+
+```bash
+$ kubectl get singlestoreopsrequest -n demo
+NAME TYPE STATUS AGE
+sdbops-rotate-tls ReconfigureTLS Successful 4m14s
+
+```
+
+We can see from the above output that the `SingleStoreOpsRequest` has succeeded. Now, let's check the expiration date of the certificate.
+
+```bash
+$ kubectl exec -it -n demo sample-sdb-aggregator-0 -- bash
+Defaulted container "singlestore" out of: singlestore, singlestore-coordinator, singlestore-init (init)
+[memsql@sample-sdb-aggregator-0 /]$ openssl x509 -in /etc/memsql/certs/server.crt -inform PEM -enddate -nameopt RFC2253 -noout
+notAfter=Jan 6 07:15:47 2025 GMT
+
+```
+
+As we can see from the above output, the certificate has been rotated successfully.
+
+## Update Certificate
+
+Now, we are going to update the server certificate.
+
+- Let's describe the server certificate `sample-sdb-server-cert`
+```bash
+ $ kubectl describe certificate -n demo sample-sdb-server-cert
+Name: sample-sdb-server-cert
+Namespace: demo
+Labels: app.kubernetes.io/component=database
+ app.kubernetes.io/instance=sample-sdb
+ app.kubernetes.io/managed-by=kubedb.com
+ app.kubernetes.io/name=singlestores.kubedb.com
+Annotations:
+API Version: cert-manager.io/v1
+Kind: Certificate
+Metadata:
+ Creation Timestamp: 2024-10-08T06:56:55Z
+ Generation: 1
+ Owner References:
+ API Version: kubedb.com/v1alpha2
+ Block Owner Deletion: true
+ Controller: true
+ Kind: Singlestore
+ Name: sample-sdb
+ UID: 5e42538e-c631-4583-9f47-328742e6d938
+ Resource Version: 4965452
+ UID: 65c6936b-1bd0-413d-a96d-edf0cff17897
+Spec:
+ Common Name: sample-sdb
+ Dns Names:
+ *.sample-sdb-pods.demo.svc
+ *.sample-sdb-pods.demo.svc.cluster.local
+ *.sample-sdb.demo.svc
+ localhost
+ sample-sdb
+ sample-sdb.demo.svc
+ Ip Addresses:
+ 127.0.0.1
+ Issuer Ref:
+ Group: cert-manager.io
+ Kind: Issuer
+ Name: sdb-issuer
+ Secret Name: sample-sdb-server-cert
+ Usages:
+ digital signature
+ key encipherment
+ server auth
+ client auth
+Status:
+ Conditions:
+ Last Transition Time: 2024-10-08T06:56:56Z
+ Message: Certificate is up to date and has not expired
+ Observed Generation: 1
+ Reason: Ready
+ Status: True
+ Type: Ready
+ Not After: 2025-01-06T07:15:47Z
+ Not Before: 2024-10-08T07:15:47Z
+ Renewal Time: 2024-12-07T07:15:47Z
+ Revision: 23
+Events:
+ Type Reason Age From Message
+ ---- ------ ---- ---- -------
+ Normal Generated 23m cert-manager-certificates-key-manager Stored new private key in temporary Secret resource "sample-sdb-server-cert-48d82"
+ Normal Requested 23m cert-manager-certificates-request-manager Created new CertificateRequest resource "sample-sdb-server-cert-msv5z"
+ Normal Issuing 23m cert-manager-certificates-trigger Issuing certificate as Secret does not exist
+ Normal Requested 7m39s cert-manager-certificates-request-manager Created new CertificateRequest resource "sample-sdb-server-cert-qpmbp"
+ Normal Requested 7m38s cert-manager-certificates-request-manager Created new CertificateRequest resource "sample-sdb-server-cert-2cldn"
+ Normal Requested 7m34s cert-manager-certificates-request-manager Created new CertificateRequest resource "sample-sdb-server-cert-qtm4z"
+ Normal Requested 7m33s cert-manager-certificates-request-manager Created new CertificateRequest resource "sample-sdb-server-cert-5tflq"
+ Normal Requested 7m29s cert-manager-certificates-request-manager Created new CertificateRequest resource "sample-sdb-server-cert-qzd6h"
+ Normal Requested 7m28s cert-manager-certificates-request-manager Created new CertificateRequest resource "sample-sdb-server-cert-q6bd7"
+ Normal Requested 7m12s cert-manager-certificates-request-manager Created new CertificateRequest resource "sample-sdb-server-cert-jd2cx"
+ Normal Requested 7m11s cert-manager-certificates-request-manager Created new CertificateRequest resource "sample-sdb-server-cert-74dr5"
+ Normal Requested 7m7s cert-manager-certificates-request-manager Created new CertificateRequest resource "sample-sdb-server-cert-4k2wf"
+ Normal Reused 5m7s (x22 over 7m39s) cert-manager-certificates-key-manager Reusing private key stored in existing Secret resource "sample-sdb-server-cert"
+ Normal Issuing 5m7s (x23 over 23m) cert-manager-certificates-issuing The certificate has been successfully issued
+ Normal Requested 5m7s (x13 over 7m6s) cert-manager-certificates-request-manager (combined from similar events): Created new CertificateRequest resource "sample-sdb-server-cert-qn8g9"
+
+```
+
+We want to add `subject` and `emailAddresses` to the spec of the server certificate.
+
+### Create SingleStoreOpsRequest
+
+Below is the YAML of the `SingleStoreOpsRequest` CRO that we are going to create to update the server certificate,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: SinglestoreOpsRequest
+metadata:
+ name: sdbops-update-tls
+ namespace: demo
+spec:
+ type: ReconfigureTLS
+ databaseRef:
+ name: sample-sdb
+ tls:
+ certificates:
+ - alias: server
+ subject:
+ organizations:
+ - kubedb:server
+ emailAddresses:
+ - "kubedb@appscode.com"
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing reconfigure TLS operation on `sample-sdb` database.
+- `spec.type` specifies that we are performing `ReconfigureTLS` on our database.
+- `spec.tls.certificates` specifies the changes that we want in the certificate objects.
+- `spec.tls.certificates[].alias` specifies the certificate type, which is one of these: `server`, `client`.
+
+Let's create the `SingleStoreOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/singlestore/reconfigure-tls/cluster/examples/sdbops-update-tls.yaml
+singlestoreopsrequest.ops.kubedb.com/sdbops-update-tls created
+
+```
+
+#### Verify certificate is updated successfully
+
+Let's wait for `SingleStoreOpsRequest` to be `Successful`. Run the following command to watch `SingleStoreOpsRequest` CRO,
+
+```bash
+$ kubectl get singlestoreopsrequest -n demo
+NAME TYPE STATUS AGE
+sdbops-update-tls ReconfigureTLS Successful 3m24s
+
+
+```
+
+We can see from the above output that the `SingleStoreOpsRequest` has succeeded.
+
+Now, let's exec into a database node and check the certificate subject to see if it matches the one we have provided.
+
+```bash
+$ kubectl exec -it -n demo sample-sdb-aggregator-0 -- bash
+Defaulted container "singlestore" out of: singlestore, singlestore-coordinator, singlestore-init (init)
+[memsql@sample-sdb-aggregator-0 /]$ openssl x509 -in /etc/memsql/certs/server.crt -inform PEM -subject -email -nameopt RFC2253 -noout
+subject=CN=sample-sdb,O=kubedb:server
+kubedb@appscode.com
+```
+
+We can see from the above output that the subject name and email address match the ones we provided in the ops request. So, the server certificate has been updated successfully.
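+
+You can also verify that the operator has propagated the new fields to the cert-manager `Certificate` object itself; its `Spec` section should now show the organization and email address we provided:
+
+```bash
+$ kubectl describe certificate -n demo sample-sdb-server-cert
+```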
+
+## Remove TLS from the Database
+
+Now, we are going to remove TLS from this database using a SingleStoreOpsRequest.
+
+### Create SingleStoreOpsRequest
+
+Below is the YAML of the `SingleStoreOpsRequest` CRO that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: SinglestoreOpsRequest
+metadata:
+ name: sdbops-remove-tls
+ namespace: demo
+spec:
+ type: ReconfigureTLS
+ databaseRef:
+ name: sample-sdb
+ tls:
+ remove: true
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing reconfigure TLS operation on `sample-sdb` database.
+- `spec.type` specifies that we are performing `ReconfigureTLS` on our database.
+- `spec.tls.remove` specifies that we want to remove tls from this database.
+
+Let's create the `SingleStoreOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/singlestore/reconfigure-tls/cluster/examples/sdbops-remove-tls.yaml
+singlestoreopsrequest.ops.kubedb.com/sdbops-remove-tls created
+```
+
+#### Verify TLS Removed Successfully
+
+Let's wait for `SingleStoreOpsRequest` to be `Successful`. Run the following command to watch `SingleStoreOpsRequest` CRO,
+
+```bash
+$ kubectl get singlestoreopsrequest -n demo
+NAME TYPE STATUS AGE
+sdbops-remove-tls ReconfigureTLS Successful 27m
+```
+
+We can see from the above output that the `SingleStoreOpsRequest` has succeeded. If we describe the `SingleStoreOpsRequest` we will get an overview of the steps that were followed.
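+
+For example, the following command lists the conditions and events recorded for this ops request:
+
+```bash
+$ kubectl describe singlestoreopsrequest -n demo sdbops-remove-tls
+```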
+
+Now, let's exec into the database pod and verify whether TLS has been disabled.
+
+```bash
+$ kubectl exec -it -n demo sample-sdb-aggregator-0 -- bash
+Defaulted container "singlestore" out of: singlestore, singlestore-coordinator, singlestore-init (init)
+[memsql@sample-sdb-aggregator-0 /]$ ls etc/memsql/
+memsql_exporter.cnf memsqlctl.hcl
+[memsql@sample-sdb-aggregator-0 /]$
+[memsql@sample-sdb-aggregator-0 /]$ memsql -uroot -p$ROOT_PASSWORD
+singlestore-client: [Warning] Using a password on the command line interface can be insecure.
+Welcome to the MySQL monitor. Commands end with ; or \g.
+Your MySQL connection id is 840
+Server version: 5.7.32 SingleStoreDB source distribution (compatible; MySQL Enterprise & MySQL Commercial)
+
+Copyright (c) 2000, 2022, Oracle and/or its affiliates.
+
+Oracle is a registered trademark of Oracle Corporation and/or its
+affiliates. Other names may be trademarks of their respective
+owners.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+singlestore> show variables like '%ssl%';
++---------------------------------+------------+
+| Variable_name | Value |
++---------------------------------+------------+
+| default_user_require_ssl | OFF |
+| exporter_ssl_ca | |
+| exporter_ssl_capath | |
+| exporter_ssl_cert | |
+| exporter_ssl_key | |
+| exporter_ssl_key_passphrase | [redacted] |
+| have_openssl | OFF |
+| have_ssl | OFF |
+| jwks_ssl_ca_certificate | |
+| node_replication_ssl_only | OFF |
+| openssl_version | 805306480 |
+| processlist_rpc_json_max_size | 2048 |
+| ssl_ca | |
+| ssl_capath | |
+| ssl_cert | |
+| ssl_cipher | |
+| ssl_fips_mode | OFF |
+| ssl_key | |
+| ssl_key_passphrase | [redacted] |
+| ssl_last_reload_attempt_time | |
+| ssl_last_successful_reload_time | |
++---------------------------------+------------+
+21 rows in set (0.00 sec)
+
+singlestore> exit
+Bye
+```
+
+So, we can see from the above output that TLS has been disabled successfully.
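+
+You can also confirm from the Kubernetes side that the `Singlestore` object no longer carries a TLS configuration. This is a quick check, assuming the operator clears the `spec.tls` section when TLS is removed; an empty output means TLS is no longer configured:
+
+```bash
+$ kubectl get sdb -n demo sample-sdb -o jsonpath='{.spec.tls}{"\n"}'
+```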
+
+## Cleaning up
+
+To cleanup the Kubernetes resources created by this tutorial, run:
+
+```bash
+$ kubectl delete sdb -n demo --all
+$ kubectl delete issuer -n demo --all
+$ kubectl delete singlestoreopsrequest -n demo --all
+$ kubectl delete ns demo
+```
diff --git a/docs/guides/singlestore/reconfigure-tls/overview/images/reconfigure-tls.svg b/docs/guides/singlestore/reconfigure-tls/overview/images/reconfigure-tls.svg
new file mode 100644
index 0000000000..cdb554c409
--- /dev/null
+++ b/docs/guides/singlestore/reconfigure-tls/overview/images/reconfigure-tls.svg
@@ -0,0 +1,99 @@
+
diff --git a/docs/guides/singlestore/reconfigure-tls/overview/index.md b/docs/guides/singlestore/reconfigure-tls/overview/index.md
new file mode 100644
index 0000000000..39f958d519
--- /dev/null
+++ b/docs/guides/singlestore/reconfigure-tls/overview/index.md
@@ -0,0 +1,54 @@
+---
+title: Reconfiguring TLS of SingleStore Database
+menu:
+ docs_{{ .version }}:
+ identifier: guides-sdb-reconfigure-tls-overview
+ name: Overview
+ parent: guides-sdb-reconfigure-tls
+ weight: 10
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Reconfiguring TLS of SingleStore Database
+
+This guide will give an overview of how KubeDB Ops Manager reconfigures the TLS configuration of a `SingleStore` database, i.e. adds TLS, removes TLS, updates the issuer/cluster issuer or certificates, and rotates the certificates.
+
+## Before You Begin
+
+- You should be familiar with the following `KubeDB` concepts:
+ - [SingleStore](/docs/guides/singlestore/concepts/singlestore.md)
+ - [SingleStoreOpsRequest](/docs/guides/singlestore/concepts/opsrequest.md)
+
+## How Reconfiguring SingleStore TLS Configuration Process Works
+
+The following diagram shows how KubeDB Ops Manager reconfigures TLS of a `SingleStore` database. Open the image in a new tab to see the enlarged version.
+
+
+
+The Reconfiguring SingleStore TLS process consists of the following steps:
+
+1. At first, a user creates a `SingleStore` Custom Resource Object (CRO).
+
+2. `KubeDB` Provisioner operator watches the `SingleStore` CRO.
+
+3. When the operator finds a `SingleStore` CR, it creates the required number of `PetSets` and other necessary resources like secrets, services, etc.
+
+4. Then, in order to reconfigure the TLS configuration of the `SingleStore` database the user creates a `SingleStoreOpsRequest` CR with desired information.
+
+5. `KubeDB` Ops-manager operator watches the `SingleStoreOpsRequest` CR.
+
+6. When it finds a `SingleStoreOpsRequest` CR, it pauses the `SingleStore` object which is referred from the `SingleStoreOpsRequest`. So, the `KubeDB` Community operator doesn't perform any operations on the `SingleStore` object during the reconfiguring TLS process.
+
+7. Then the `KubeDB` Enterprise operator will add, remove, update or rotate TLS configuration based on the Ops Request yaml.
+
+8. Then the `KubeDB` Enterprise operator will restart all the Pods of the database so that they restart with the new TLS configuration defined in the `SingleStoreOpsRequest` CR.
+
+9. After the successful reconfiguring of the `SingleStore` TLS, the `KubeDB` Enterprise operator resumes the `SingleStore` object so that the `KubeDB` Community operator resumes its usual operations.
+
+In the next docs, we are going to show a step by step guide on reconfiguring TLS configuration of a SingleStore database using `SingleStoreOpsRequest` CRD.
\ No newline at end of file
diff --git a/docs/guides/singlestore/reconfigure/_index.md b/docs/guides/singlestore/reconfigure/_index.md
new file mode 100644
index 0000000000..831d6b443b
--- /dev/null
+++ b/docs/guides/singlestore/reconfigure/_index.md
@@ -0,0 +1,10 @@
+---
+title: Reconfigure SingleStore Configuration
+menu:
+ docs_{{ .version }}:
+ identifier: guides-sdb-reconfigure
+ name: Reconfigure
+ parent: guides-singlestore
+ weight: 46
+menu_name: docs_{{ .version }}
+---
diff --git a/docs/guides/singlestore/reconfigure/overview/images/sdb-reconfigure.svg b/docs/guides/singlestore/reconfigure/overview/images/sdb-reconfigure.svg
new file mode 100644
index 0000000000..232221941c
--- /dev/null
+++ b/docs/guides/singlestore/reconfigure/overview/images/sdb-reconfigure.svg
@@ -0,0 +1,99 @@
+
diff --git a/docs/guides/singlestore/reconfigure/overview/index.md b/docs/guides/singlestore/reconfigure/overview/index.md
new file mode 100644
index 0000000000..88796eb9f2
--- /dev/null
+++ b/docs/guides/singlestore/reconfigure/overview/index.md
@@ -0,0 +1,54 @@
+---
+title: Reconfiguring SingleStore
+menu:
+ docs_{{ .version }}:
+ identifier: guides-sdb-reconfigure-overview
+ name: Overview
+ parent: guides-sdb-reconfigure
+ weight: 10
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Reconfiguring SingleStore
+
+This guide will give an overview of how KubeDB Ops Manager reconfigures `SingleStore`.
+
+## Before You Begin
+
+- You should be familiar with the following `KubeDB` concepts:
+ - [SingleStore](/docs/guides/singlestore/concepts/)
+ - [SingleStoreOpsRequest](/docs/guides/singlestore/concepts/opsrequest.md)
+
+## How Reconfiguring SingleStore Process Works
+
+The following diagram shows how KubeDB Ops Manager reconfigures `SingleStore` database components. Open the image in a new tab to see the enlarged version.
+
+
+
+The Reconfiguring SingleStore process consists of the following steps:
+
+1. At first, a user creates a `SingleStore` Custom Resource (CR).
+
+2. `KubeDB` Provisioner operator watches the `SingleStore` CR.
+
+3. When the operator finds a `SingleStore` CR, it creates the required number of `PetSets` and other necessary resources like secrets, services, etc.
+
+4. Then, in order to reconfigure the `SingleStore` standalone or cluster the user creates a `SingleStoreOpsRequest` CR with desired information.
+
+5. `KubeDB` Ops-manager operator watches the `SingleStoreOpsRequest` CR.
+
+6. When it finds a `SingleStoreOpsRequest` CR, it halts the `SingleStore` object which is referred from the `SingleStoreOpsRequest`. So, the `KubeDB` provisioner operator doesn't perform any operations on the `SingleStore` object during the reconfiguring process.
+
+7. Then the `KubeDB` Ops-manager operator will replace the existing configuration with the new configuration provided or merge the new configuration with the existing configuration according to the `SingleStoreOpsRequest` CR.
+
+8. Then the `KubeDB` Ops-manager operator will restart the related PetSet Pods so that they restart with the new configuration defined in the `SingleStoreOpsRequest` CR.
+
+9. After the successful reconfiguring of the `SingleStore`, the `KubeDB` Ops-manager operator resumes the `SingleStore` object so that the `KubeDB` Provisioner operator resumes its usual operations.
+
+In the next docs, we are going to show a step by step guide on reconfiguring SingleStore database components using `SingleStoreOpsRequest` CRD.
\ No newline at end of file
diff --git a/docs/guides/singlestore/reconfigure/reconfigure-steps/index.md b/docs/guides/singlestore/reconfigure/reconfigure-steps/index.md
new file mode 100644
index 0000000000..c8e8120137
--- /dev/null
+++ b/docs/guides/singlestore/reconfigure/reconfigure-steps/index.md
@@ -0,0 +1,476 @@
+---
+title: Reconfigure SingleStore Configuration
+menu:
+ docs_{{ .version }}:
+ identifier: guides-sdb-reconfigure-reconfigure-steps
+ name: Reconfigure OpsRequest
+ parent: guides-sdb-reconfigure
+ weight: 20
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Reconfigure SingleStore Cluster Database
+
+This guide will show you how to use `KubeDB` Ops-manager operator to reconfigure a SingleStore Cluster.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster.
+
+- Install `KubeDB` Provisioner and Ops-manager operator in your cluster following the steps [here](/docs/setup/README.md).
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [SingleStore](/docs/guides/singlestore/concepts/singlestore.md)
+  - [SingleStore Cluster](/docs/guides/singlestore/clustering)
+  - [SingleStoreOpsRequest](/docs/guides/singlestore/concepts/opsrequest.md)
+  - [Reconfigure Overview](/docs/guides/singlestore/reconfigure/overview)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+Now, we are going to deploy a `SingleStore` cluster using a version supported by the `KubeDB` operator. Then we are going to apply a `SingleStoreOpsRequest` to reconfigure its configuration.
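+
+If you are unsure which versions are supported, you can list the available `SinglestoreVersion` objects from the catalog before picking one. This is a quick check, assuming the default KubeDB version catalog is installed:
+
+```bash
+$ kubectl get singlestoreversions
+```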
+
+## Create SingleStore License Secret
+
+We need a SingleStore license to create a SingleStore database. So, ensure that you have acquired a license, then pass it to the cluster as a secret.
+
+```bash
+$ kubectl create secret generic -n demo license-secret \
+ --from-literal=username=license \
+ --from-literal=password='your-license-set-here'
+secret/license-secret created
+```
+
+## Deploy SingleStore
+
+At first, we will create a `sdb-config.cnf` file containing the required configuration settings.
+
+```ini
+$ cat sdb-config.cnf
+[server]
+max_connections = 250
+read_buffer_size = 122880
+
+```
+
+Here, `max_connections` is set to `250`, whereas the default value is `100000`. Likewise, `read_buffer_size` is set to `122880`, whereas its default value is `131072`.
+
+Now, we will create a secret with this configuration file.
+
+```bash
+$ kubectl create secret generic -n demo sdb-configuration --from-file=./sdb-config.cnf
+secret/sdb-configuration created
+```
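+
+Before using it, you may want to verify that the secret actually holds the configuration we wrote; the key name follows the file name passed to `--from-file`:
+
+```bash
+$ kubectl get secret -n demo sdb-configuration -o jsonpath='{.data.sdb-config\.cnf}' | base64 -d
+[server]
+max_connections = 250
+read_buffer_size = 122880
+```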
+
+In this section, we are going to create a SingleStore object specifying the `spec.topology.aggregator.configSecret` field to apply this custom configuration. Below is the YAML of the `SingleStore` CR that we are going to create,
+
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Singlestore
+metadata:
+ name: custom-sdb
+ namespace: demo
+spec:
+ version: "8.7.10"
+ topology:
+ aggregator:
+ replicas: 2
+ configSecret:
+ name: sdb-configuration
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "600m"
+ requests:
+ memory: "2Gi"
+ cpu: "600m"
+ storage:
+ storageClassName: "standard"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
+ leaf:
+ replicas: 2
+ configSecret:
+ name: sdb-configuration
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "600m"
+ requests:
+ memory: "2Gi"
+ cpu: "600m"
+ storage:
+ storageClassName: "standard"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 10Gi
+ licenseSecret:
+ name: license-secret
+ storageType: Durable
+ deletionPolicy: WipeOut
+```
+
+Let's create the `SingleStore` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/singlestore/reconfigure/reconfigure-steps/yamls/custom-sdb.yaml
+singlestore.kubedb.com/custom-sdb created
+```
+
+Now, wait until `custom-sdb` has status `Ready`, i.e.,
+
+```bash
+$ kubectl get pod -n demo
+NAME READY STATUS RESTARTS AGE
+custom-sdb-aggregator-0 2/2 Running 0 94s
+custom-sdb-aggregator-1 2/2 Running 0 88s
+custom-sdb-leaf-0 2/2 Running 0 91s
+custom-sdb-leaf-1 2/2 Running 0 86s
+
+$ kubectl get sdb -n demo
+NAME TYPE VERSION STATUS AGE
+custom-sdb kubedb.com/v1alpha2 8.7.10 Ready 4m29s
+```
+
+We can see the database is in the ready phase, so it can accept connections.
+
+Now, we will check if the database has started with the custom configuration we have provided.
+
+> Read the comments written for the following commands. They contain the instructions and explanations of the commands.
+
+```bash
+# Connecting to the database
+$ kubectl exec -it -n demo custom-sdb-aggregator-0 -- bash
+Defaulted container "singlestore" out of: singlestore, singlestore-coordinator, singlestore-init (init)
+[memsql@custom-sdb-aggregator-0 /]$ memsql -uroot -p$ROOT_PASSWORD
+singlestore-client: [Warning] Using a password on the command line interface can be insecure.
+Welcome to the MySQL monitor. Commands end with ; or \g.
+Your MySQL connection id is 208
+Server version: 5.7.32 SingleStoreDB source distribution (compatible; MySQL Enterprise & MySQL Commercial)
+
+Copyright (c) 2000, 2022, Oracle and/or its affiliates.
+
+Oracle is a registered trademark of Oracle Corporation and/or its
+affiliates. Other names may be trademarks of their respective
+owners.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+# value of `max_connections` is same as provided
+singlestore> show variables like 'max_connections';
++-----------------+-------+
+| Variable_name | Value |
++-----------------+-------+
+| max_connections | 250 |
++-----------------+-------+
+1 row in set (0.00 sec)
+
+# value of `read_buffer_size` is same as provided
+singlestore> show variables like 'read_buffer_size';
++------------------+--------+
+| Variable_name | Value |
++------------------+--------+
+| read_buffer_size | 122880 |
++------------------+--------+
+1 row in set (0.00 sec)
+
+singlestore> exit
+Bye
+
+```
+
+As we can see from the configuration of the running SingleStore, the value of `max_connections` has been set to `250` and `read_buffer_size` has been set to `122880`.
+
+### Reconfigure using applyConfig
+
+Now we will reconfigure this database to set `max_connections` to `550` for the aggregator nodes using the `applyConfig` field, so we don't need to create a new config secret.
+
+#### Create SingleStoreOpsRequest
+
+Now, we will apply this change on top of the existing configuration using a `SingleStoreOpsRequest` CR. The `SingleStoreOpsRequest` yaml is given below,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: SinglestoreOpsRequest
+metadata:
+ name: sdbops-reconfigure-config
+ namespace: demo
+spec:
+ type: Reconfigure
+ databaseRef:
+ name: custom-sdb
+ configuration:
+ aggregator:
+ applyConfig:
+ sdb-apply.cnf: |-
+ max_connections = 550
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are reconfiguring `custom-sdb` database.
+- `spec.type` specifies that we are performing `Reconfigure` on our database.
+- `spec.configuration.aggregator.applyConfig` is a map where the only supported key is `sdb-apply.cnf`; it holds the configuration to apply to the aggregator nodes. You can also specify `spec.configuration.leaf.applyConfig`, a similar map with the same key, to apply configuration to the leaf nodes.
+
+Let's create the `SinglestoreOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/singlestore/reconfigure/reconfigure-steps/yamls/reconfigure-using-applyConfig.yaml
+singlestoreopsrequest.ops.kubedb.com/sdbops-reconfigure-config created
+```
+
+#### Verify the new configuration is working
+
+If everything goes well, `KubeDB` Ops-manager operator will update the `configSecret` of `SingleStore` object.
+
+Let's wait for `SinglestoreOpsRequest` to be `Successful`. Run the following command to watch `SinglestoreOpsRequest` CR,
+
+```bash
+$ kubectl get singlestoreopsrequest --all-namespaces
+NAMESPACE NAME TYPE STATUS AGE
+demo sdbops-reconfigure-config Reconfigure Successful 10m
+```
+
+We can see from the above output that the `SinglestoreOpsRequest` has succeeded. If we describe the `SinglestoreOpsRequest` we will get an overview of the steps that were followed to reconfigure the database.
+
+```bash
+$ kubectl describe singlestoreopsrequest -n demo sdbops-reconfigure-config
+Name: sdbops-reconfigure-config
+Namespace: demo
+Labels:
+Annotations:
+API Version: ops.kubedb.com/v1alpha1
+Kind: SinglestoreOpsRequest
+Metadata:
+ Creation Timestamp: 2024-10-04T10:18:22Z
+ Generation: 1
+ Resource Version: 2114236
+ UID: 56b37f6d-d8be-49c7-a588-9740863edd2a
+Spec:
+ Apply: IfReady
+ Configuration:
+ Aggregator:
+ Apply Config:
+ sdb-apply.cnf: max_connections = 550
+ Database Ref:
+ Name: custom-sdb
+ Type: Reconfigure
+Status:
+ Conditions:
+ Last Transition Time: 2024-10-04T10:18:22Z
+ Message: Singlestore ops-request has started to expand volume of singlestore nodes.
+ Observed Generation: 1
+ Reason: Configuration
+ Status: True
+ Type: Configuration
+ Last Transition Time: 2024-10-04T10:18:28Z
+ Message: Successfully paused database
+ Observed Generation: 1
+ Reason: DatabasePauseSucceeded
+ Status: True
+ Type: DatabasePauseSucceeded
+ Last Transition Time: 2024-10-04T10:18:28Z
+ Message: Successfully updated PetSets Resources
+ Observed Generation: 1
+ Reason: UpdatePetSets
+ Status: True
+ Type: UpdatePetSets
+ Last Transition Time: 2024-10-04T10:19:53Z
+ Message: Successfully Restarted Pods With Resources
+ Observed Generation: 1
+ Reason: RestartPods
+ Status: True
+ Type: RestartPods
+ Last Transition Time: 2024-10-04T10:18:33Z
+ Message: get pod; ConditionStatus:True; PodName:custom-sdb-aggregator-0
+ Observed Generation: 1
+ Status: True
+ Type: GetPod--custom-sdb-aggregator-0
+ Last Transition Time: 2024-10-04T10:18:33Z
+ Message: evict pod; ConditionStatus:True; PodName:custom-sdb-aggregator-0
+ Observed Generation: 1
+ Status: True
+ Type: EvictPod--custom-sdb-aggregator-0
+ Last Transition Time: 2024-10-04T10:19:08Z
+ Message: check pod ready; ConditionStatus:True; PodName:custom-sdb-aggregator-0
+ Observed Generation: 1
+ Status: True
+ Type: CheckPodReady--custom-sdb-aggregator-0
+ Last Transition Time: 2024-10-04T10:19:13Z
+ Message: get pod; ConditionStatus:True; PodName:custom-sdb-aggregator-1
+ Observed Generation: 1
+ Status: True
+ Type: GetPod--custom-sdb-aggregator-1
+ Last Transition Time: 2024-10-04T10:19:13Z
+ Message: evict pod; ConditionStatus:True; PodName:custom-sdb-aggregator-1
+ Observed Generation: 1
+ Status: True
+ Type: EvictPod--custom-sdb-aggregator-1
+ Last Transition Time: 2024-10-04T10:19:48Z
+ Message: check pod ready; ConditionStatus:True; PodName:custom-sdb-aggregator-1
+ Observed Generation: 1
+ Status: True
+ Type: CheckPodReady--custom-sdb-aggregator-1
+ Last Transition Time: 2024-10-04T10:19:53Z
+ Message: Successfully completed the reconfiguring for Singlestore
+ Observed Generation: 1
+ Reason: Successful
+ Status: True
+ Type: Successful
+ Observed Generation: 1
+ Phase: Successful
+Events:
+
+```
+
+Now let's connect to a singlestore instance and run a memsql internal command to check the new configuration we have provided.
+
+```bash
+$ kubectl exec -it -n demo custom-sdb-aggregator-0 -- bash
+Defaulted container "singlestore" out of: singlestore, singlestore-coordinator, singlestore-init (init)
+[memsql@custom-sdb-aggregator-0 /]$ memsql -uroot -p$ROOT_PASSWORD
+singlestore-client: [Warning] Using a password on the command line interface can be insecure.
+Welcome to the MySQL monitor. Commands end with ; or \g.
+Your MySQL connection id is 626
+Server version: 5.7.32 SingleStoreDB source distribution (compatible; MySQL Enterprise & MySQL Commercial)
+
+Copyright (c) 2000, 2022, Oracle and/or its affiliates.
+
+Oracle is a registered trademark of Oracle Corporation and/or its
+affiliates. Other names may be trademarks of their respective
+owners.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+singlestore> show variables like 'max_connections';
++-----------------+-------+
+| Variable_name | Value |
++-----------------+-------+
+| max_connections | 550 |
++-----------------+-------+
+1 row in set (0.00 sec)
+
+singlestore> exit
+Bye
+
+
+```
+
+As we can see, the configuration has changed: the value of `max_connections` has been changed from `250` to `550`. So the reconfiguration of the database is successful.
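+
+Since the Ops-manager operator applies the change by updating the config secret referenced by the `SingleStore` object, you can also inspect it from the Kubernetes side. This is a sketch; the generated secret name may differ in your cluster:
+
+```bash
+# find out which config secret the aggregator is now using
+$ SECRET=$(kubectl get sdb -n demo custom-sdb -o jsonpath='{.spec.topology.aggregator.configSecret.name}')
+
+# inspect that secret; its data keys hold the applied configuration files
+$ kubectl get secret -n demo "$SECRET" -o yaml
+```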
+
+### Remove Custom Configuration
+
+We can also remove the existing custom config using a `SinglestoreOpsRequest`. Provide `true` to the field `spec.configuration.aggregator.removeCustomConfig` and make an Ops Request to remove the existing custom configuration.
+
+#### Create SingleStoreOpsRequest
+
+Let's create a `SinglestoreOpsRequest` with `spec.configuration.aggregator.removeCustomConfig` set to `true`,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: SinglestoreOpsRequest
+metadata:
+ name: sdbops-reconfigure-remove
+ namespace: demo
+spec:
+ type: Reconfigure
+ databaseRef:
+ name: custom-sdb
+ configuration:
+ aggregator:
+ removeCustomConfig: true
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are reconfiguring `custom-sdb` database.
+- `spec.type` specifies that we are performing `Reconfigure` on our database.
+- `spec.configuration.aggregator.removeCustomConfig` is a bool field that should be `true` when you want to remove existing custom configuration.
+
+Let's create the `SingleStoreOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/singlestore/reconfigure/reconfigure-steps/yamls/reconfigure-remove.yaml
+singlestoreopsrequest.ops.kubedb.com/sdbops-reconfigure-remove created
+```
+
+#### Verify the new configuration is working
+
+If everything goes well, `KubeDB` Ops-manager operator will update the `configSecret` of `SingleStore` object.
+
+Let's wait for `SingleStoreOpsRequest` to be `Successful`. Run the following command to watch `SingleStoreOpsRequest` CR,
+
+```bash
+$ kubectl get singlestoreopsrequest -n demo
+NAME TYPE STATUS AGE
+sdbops-reconfigure-remove Reconfigure Successful 5m31s
+```
+
+Now let's connect to a singlestore instance and run a singlestore internal command to check the new configuration we have provided.
+
+```bash
+$ kubectl exec -it -n demo custom-sdb-aggregator-0 -- bash
+Defaulted container "singlestore" out of: singlestore, singlestore-coordinator, singlestore-init (init)
+[memsql@custom-sdb-aggregator-0 /]$ memsql -uroot -p$ROOT_PASSWORD
+singlestore-client: [Warning] Using a password on the command line interface can be insecure.
+Welcome to the MySQL monitor. Commands end with ; or \g.
+Your MySQL connection id is 166
+Server version: 5.7.32 SingleStoreDB source distribution (compatible; MySQL Enterprise & MySQL Commercial)
+
+Copyright (c) 2000, 2022, Oracle and/or its affiliates.
+
+Oracle is a registered trademark of Oracle Corporation and/or its
+affiliates. Other names may be trademarks of their respective
+owners.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+singlestore>
+singlestore>
+singlestore> show variables like 'max_connections';
++-----------------+--------+
+| Variable_name   | Value  |
++-----------------+--------+
+| max_connections | 100000 |
++-----------------+--------+
+1 row in set (0.00 sec)
+
+singlestore> exit
+Bye
+
+
+```
+
+As we can see, the configuration has been reverted to its default value. So the removal of the existing custom configuration using `SingleStoreOpsRequest` was successful.
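+
+You can also check the `SingleStore` object to see which config secret, if any, the aggregator still references. This is a quick check, assuming the operator drops the custom `configSecret` reference once the custom configuration is removed:
+
+```bash
+$ kubectl get sdb -n demo custom-sdb -o jsonpath='{.spec.topology.aggregator.configSecret}{"\n"}'
+```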
+
+## Cleaning Up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+$ kubectl delete singlestore -n demo custom-sdb
+$ kubectl delete singlestoreopsrequest -n demo sdbops-reconfigure-config sdbops-reconfigure-remove
+$ kubectl delete ns demo
+```
diff --git a/docs/guides/singlestore/reconfigure/reconfigure-steps/yamls/custom-sdb.yaml b/docs/guides/singlestore/reconfigure/reconfigure-steps/yamls/custom-sdb.yaml
new file mode 100644
index 0000000000..8438852626
--- /dev/null
+++ b/docs/guides/singlestore/reconfigure/reconfigure-steps/yamls/custom-sdb.yaml
@@ -0,0 +1,56 @@
+apiVersion: kubedb.com/v1alpha2
+kind: Singlestore
+metadata:
+ name: custom-sdb
+ namespace: demo
+spec:
+ version: "8.7.10"
+ topology:
+ aggregator:
+ replicas: 2
+ configSecret:
+ name: sdb-configuration
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "600m"
+ requests:
+ memory: "2Gi"
+ cpu: "600m"
+ storage:
+ storageClassName: "standard"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
+ leaf:
+ replicas: 2
+ configSecret:
+ name: sdb-configuration
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "600m"
+ requests:
+ memory: "2Gi"
+ cpu: "600m"
+ storage:
+ storageClassName: "standard"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 10Gi
+ licenseSecret:
+ name: license-secret
+ storageType: Durable
+ deletionPolicy: WipeOut
\ No newline at end of file
diff --git a/docs/guides/singlestore/reconfigure/reconfigure-steps/yamls/reconfigure-remove.yaml b/docs/guides/singlestore/reconfigure/reconfigure-steps/yamls/reconfigure-remove.yaml
new file mode 100644
index 0000000000..49cdc7cc54
--- /dev/null
+++ b/docs/guides/singlestore/reconfigure/reconfigure-steps/yamls/reconfigure-remove.yaml
@@ -0,0 +1,12 @@
+apiVersion: ops.kubedb.com/v1alpha1
+kind: SinglestoreOpsRequest
+metadata:
+  name: sdbops-reconfigure-remove
+ namespace: demo
+spec:
+ type: Reconfigure
+ databaseRef:
+ name: custom-sdb
+ configuration:
+ aggregator:
+ removeCustomConfig: true
\ No newline at end of file
diff --git a/docs/guides/singlestore/reconfigure/reconfigure-steps/yamls/reconfigure-using-applyConfig.yaml b/docs/guides/singlestore/reconfigure/reconfigure-steps/yamls/reconfigure-using-applyConfig.yaml
new file mode 100644
index 0000000000..ecacdbd1ad
--- /dev/null
+++ b/docs/guides/singlestore/reconfigure/reconfigure-steps/yamls/reconfigure-using-applyConfig.yaml
@@ -0,0 +1,14 @@
+apiVersion: ops.kubedb.com/v1alpha1
+kind: SinglestoreOpsRequest
+metadata:
+ name: sdbops-reconfigure-config
+ namespace: demo
+spec:
+ type: Reconfigure
+ databaseRef:
+ name: custom-sdb
+ configuration:
+ aggregator:
+ applyConfig:
+ sdb-apply.cnf: |-
+ max_connections = 550
\ No newline at end of file
diff --git a/docs/guides/singlestore/restart/_index.md b/docs/guides/singlestore/restart/_index.md
new file mode 100644
index 0000000000..100ea6b049
--- /dev/null
+++ b/docs/guides/singlestore/restart/_index.md
@@ -0,0 +1,10 @@
+---
+title: Restart SingleStore
+menu:
+ docs_{{ .version }}:
+ identifier: sdb-restart
+ name: Restart
+ parent: guides-singlestore
+ weight: 49
+menu_name: docs_{{ .version }}
+---
diff --git a/docs/guides/singlestore/restart/restart.md b/docs/guides/singlestore/restart/restart.md
new file mode 100644
index 0000000000..5fa613bd30
--- /dev/null
+++ b/docs/guides/singlestore/restart/restart.md
@@ -0,0 +1,266 @@
+---
+title: Restart SingleStore
+menu:
+ docs_{{ .version }}:
+ identifier: sdb-restart-details
+ name: Restart SingleStore
+ parent: sdb-restart
+ weight: 10
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Restart SingleStore
+
+KubeDB supports restarting the SingleStore database via a SingleStoreOpsRequest. Restarting is useful if some pods get stuck in some phase, or they are not working correctly. This tutorial will show you how to do that.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/setup/README.md).
+
+- To keep things isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+ $ kubectl create ns demo
+ namespace/demo created
+ ```
+
+> Note: YAML files used in this tutorial are stored in [docs/guides/singlestore/restart/yamls](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/singlestore/restart/yamls) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Create SingleStore License Secret
+
+We need a SingleStore license to create a SingleStore database. So, ensure that you have acquired a license, then pass it to the cluster as a secret.
+
+```bash
+$ kubectl create secret generic -n demo license-secret \
+ --from-literal=username=license \
+ --from-literal=password='your-license-set-here'
+secret/license-secret created
+```
+
+## Deploy SingleStore
+
+In this section, we are going to deploy a SingleStore database using KubeDB.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Singlestore
+metadata:
+ name: sdb-sample
+ namespace: demo
+spec:
+ version: "8.7.10"
+ topology:
+ aggregator:
+ replicas: 1
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "600m"
+ requests:
+ memory: "2Gi"
+ cpu: "600m"
+ storage:
+ storageClassName: "longhorn"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
+ leaf:
+ replicas: 2
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "600m"
+ requests:
+ memory: "2Gi"
+ cpu: "600m"
+ storage:
+ storageClassName: "longhorn"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 10Gi
+ licenseSecret:
+ name: license-secret
+ storageType: Durable
+```
+
+Let's create the `SingleStore` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/singlestore/restart/yamls/sdb-sample.yaml
+singlestore.kubedb.com/sdb-sample created
+```
+**Wait for the database to be ready:**
+
+Now, wait for the `SingleStore` to reach the `Ready` state,
+
+```bash
+$ kubectl get singlestore -n demo
+NAME TYPE VERSION STATUS AGE
+sdb-sample kubedb.com/v1alpha2 8.7.10 Ready 2m
+
+```
+
+## Apply Restart opsRequest
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: SinglestoreOpsRequest
+metadata:
+ name: restart
+ namespace: demo
+spec:
+ type: Restart
+ databaseRef:
+ name: sdb-sample
+ timeout: 10m
+ apply: Always
+```
+
+- `spec.type` specifies the type of the ops request.
+- `spec.databaseRef` holds the name of the SingleStore database. The database should be available in the same namespace as the ops request.
+- The meaning of the `spec.timeout` & `spec.apply` fields can be found [here](/docs/guides/singlestore/concepts/opsrequest.md).
+
+> Note: The method of restarting the standalone & clustered singlestore is exactly the same as above. All you need is to specify the corresponding SingleStore name in the `spec.databaseRef.name` section.
+
+Let's create the `SingleStoreOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/singlestore/restart/yamls/restart-ops.yaml
+singlestoreopsrequest.ops.kubedb.com/restart created
+```
+
+Now the Ops-manager operator will restart the pods sequentially by their cardinal suffix.
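+
+While the restart is in progress, you can watch the pods being evicted and recreated one at a time:
+
+```bash
+$ kubectl get pods -n demo -w
+```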
+
+```shell
+$ kubectl get singlestoreopsrequest -n demo
+NAME TYPE STATUS AGE
+restart Restart Successful 10m
+
+$ kubectl get singlestoreopsrequest -n demo restart -oyaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: SinglestoreOpsRequest
+metadata:
+ annotations:
+ kubectl.kubernetes.io/last-applied-configuration: |
+ {"apiVersion":"ops.kubedb.com/v1alpha1","kind":"SinglestoreOpsRequest","metadata":{"annotations":{},"name":"restart","namespace":"demo"},"spec":{"apply":"Always","databaseRef":{"name":"sdb-sample"},"timeout":"10m","type":"Restart"}}
+ creationTimestamp: "2024-10-28T05:31:00Z"
+ generation: 1
+ name: restart
+ namespace: demo
+ resourceVersion: "3549386"
+ uid: b2512e44-89eb-4f1b-ae0d-232caee94f01
+spec:
+ apply: Always
+ databaseRef:
+ name: sdb-sample
+ timeout: 10m
+ type: Restart
+status:
+ conditions:
+ - lastTransitionTime: "2024-10-28T05:31:00Z"
+ message: Singlestore ops-request has started to restart singlestore nodes
+ observedGeneration: 1
+ reason: Restart
+ status: "True"
+ type: Restart
+ - lastTransitionTime: "2024-10-28T05:31:03Z"
+ message: Successfully paused database
+ observedGeneration: 1
+ reason: DatabasePauseSucceeded
+ status: "True"
+ type: DatabasePauseSucceeded
+ - lastTransitionTime: "2024-10-28T05:33:33Z"
+ message: Successfully restarted Singlestore nodes
+ observedGeneration: 1
+ reason: RestartNodes
+ status: "True"
+ type: RestartNodes
+ - lastTransitionTime: "2024-10-28T05:31:08Z"
+ message: get pod; ConditionStatus:True; PodName:sdb-sample-aggregator-0
+ observedGeneration: 1
+ status: "True"
+ type: GetPod--sdb-sample-aggregator-0
+ - lastTransitionTime: "2024-10-28T05:31:08Z"
+ message: evict pod; ConditionStatus:True; PodName:sdb-sample-aggregator-0
+ observedGeneration: 1
+ status: "True"
+ type: EvictPod--sdb-sample-aggregator-0
+ - lastTransitionTime: "2024-10-28T05:31:48Z"
+ message: check pod ready; ConditionStatus:True; PodName:sdb-sample-aggregator-0
+ observedGeneration: 1
+ status: "True"
+ type: CheckPodReady--sdb-sample-aggregator-0
+ - lastTransitionTime: "2024-10-28T05:31:53Z"
+ message: get pod; ConditionStatus:True; PodName:sdb-sample-leaf-0
+ observedGeneration: 1
+ status: "True"
+ type: GetPod--sdb-sample-leaf-0
+ - lastTransitionTime: "2024-10-28T05:31:53Z"
+ message: evict pod; ConditionStatus:True; PodName:sdb-sample-leaf-0
+ observedGeneration: 1
+ status: "True"
+ type: EvictPod--sdb-sample-leaf-0
+ - lastTransitionTime: "2024-10-28T05:32:38Z"
+ message: check pod ready; ConditionStatus:True; PodName:sdb-sample-leaf-0
+ observedGeneration: 1
+ status: "True"
+ type: CheckPodReady--sdb-sample-leaf-0
+ - lastTransitionTime: "2024-10-28T05:32:43Z"
+ message: get pod; ConditionStatus:True; PodName:sdb-sample-leaf-1
+ observedGeneration: 1
+ status: "True"
+ type: GetPod--sdb-sample-leaf-1
+ - lastTransitionTime: "2024-10-28T05:32:43Z"
+ message: evict pod; ConditionStatus:True; PodName:sdb-sample-leaf-1
+ observedGeneration: 1
+ status: "True"
+ type: EvictPod--sdb-sample-leaf-1
+ - lastTransitionTime: "2024-10-28T05:33:28Z"
+ message: check pod ready; ConditionStatus:True; PodName:sdb-sample-leaf-1
+ observedGeneration: 1
+ status: "True"
+ type: CheckPodReady--sdb-sample-leaf-1
+ - lastTransitionTime: "2024-10-28T05:33:33Z"
+ message: Controller has successfully restart the Singlestore replicas
+ observedGeneration: 1
+ reason: Successful
+ status: "True"
+ type: Successful
+ observedGeneration: 1
+ phase: Successful
+```
+
+
+## Cleaning up
+
+To cleanup the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete singlestoreopsrequest -n demo restart
+kubectl delete singlestore -n demo sdb-sample
+kubectl delete ns demo
+```
+
+## Next Steps
+
+- Detail concepts of [SingleStore object](/docs/guides/singlestore/concepts/singlestore.md).
+- Monitor your SingleStore database with KubeDB using [out-of-the-box Prometheus operator](/docs/guides/singlestore/monitoring/prometheus-operator/index.md).
+- Monitor your SingleStore database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/guides/singlestore/monitoring/builtin-prometheus/index.md).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).
diff --git a/docs/guides/singlestore/restart/yamls/restart-ops.yaml b/docs/guides/singlestore/restart/yamls/restart-ops.yaml
new file mode 100644
index 0000000000..4271b8adb0
--- /dev/null
+++ b/docs/guides/singlestore/restart/yamls/restart-ops.yaml
@@ -0,0 +1,11 @@
+apiVersion: ops.kubedb.com/v1alpha1
+kind: SinglestoreOpsRequest
+metadata:
+ name: restart
+ namespace: demo
+spec:
+ type: Restart
+ databaseRef:
+ name: sdb-sample
+ timeout: 10m
+ apply: Always
\ No newline at end of file
diff --git a/docs/guides/singlestore/restart/yamls/sdb-sample.yaml b/docs/guides/singlestore/restart/yamls/sdb-sample.yaml
new file mode 100644
index 0000000000..67a26cdcf0
--- /dev/null
+++ b/docs/guides/singlestore/restart/yamls/sdb-sample.yaml
@@ -0,0 +1,51 @@
+apiVersion: kubedb.com/v1alpha2
+kind: Singlestore
+metadata:
+ name: sdb-sample
+ namespace: demo
+spec:
+ version: "8.7.10"
+ topology:
+ aggregator:
+ replicas: 1
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "600m"
+ requests:
+ memory: "2Gi"
+ cpu: "600m"
+ storage:
+ storageClassName: "longhorn"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
+ leaf:
+ replicas: 2
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "600m"
+ requests:
+ memory: "2Gi"
+ cpu: "600m"
+ storage:
+ storageClassName: "longhorn"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 10Gi
+ licenseSecret:
+ name: license-secret
+ storageType: Durable
\ No newline at end of file
diff --git a/docs/guides/singlestore/scaling/_index.md b/docs/guides/singlestore/scaling/_index.md
new file mode 100644
index 0000000000..809575b24b
--- /dev/null
+++ b/docs/guides/singlestore/scaling/_index.md
@@ -0,0 +1,10 @@
+---
+title: Scaling SingleStore
+menu:
+ docs_{{ .version }}:
+ identifier: guides-sdb-scaling
+ name: Scaling
+ parent: guides-singlestore
+ weight: 43
+menu_name: docs_{{ .version }}
+---
\ No newline at end of file
diff --git a/docs/guides/singlestore/scaling/horizontal-scaling/_index.md b/docs/guides/singlestore/scaling/horizontal-scaling/_index.md
new file mode 100644
index 0000000000..7c75aacefb
--- /dev/null
+++ b/docs/guides/singlestore/scaling/horizontal-scaling/_index.md
@@ -0,0 +1,10 @@
+---
+title: Horizontal Scaling
+menu:
+ docs_{{ .version }}:
+ identifier: guides-sdb-scaling-horizontal
+ name: Horizontal Scaling
+ parent: guides-sdb-scaling
+ weight: 10
+menu_name: docs_{{ .version }}
+---
\ No newline at end of file
diff --git a/docs/guides/singlestore/scaling/horizontal-scaling/cluster/example/sample-sdb.yaml b/docs/guides/singlestore/scaling/horizontal-scaling/cluster/example/sample-sdb.yaml
new file mode 100644
index 0000000000..437685dccf
--- /dev/null
+++ b/docs/guides/singlestore/scaling/horizontal-scaling/cluster/example/sample-sdb.yaml
@@ -0,0 +1,52 @@
+apiVersion: kubedb.com/v1alpha2
+kind: Singlestore
+metadata:
+ name: sample-sdb
+ namespace: demo
+spec:
+ version: "8.7.10"
+ topology:
+ aggregator:
+ replicas: 1
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "600m"
+ requests:
+ memory: "2Gi"
+ cpu: "600m"
+ storage:
+ storageClassName: "longhorn"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
+ leaf:
+ replicas: 2
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "600m"
+ requests:
+ memory: "2Gi"
+ cpu: "600m"
+ storage:
+ storageClassName: "longhorn"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 10Gi
+ licenseSecret:
+ name: license-secret
+ storageType: Durable
+ deletionPolicy: WipeOut
\ No newline at end of file
diff --git a/docs/guides/singlestore/scaling/horizontal-scaling/cluster/example/sdbops-downscale.yaml b/docs/guides/singlestore/scaling/horizontal-scaling/cluster/example/sdbops-downscale.yaml
new file mode 100644
index 0000000000..4ca154c9d2
--- /dev/null
+++ b/docs/guides/singlestore/scaling/horizontal-scaling/cluster/example/sdbops-downscale.yaml
@@ -0,0 +1,11 @@
+apiVersion: ops.kubedb.com/v1alpha1
+kind: SinglestoreOpsRequest
+metadata:
+ name: sdbops-scale-horizontal-down
+ namespace: demo
+spec:
+ type: HorizontalScaling
+ databaseRef:
+ name: sample-sdb
+ horizontalScaling:
+ leaf: 2
\ No newline at end of file
diff --git a/docs/guides/singlestore/scaling/horizontal-scaling/cluster/example/sdbops-upscale.yaml b/docs/guides/singlestore/scaling/horizontal-scaling/cluster/example/sdbops-upscale.yaml
new file mode 100644
index 0000000000..26c0b50178
--- /dev/null
+++ b/docs/guides/singlestore/scaling/horizontal-scaling/cluster/example/sdbops-upscale.yaml
@@ -0,0 +1,11 @@
+apiVersion: ops.kubedb.com/v1alpha1
+kind: SinglestoreOpsRequest
+metadata:
+ name: sdbops-scale-horizontal-up
+ namespace: demo
+spec:
+ type: HorizontalScaling
+ databaseRef:
+ name: sample-sdb
+ horizontalScaling:
+ leaf: 3
\ No newline at end of file
diff --git a/docs/guides/singlestore/scaling/horizontal-scaling/cluster/index.md b/docs/guides/singlestore/scaling/horizontal-scaling/cluster/index.md
new file mode 100644
index 0000000000..7b289a738e
--- /dev/null
+++ b/docs/guides/singlestore/scaling/horizontal-scaling/cluster/index.md
@@ -0,0 +1,326 @@
+---
+title: Horizontal Scaling SingleStore
+menu:
+ docs_{{ .version }}:
+ identifier: guides-sdb-scaling-horizontal-cluster
+ name: Horizontal Scaling OpsRequest
+ parent: guides-sdb-scaling-horizontal
+ weight: 20
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Horizontal Scale SingleStore
+
+This guide will show you how to use `KubeDB` Enterprise operator to scale the cluster of a SingleStore database.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Install `KubeDB` Community and Enterprise operator in your cluster following the steps [here](/docs/setup/README.md).
+
+- You should be familiar with the following `KubeDB` concepts:
+ - [SingleStore](/docs/guides/singlestore/concepts/singlestore.md)
+ - [SingleStore Cluster](/docs/guides/singlestore/clustering/)
+ - [SingleStoreOpsRequest](/docs/guides/singlestore/concepts/opsrequest.md)
+ - [Horizontal Scaling Overview](/docs/guides/singlestore/scaling/horizontal-scaling/overview/)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+## Apply Horizontal Scaling on Cluster
+
+Here, we are going to deploy a `SingleStore` cluster using a version supported by the `KubeDB` operator. Then we are going to apply horizontal scaling on it.
+
+### Create SingleStore License Secret
+
+We need a SingleStore license to create a SingleStore database. So, ensure that you have acquired a license, then pass it to the cluster as a secret.
+
+```bash
+$ kubectl create secret generic -n demo license-secret \
+ --from-literal=username=license \
+ --from-literal=password='your-license-set-here'
+secret/license-secret created
+```
+
+### Deploy SingleStore Cluster
+
+In this section, we are going to deploy a SingleStore cluster. Then, in the next section we will scale the database using `SingleStoreOpsRequest` CRD. Below is the YAML of the `SingleStore` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Singlestore
+metadata:
+ name: sample-sdb
+ namespace: demo
+spec:
+ version: "8.7.10"
+ topology:
+ aggregator:
+ replicas: 1
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "600m"
+ requests:
+ memory: "2Gi"
+ cpu: "600m"
+ storage:
+ storageClassName: "longhorn"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
+ leaf:
+ replicas: 2
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "600m"
+ requests:
+ memory: "2Gi"
+ cpu: "600m"
+ storage:
+ storageClassName: "longhorn"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 10Gi
+ licenseSecret:
+ name: license-secret
+ storageType: Durable
+ deletionPolicy: WipeOut
+```
+
+Let's create the `SingleStore` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/singlestore/scaling/horizontal-scaling/cluster/example/sample-sdb.yaml
+singlestore.kubedb.com/sample-sdb created
+```
+
+Now, wait until `sample-sdb` has status `Ready`, i.e.,
+
+```bash
+$ kubectl get singlestore -n demo
+NAME TYPE VERSION STATUS AGE
+sample-sdb kubedb.com/v1alpha2 8.7.10 Ready 86s
+```
+
+Let's check the number of `aggregator replicas` and `leaf replicas` this database has from the SingleStore object, and the number of pods the `aggregator-petset` and `leaf-petset` have,
+
+```bash
+$ kubectl get sdb -n demo sample-sdb -o json | jq '.spec.topology.aggregator.replicas'
+1
+$ kubectl get sdb -n demo sample-sdb -o json | jq '.spec.topology.leaf.replicas'
+2
+
+$ kubectl get petset -n demo sample-sdb-aggregator -o=jsonpath='{.spec.replicas}{"\n"}'
+1
+$ kubectl get petset -n demo sample-sdb-leaf -o=jsonpath='{.spec.replicas}{"\n"}'
+2
+
+
+```
+
+We can see from both commands that the database has 1 `aggregator replica` and 2 `leaf replicas` in the cluster.
+
+Also, we can verify the replicas from an internal memsqlctl command by exec-ing into a replica.
+
+Now let's connect to a singlestore instance and run a memsqlctl internal command to check the number of replicas,
+
+```bash
+$ kubectl exec -it -n demo sample-sdb-aggregator-0 -- bash
+Defaulted container "singlestore" out of: singlestore, singlestore-coordinator, singlestore-init (init)
+[memsql@sample-sdb-aggregator-0 /]$ memsqlctl show-cluster
++---------------------+--------------------------------------------------+------+--------------------+-----------+-----------+--------+--------------------+------------------------------+--------+-------------------+
+| Role | Host | Port | Availability Group | Pair Host | Pair Port | State | Opened Connections | Average Roundtrip Latency ms | NodeId | Master Aggregator |
++---------------------+--------------------------------------------------+------+--------------------+-----------+-----------+--------+--------------------+------------------------------+--------+-------------------+
+| Leaf | sample-sdb-leaf-0.sample-sdb-pods.demo.svc | 3306 | 1 | null | null | online | 2 | | 2 | |
+| Leaf | sample-sdb-leaf-1.sample-sdb-pods.demo.svc | 3306 | 1 | null | null | online | 3 | | 3 | |
+| Aggregator (Leader) | sample-sdb-aggregator-0.sample-sdb-pods.demo.svc | 3306 | | null | null | online | 1 | null | 1 | 1 |
++---------------------+--------------------------------------------------+------+--------------------+-----------+-----------+--------+--------------------+------------------------------+--------+-------------------+
+
+
+```
+
+We can see from the above output that the cluster has 1 aggregator node and 2 leaf nodes.
+
+We are now ready to apply the `SingleStoreOpsRequest` CR to scale this database.
+
+## Scale Up Replicas
+
+Here, we are going to scale up the replicas of the `leaf nodes` to meet the desired number of replicas after scaling.
+
+#### Create SingleStoreOpsRequest
+
+In order to scale up the replicas of the `leaf nodes` of the database, we have to create a `SingleStoreOpsRequest` CR with our desired replicas. Below is the YAML of the `SingleStoreOpsRequest` CR that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: SinglestoreOpsRequest
+metadata:
+ name: sdbops-scale-horizontal-up
+ namespace: demo
+spec:
+ type: HorizontalScaling
+ databaseRef:
+ name: sample-sdb
+ horizontalScaling:
+ leaf: 3
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing horizontal scaling operation on `sample-sdb` database.
+- `spec.type` specifies that we are performing `HorizontalScaling` on our database.
+- `spec.horizontalScaling.leaf` specifies the desired leaf replicas after scaling.
+
+Let's create the `SingleStoreOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/singlestore/scaling/horizontal-scaling/cluster/example/sdbops-upscale.yaml
+singlestoreopsrequest.ops.kubedb.com/sdbops-scale-horizontal-up created
+```
+
+#### Verify Cluster replicas scaled up successfully
+
+If everything goes well, `KubeDB` Enterprise operator will update the replicas of `SingleStore` object and related `PetSets` and `Pods`.
+
+Let's wait for `SingleStoreOpsRequest` to be `Successful`. Run the following command to watch `SingleStoreOpsRequest` CR,
+
+```bash
+ $ kubectl get singlestoreopsrequest -n demo
+NAME TYPE STATUS AGE
+sdbops-scale-horizontal-up HorizontalScaling Successful 74s
+```
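+
+If the request does not become `Successful` right away, you can inspect the conditions and events recorded on the ops request. This is a generic troubleshooting step, not something specific to this example; the object name is the one created above:
+
+```bash
+# Show the phases, conditions and events recorded on the ops request.
+$ kubectl describe singlestoreopsrequest -n demo sdbops-scale-horizontal-up
+```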
+
+We can see from the above output that the `SingleStoreOpsRequest` has succeeded. Now, we are going to verify the number of `leaf replicas` this database has from the SingleStore object, and the number of pods the `leaf petset` has,
+
+```bash
+$ kubectl get sdb -n demo sample-sdb -o json | jq '.spec.topology.leaf.replicas'
+3
+$ kubectl get petset -n demo sample-sdb-leaf -o=jsonpath='{.spec.replicas}{"\n"}'
+3
+
+```
+
+Now let's connect to a singlestore instance and run a memsqlctl internal command to check the number of replicas,
+
+```bash
+$ kubectl exec -it -n demo sample-sdb-aggregator-0 -- bash
+Defaulted container "singlestore" out of: singlestore, singlestore-coordinator, singlestore-init (init)
+[memsql@sample-sdb-aggregator-0 /]$ memsqlctl show-cluster
++---------------------+--------------------------------------------------+------+--------------------+-----------+-----------+--------+--------------------+------------------------------+--------+-------------------+
+| Role | Host | Port | Availability Group | Pair Host | Pair Port | State | Opened Connections | Average Roundtrip Latency ms | NodeId | Master Aggregator |
++---------------------+--------------------------------------------------+------+--------------------+-----------+-----------+--------+--------------------+------------------------------+--------+-------------------+
+| Leaf | sample-sdb-leaf-0.sample-sdb-pods.demo.svc | 3306 | 1 | null | null | online | 2 | | 2 | |
+| Leaf | sample-sdb-leaf-1.sample-sdb-pods.demo.svc | 3306 | 1 | null | null | online | 3 | | 3 | |
+| Leaf | sample-sdb-leaf-2.sample-sdb-pods.demo.svc | 3306 | 1 | null | null | online | 2 | | 4 | |
+| Aggregator (Leader) | sample-sdb-aggregator-0.sample-sdb-pods.demo.svc | 3306 | | null | null | online | 1 | null | 1 | 1 |
++---------------------+--------------------------------------------------+------+--------------------+-----------+-----------+--------+--------------------+------------------------------+--------+-------------------+
+
+```
+
+From all the above outputs, we can see that the cluster now has `3` leaf replicas. That means we have successfully scaled up the `leaf replicas` of the SingleStore cluster.
+
+### Scale Down Replicas
+
+Here, we are going to scale down the `leaf replicas` of the cluster to meet the desired number of replicas after scaling.
+
+#### Create SingleStoreOpsRequest
+
+In order to scale down the cluster of the database, we have to create a `SingleStoreOpsRequest` CR with our desired replicas. Below is the YAML of the `SingleStoreOpsRequest` CR that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: SinglestoreOpsRequest
+metadata:
+ name: sdbops-scale-horizontal-down
+ namespace: demo
+spec:
+ type: HorizontalScaling
+ databaseRef:
+ name: sample-sdb
+ horizontalScaling:
+ leaf: 2
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing a horizontal scale-down operation on the `sample-sdb` database.
+- `spec.type` specifies that we are performing `HorizontalScaling` on our database.
+- `spec.horizontalScaling.leaf` specifies the desired `leaf replicas` after scaling.
+
+Let's create the `SingleStoreOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/singlestore/scaling/horizontal-scaling/cluster/example/sdbops-downscale.yaml
+singlestoreopsrequest.ops.kubedb.com/sdbops-scale-horizontal-down created
+```
+
+#### Verify Cluster replicas scaled down successfully
+
+If everything goes well, `KubeDB` Enterprise operator will update the replicas of `SingleStore` object and related `PetSets` and `Pods`.
+
+Let's wait for `SingleStoreOpsRequest` to be `Successful`. Run the following command to watch `SingleStoreOpsRequest` CR,
+
+```bash
+$ kubectl get singlestoreopsrequest -n demo
+NAME TYPE STATUS AGE
+sdbops-scale-horizontal-down HorizontalScaling Successful 63s
+```
+
+We can see from the above output that the `SingleStoreOpsRequest` has succeeded. Now, we are going to verify the number of `leaf replicas` this database has from the SingleStore object, and the number of pods the `leaf petset` has,
+
+```bash
+$ kubectl get sdb -n demo sample-sdb -o json | jq '.spec.topology.leaf.replicas'
+2
+$ kubectl get petset -n demo sample-sdb-leaf -o=jsonpath='{.spec.replicas}{"\n"}'
+2
+
+```
+
+Now let's connect to a singlestore instance and run a memsqlctl internal command to check the number of replicas,
+```bash
+$ kubectl exec -it -n demo sample-sdb-aggregator-0 -- bash
+Defaulted container "singlestore" out of: singlestore, singlestore-coordinator, singlestore-init (init)
+[memsql@sample-sdb-aggregator-0 /]$ memsqlctl show-cluster
++---------------------+--------------------------------------------------+------+--------------------+-----------+-----------+--------+--------------------+------------------------------+--------+-------------------+
+| Role | Host | Port | Availability Group | Pair Host | Pair Port | State | Opened Connections | Average Roundtrip Latency ms | NodeId | Master Aggregator |
++---------------------+--------------------------------------------------+------+--------------------+-----------+-----------+--------+--------------------+------------------------------+--------+-------------------+
+| Leaf | sample-sdb-leaf-0.sample-sdb-pods.demo.svc | 3306 | 1 | null | null | online | 2 | | 2 | |
+| Leaf | sample-sdb-leaf-1.sample-sdb-pods.demo.svc | 3306 | 1 | null | null | online | 3 | | 3 | |
+| Aggregator (Leader) | sample-sdb-aggregator-0.sample-sdb-pods.demo.svc | 3306 | | null | null | online | 1 | null | 1 | 1 |
++---------------------+--------------------------------------------------+------+--------------------+-----------+-----------+--------+--------------------+------------------------------+--------+-------------------+
+
+```
+
+From all the above outputs, we can see that the cluster now has `2` leaf replicas. That means we have successfully scaled down the `leaf replicas` of the SingleStore database.
+
+## Cleaning Up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+$ kubectl delete sdb -n demo sample-sdb
+$ kubectl delete singlestoreopsrequest -n demo sdbops-scale-horizontal-up sdbops-scale-horizontal-down
+```
\ No newline at end of file
diff --git a/docs/guides/singlestore/scaling/horizontal-scaling/overview/images/horizontal-scaling.svg b/docs/guides/singlestore/scaling/horizontal-scaling/overview/images/horizontal-scaling.svg
new file mode 100644
index 0000000000..6d09610348
--- /dev/null
+++ b/docs/guides/singlestore/scaling/horizontal-scaling/overview/images/horizontal-scaling.svg
@@ -0,0 +1,104 @@
+
diff --git a/docs/guides/singlestore/scaling/horizontal-scaling/overview/index.md b/docs/guides/singlestore/scaling/horizontal-scaling/overview/index.md
new file mode 100644
index 0000000000..ade92b2c09
--- /dev/null
+++ b/docs/guides/singlestore/scaling/horizontal-scaling/overview/index.md
@@ -0,0 +1,54 @@
+---
+title: SingleStore Horizontal Scaling Overview
+menu:
+ docs_{{ .version }}:
+ identifier: guides-sdb-scaling-horizontal-overview
+ name: Overview
+ parent: guides-sdb-scaling-horizontal
+ weight: 10
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# SingleStore Horizontal Scaling
+
+This guide will give an overview of how KubeDB Ops Manager scales a `SingleStore` cluster up or down.
+
+## Before You Begin
+
+- You should be familiar with the following `KubeDB` concepts:
+ - [SingleStore](/docs/guides/singlestore/concepts/singlestore.md)
+ - [SingleStoreOpsRequest](/docs/guides/singlestore/concepts/opsrequest.md)
+
+## How Horizontal Scaling Process Works
+
+The following diagram shows how KubeDB Ops Manager scales up or down `SingleStore` database components. Open the image in a new tab to see the enlarged version.
+
+
+
+The Horizontal scaling process consists of the following steps:
+
+1. At first, a user creates a `SingleStore` Custom Resource (CR).
+
+2. `KubeDB` Provisioner operator watches the `SingleStore` CR.
+
+3. When the operator finds a `SingleStore` CR, it creates required number of `PetSets` and related necessary stuff like secrets, services, etc.
+
+4. Then, in order to scale the `SingleStore` database, the user creates a `SingleStoreOpsRequest` CR with the desired information (a minimal sketch is shown after this list).
+
+5. `KubeDB` Ops-manager operator watches the `SingleStoreOpsRequest` CR.
+
+6. When it finds a `SingleStoreOpsRequest` CR, it pauses the `SingleStore` object which is referred from the `SingleStoreOpsRequest`. So, the `KubeDB` Provisioner operator doesn't perform any operations on the `SingleStore` object during the horizontal scaling process.
+
+7. Then the `KubeDB` Ops-manager operator will scale the related PetSet Pods to reach the expected number of replicas defined in the `SingleStoreOpsRequest` CR.
+
+8. After successfully scaling the replicas of the PetSet Pods, the `KubeDB` Ops-manager operator updates the number of replicas in the `SingleStore` object to reflect the updated state.
+
+9. After the successful scaling of the `SingleStore` replicas, the `KubeDB` Ops-manager operator resumes the `SingleStore` object so that the `KubeDB` Provisioner operator resumes its usual operations.
+
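+As referenced above, a minimal `SingleStoreOpsRequest` for horizontal scaling looks like the sketch below. The object name and replica count are placeholders; the step-by-step guide in the next docs uses these same fields:
+
+```yaml
+# Illustrative sketch only; the name and the replica count are placeholders.
+apiVersion: ops.kubedb.com/v1alpha1
+kind: SinglestoreOpsRequest
+metadata:
+  name: sdbops-scale-horizontal
+  namespace: demo
+spec:
+  type: HorizontalScaling
+  databaseRef:
+    name: sample-sdb        # the Singlestore object to scale
+  horizontalScaling:
+    leaf: 3                 # desired number of leaf replicas
+```
+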
+In the next docs, we are going to show a step by step guide on horizontal scaling of SingleStore database using `SingleStoreOpsRequest` CRD.
diff --git a/docs/guides/singlestore/scaling/vertical-scaling/_index.md b/docs/guides/singlestore/scaling/vertical-scaling/_index.md
new file mode 100644
index 0000000000..7d88aa0a8d
--- /dev/null
+++ b/docs/guides/singlestore/scaling/vertical-scaling/_index.md
@@ -0,0 +1,10 @@
+---
+title: Vertical Scaling
+menu:
+ docs_{{ .version }}:
+ identifier: guides-sdb-scaling-vertical
+ name: Vertical Scaling
+ parent: guides-sdb-scaling
+ weight: 20
+menu_name: docs_{{ .version }}
+---
\ No newline at end of file
diff --git a/docs/guides/singlestore/scaling/vertical-scaling/cluster/example/sample-sdb.yaml b/docs/guides/singlestore/scaling/vertical-scaling/cluster/example/sample-sdb.yaml
new file mode 100644
index 0000000000..437685dccf
--- /dev/null
+++ b/docs/guides/singlestore/scaling/vertical-scaling/cluster/example/sample-sdb.yaml
@@ -0,0 +1,52 @@
+apiVersion: kubedb.com/v1alpha2
+kind: Singlestore
+metadata:
+ name: sample-sdb
+ namespace: demo
+spec:
+ version: "8.7.10"
+ topology:
+ aggregator:
+ replicas: 1
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "600m"
+ requests:
+ memory: "2Gi"
+ cpu: "600m"
+ storage:
+ storageClassName: "longhorn"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
+ leaf:
+ replicas: 2
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "600m"
+ requests:
+ memory: "2Gi"
+ cpu: "600m"
+ storage:
+ storageClassName: "longhorn"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 10Gi
+ licenseSecret:
+ name: license-secret
+ storageType: Durable
+ deletionPolicy: WipeOut
\ No newline at end of file
diff --git a/docs/guides/singlestore/scaling/vertical-scaling/cluster/example/sdbops-vscale.yaml b/docs/guides/singlestore/scaling/vertical-scaling/cluster/example/sdbops-vscale.yaml
new file mode 100644
index 0000000000..661a44385e
--- /dev/null
+++ b/docs/guides/singlestore/scaling/vertical-scaling/cluster/example/sdbops-vscale.yaml
@@ -0,0 +1,18 @@
+apiVersion: ops.kubedb.com/v1alpha1
+kind: SinglestoreOpsRequest
+metadata:
+ name: sdbops-vscale
+ namespace: demo
+spec:
+ type: VerticalScaling
+ databaseRef:
+ name: sample-sdb
+ verticalScaling:
+ aggregator:
+ resources:
+ requests:
+ memory: "2500Mi"
+ cpu: "0.7"
+ limits:
+ memory: "2500Mi"
+ cpu: "0.7"
\ No newline at end of file
diff --git a/docs/guides/singlestore/scaling/vertical-scaling/cluster/index.md b/docs/guides/singlestore/scaling/vertical-scaling/cluster/index.md
new file mode 100644
index 0000000000..dab35d70ea
--- /dev/null
+++ b/docs/guides/singlestore/scaling/vertical-scaling/cluster/index.md
@@ -0,0 +1,226 @@
+---
+title: Vertical Scaling SingleStore Cluster
+menu:
+ docs_{{ .version }}:
+ identifier: guides-sdb-scaling-vertical-cluster
+ name: Vertical Scaling OpsRequest
+ parent: guides-sdb-scaling-vertical
+ weight: 30
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Vertical Scale SingleStore Cluster
+
+This guide will show you how to use `KubeDB` Enterprise operator to update the resources of a SingleStore cluster database.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Install `KubeDB` Community and Enterprise operator in your cluster following the steps [here](/docs/setup/README.md).
+
+- You should be familiar with the following `KubeDB` concepts:
+ - [SingleStore](/docs/guides/singlestore/concepts/singlestore.md)
+ - [Clustering](/docs/guides/singlestore/clustering/singlestore-clustering/)
+ - [SingleStoreOpsRequest](/docs/guides/singlestore/concepts/opsrequest.md)
+ - [Vertical Scaling Overview](/docs/guides/singlestore/scaling/vertical-scaling/overview/)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+## Apply Vertical Scaling on Cluster
+
+Here, we are going to deploy a `SingleStore` cluster using a supported version by `KubeDB` operator. Then we are going to apply vertical scaling on it.
+
+### Create SingleStore License Secret
+
+We need a SingleStore license to create a SingleStore database. So, ensure that you have acquired a license and then simply pass the license via a secret.
+
+```bash
+$ kubectl create secret generic -n demo license-secret \
+ --from-literal=username=license \
+ --from-literal=password='your-license-set-here'
+secret/license-secret created
+```
+
+### Deploy SingleStore Cluster
+
+In this section, we are going to deploy a SingleStore cluster database. Then, in the next section we will update the resources of the database using `SingleStoreOpsRequest` CRD. Below is the YAML of the `SingleStore` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Singlestore
+metadata:
+ name: sample-sdb
+ namespace: demo
+spec:
+ version: "8.7.10"
+ topology:
+ aggregator:
+ replicas: 1
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "600m"
+ requests:
+ memory: "2Gi"
+ cpu: "600m"
+ storage:
+ storageClassName: "longhorn"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
+ leaf:
+ replicas: 2
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "600m"
+ requests:
+ memory: "2Gi"
+ cpu: "600m"
+ storage:
+ storageClassName: "longhorn"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 10Gi
+ licenseSecret:
+ name: license-secret
+ storageType: Durable
+ deletionPolicy: WipeOut
+```
+
+Let's create the `SingleStore` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/singlestore/scaling/vertical-scaling/cluster/example/sample-sdb.yaml
+singlestore.kubedb.com/sample-sdb created
+```
+
+Now, wait until `sample-sdb` has status `Ready`. i.e,
+
+```bash
+$ kubectl get sdb -n demo
+NAME TYPE VERSION STATUS AGE
+sample-sdb kubedb.com/v1alpha2 8.7.10 Ready 101s
+```
+
+Let's check the Pod containers resources,
+
+```bash
+$ kubectl get pod -n demo sample-sdb-aggregator-0 -o json | jq '.spec.containers[].resources'
+{
+ "limits": {
+ "cpu": "600m",
+ "memory": "2Gi"
+ },
+ "requests": {
+ "cpu": "600m",
+ "memory": "2Gi"
+ }
+}
+
+```
+
+We are now ready to apply the `SingleStoreOpsRequest` CR to update the resources of this database.
+
+### Vertical Scaling
+
+Here, we are going to update the resources of the database to meet the desired resources after scaling.
+
+#### Create SingleStoreOpsRequest
+
+In order to update the resources of the database, we have to create a `SingleStoreOpsRequest` CR with our desired resources. Below is the YAML of the `SingleStoreOpsRequest` CR that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: SinglestoreOpsRequest
+metadata:
+ name: sdbops-vscale
+ namespace: demo
+spec:
+ type: VerticalScaling
+ databaseRef:
+ name: sample-sdb
+ verticalScaling:
+ aggregator:
+ resources:
+ requests:
+ memory: "2500Mi"
+ cpu: "0.7"
+ limits:
+ memory: "2500Mi"
+ cpu: "0.7"
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing vertical scaling operation on `sample-sdb` database.
+- `spec.type` specifies that we are performing `VerticalScaling` on our database.
+- `spec.verticalScaling.aggregator` specifies the desired resources of the `aggregator` nodes after scaling. You can also scale resources for the leaf nodes, standalone nodes, and the coordinator container.
+
+Let's create the `SingleStoreOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/singlestore/scaling/vertical-scaling/cluster/example/sdbops-vscale.yaml
+singlestoreopsrequest.ops.kubedb.com/sdbops-vscale created
+```
+
+#### Verify SingleStore Cluster resources updated successfully
+
+If everything goes well, `KubeDB` Enterprise operator will update the resources of `SingleStore` object and related `PetSets` and `Pods`.
+
+Let's wait for `SingleStoreOpsRequest` to be `Successful`. Run the following command to watch `SingleStoreOpsRequest` CR,
+
+```bash
+$ kubectl get singlestoreopsrequest -n demo
+NAME TYPE STATUS AGE
+sdbops-vscale VerticalScaling Successful 7m30s
+```
+
+We can see from the above output that the `SingleStoreOpsRequest` has succeeded. Now, we are going to verify from one of the Pod YAMLs whether the resources of the database have been updated to meet the desired state. Let's check,
+
+```bash
+$ kubectl get pod -n demo sample-sdb-aggregator-0 -o json | jq '.spec.containers[].resources'
+{
+ "limits": {
+ "cpu": "700m",
+ "memory": "2500Mi"
+ },
+ "requests": {
+ "cpu": "700m",
+ "memory": "2500Mi"
+ }
+}
+
+```
+
+The above output verifies that we have successfully scaled up the resources of the SingleStore database.
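+
+This guide only resized the aggregator nodes. As noted above, `spec.verticalScaling` also accepts blocks for the other node types; assuming the `leaf` block follows the same shape as `aggregator`, a request that resizes the leaf nodes could look like the sketch below, with placeholder resource values:
+
+```yaml
+# Illustrative sketch; the leaf block shape and resource values are assumptions.
+apiVersion: ops.kubedb.com/v1alpha1
+kind: SinglestoreOpsRequest
+metadata:
+  name: sdbops-vscale-leaf
+  namespace: demo
+spec:
+  type: VerticalScaling
+  databaseRef:
+    name: sample-sdb
+  verticalScaling:
+    leaf:
+      resources:
+        requests:
+          memory: "3Gi"
+          cpu: "800m"
+        limits:
+          memory: "3Gi"
+          cpu: "800m"
+```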
+
+## Cleaning Up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+$ kubectl delete sdb -n demo sample-sdb
+$ kubectl delete singlestoreopsrequest -n demo sdbops-vscale
+```
\ No newline at end of file
diff --git a/docs/guides/singlestore/scaling/vertical-scaling/overview/images/vertical-sacling.svg b/docs/guides/singlestore/scaling/vertical-scaling/overview/images/vertical-sacling.svg
new file mode 100644
index 0000000000..8f0e835339
--- /dev/null
+++ b/docs/guides/singlestore/scaling/vertical-scaling/overview/images/vertical-sacling.svg
@@ -0,0 +1,109 @@
+
diff --git a/docs/guides/singlestore/scaling/vertical-scaling/overview/index.md b/docs/guides/singlestore/scaling/vertical-scaling/overview/index.md
new file mode 100644
index 0000000000..ae12482b8c
--- /dev/null
+++ b/docs/guides/singlestore/scaling/vertical-scaling/overview/index.md
@@ -0,0 +1,52 @@
+---
+title: SingleStore Vertical Scaling Overview
+menu:
+ docs_{{ .version }}:
+ identifier: guides-sdb-scaling-vertical-overview
+ name: Overview
+ parent: guides-sdb-scaling-vertical
+ weight: 10
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# SingleStore Vertical Scaling
+
+This guide will give an overview of how KubeDB Ops Manager vertically scales `SingleStore`.
+
+## Before You Begin
+
+- You should be familiar with the following `KubeDB` concepts:
+ - [SingleStore](/docs/guides/singlestore/concepts/singlestore.md)
+ - [SingleStoreOpsRequest](/docs/guides/singlestore/concepts/opsrequest.md)
+
+The following diagram shows how KubeDB Ops Manager scales up or down `SingleStore` database components. Open the image in a new tab to see the enlarged version.
+
+
+
+The vertical scaling process consists of the following steps:
+
+1. At first, a user creates a `SingleStore` Custom Resource (CR).
+
+2. `KubeDB` Provisioner operator watches the `SingleStore` CR.
+
+3. When the operator finds a `SingleStore` CR, it creates required number of `PetSets` and related necessary stuff like secrets, services, etc.
+
+4. Then, in order to update the resources (for example `CPU`, `Memory`, etc.) of the `SingleStore` database, the user creates a `SingleStoreOpsRequest` CR with the desired information (a minimal sketch is shown after this list).
+
+5. `KubeDB` Ops-manager operator watches the `SingleStoreOpsRequest` CR.
+
+6. When it finds a `SingleStoreOpsRequest` CR, it halts the `SingleStore` object which is referred from the `SingleStoreOpsRequest`. So, the `KubeDB` Provisioner operator doesn't perform any operations on the `SingleStore` object during the vertical scaling process.
+
+7. Then the `KubeDB` Ops-manager operator will update resources of the PetSet Pods to reach desired state.
+
+8. After the successful update of the resources of the PetSet's replica, the `KubeDB` Ops-manager operator updates the `SingleStore` object to reflect the updated state.
+
+9. After the successful update of the `SingleStore` resources, the `KubeDB` Ops-manager operator resumes the `SingleStore` object so that the `KubeDB` Provisioner operator resumes its usual operations.
+
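+As referenced above, a minimal `SingleStoreOpsRequest` for vertical scaling looks like the sketch below. The resource values are placeholders; the step-by-step guide in the next docs uses these same fields:
+
+```yaml
+# Illustrative sketch only; resource values are placeholders.
+apiVersion: ops.kubedb.com/v1alpha1
+kind: SinglestoreOpsRequest
+metadata:
+  name: sdbops-vscale
+  namespace: demo
+spec:
+  type: VerticalScaling
+  databaseRef:
+    name: sample-sdb
+  verticalScaling:
+    aggregator:
+      resources:
+        requests:
+          memory: "2500Mi"
+          cpu: "0.7"
+        limits:
+          memory: "2500Mi"
+          cpu: "0.7"
+```
+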
+In the next docs, we are going to show a step by step guide on updating resources of SingleStore database using `SingleStoreOpsRequest` CRD.
\ No newline at end of file
diff --git a/docs/guides/singlestore/tls/_index.md b/docs/guides/singlestore/tls/_index.md
new file mode 100644
index 0000000000..9503da0132
--- /dev/null
+++ b/docs/guides/singlestore/tls/_index.md
@@ -0,0 +1,10 @@
+---
+title: TLS/SSL Encryption
+menu:
+ docs_{{ .version }}:
+ identifier: guides-sdb-tls
+ name: TLS/SSL Encryption
+ parent: guides-singlestore
+ weight: 45
+menu_name: docs_{{ .version }}
+---
diff --git a/docs/guides/singlestore/tls/configure/examples/issuer.yaml b/docs/guides/singlestore/tls/configure/examples/issuer.yaml
new file mode 100644
index 0000000000..8ffb97a846
--- /dev/null
+++ b/docs/guides/singlestore/tls/configure/examples/issuer.yaml
@@ -0,0 +1,8 @@
+apiVersion: cert-manager.io/v1
+kind: Issuer
+metadata:
+ name: sdb-issuer
+ namespace: demo
+spec:
+ ca:
+ secretName: sdb-ca
\ No newline at end of file
diff --git a/docs/guides/singlestore/tls/configure/examples/tls-cluster.yaml b/docs/guides/singlestore/tls/configure/examples/tls-cluster.yaml
new file mode 100644
index 0000000000..49a24c692e
--- /dev/null
+++ b/docs/guides/singlestore/tls/configure/examples/tls-cluster.yaml
@@ -0,0 +1,66 @@
+apiVersion: kubedb.com/v1alpha2
+kind: Singlestore
+metadata:
+ name: sdb-tls
+ namespace: demo
+spec:
+ version: "8.7.10"
+ topology:
+ aggregator:
+ replicas: 2
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "700m"
+ requests:
+ memory: "2Gi"
+ cpu: "700m"
+ storage:
+ storageClassName: "standard"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
+ leaf:
+ replicas: 1
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "700m"
+ requests:
+ memory: "2Gi"
+ cpu: "700m"
+ storage:
+ storageClassName: "standard"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 10Gi
+ licenseSecret:
+ name: license-secret
+ deletionPolicy: WipeOut
+ tls:
+ issuerRef:
+ apiGroup: cert-manager.io
+ kind: Issuer
+ name: sdb-issuer
+ certificates:
+ - alias: server
+ subject:
+ organizations:
+ - kubedb:server
+ dnsNames:
+ - localhost
+ ipAddresses:
+ - "127.0.0.1"
+
diff --git a/docs/guides/singlestore/tls/configure/index.md b/docs/guides/singlestore/tls/configure/index.md
new file mode 100644
index 0000000000..a1a555cd8c
--- /dev/null
+++ b/docs/guides/singlestore/tls/configure/index.md
@@ -0,0 +1,334 @@
+---
+title: TLS/SSL (Transport Encryption)
+menu:
+ docs_{{ .version }}:
+ identifier: guides-sdb-tls-configure
+ name: SingleStore TLS/SSL Configuration
+ parent: guides-sdb-tls
+ weight: 20
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Configure TLS/SSL in SingleStore
+
+`KubeDB` supports providing TLS/SSL encryption (via, `tls` mode) for `SingleStore`. This tutorial will show you how to use `KubeDB` to deploy a `SingleStore` database with TLS/SSL configuration.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Install [`cert-manager`](https://cert-manager.io/docs/installation/) v1.0.0 or later to your cluster to manage your SSL/TLS certificates.
+
+- Install `KubeDB` in your cluster following the steps [here](/docs/setup/README.md).
+
+- To keep things isolated, this tutorial uses a separate namespace called `demo` throughout this tutorial.
+
+ ```bash
+ $ kubectl create ns demo
+ namespace/demo created
+ ```
+
+> Note: YAML files used in this tutorial are stored in [docs/guides/singlestore/tls/configure/examples](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/singlestore/tls/configure/examples) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+### Deploy SingleStore database with TLS/SSL configuration
+
+As a prerequisite, we are first going to create an Issuer/ClusterIssuer, which is used to create certificates. Then we are going to deploy a SingleStore cluster that will be configured with these certificates by the `KubeDB` operator.
+
+### Create SingleStore License Secret
+
+We need a SingleStore license to create a SingleStore database. So, ensure that you have acquired a license and then simply pass the license via a secret.
+
+```bash
+$ kubectl create secret generic -n demo license-secret \
+ --from-literal=username=license \
+ --from-literal=password='your-license-set-here'
+secret/license-secret created
+```
+
+### Create Issuer/ClusterIssuer
+
+Now, we are going to create an example `Issuer` that will be used throughout the duration of this tutorial. Alternatively, you can follow this [cert-manager tutorial](https://cert-manager.io/docs/configuration/ca/) to create your own `Issuer`. By following the below steps, we are going to create our desired issuer,
+
+- Start off by generating our ca-certificates using openssl,
+
+```bash
+$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ./ca.key -out ./ca.crt -subj "/CN=memsql/O=kubedb"
+Generating a RSA private key
+...........................................................................+++++
+........................................................................................................+++++
+writing new private key to './ca.key'
+```
+
+- Create a secret using the certificate files we have just generated,
+
+```bash
+$ kubectl create secret tls sdb-ca \
+ --cert=ca.crt \
+ --key=ca.key \
+ --namespace=demo
+secret/sdb-ca created
+```
+
+Now, we are going to create an `Issuer` using the `sdb-ca` secret that holds the ca-certificate we have just created. Below is the YAML of the `Issuer` cr that we are going to create,
+
+```yaml
+apiVersion: cert-manager.io/v1
+kind: Issuer
+metadata:
+ name: sdb-issuer
+ namespace: demo
+spec:
+ ca:
+ secretName: sdb-ca
+```
+
+Let’s create the `Issuer` cr we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/singlestore/tls/configure/examples/issuer.yaml
+issuer.cert-manager.io/sdb-issuer created
+```
+
+### Deploy SingleStore Cluster with TLS/SSL configuration
+
+Here, our issuer `sdb-issuer` is ready to deploy a `SingleStore` cluster with TLS/SSL configuration. Below is the YAML for SingleStore Cluster that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Singlestore
+metadata:
+ name: sdb-tls
+ namespace: demo
+spec:
+ version: "8.7.10"
+ topology:
+ aggregator:
+ replicas: 2
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "700m"
+ requests:
+ memory: "2Gi"
+ cpu: "700m"
+ storage:
+ storageClassName: "standard"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
+ leaf:
+ replicas: 1
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "700m"
+ requests:
+ memory: "2Gi"
+ cpu: "700m"
+ storage:
+ storageClassName: "standard"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 10Gi
+ licenseSecret:
+ name: license-secret
+ deletionPolicy: WipeOut
+ tls:
+ issuerRef:
+ apiGroup: cert-manager.io
+ kind: Issuer
+ name: sdb-issuer
+ certificates:
+ - alias: server
+ subject:
+ organizations:
+ - kubedb:server
+ dnsNames:
+ - localhost
+ ipAddresses:
+ - "127.0.0.1"
+```
+
+Here,
+
+- `spec.tls.issuerRef` refers to the `sdb-issuer` issuer.
+
+- `spec.tls.certificates` gives you a lot of options to configure so that the certificate will be renewed and kept up to date.
+You can find more details [here](/docs/guides/singlestore/concepts/singlestore.md#spectls).
+
+Let’s create the `SingleStore` cr we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/singlestore/tls/configure/examples/tls-cluster.yaml
+singlestore.kubedb.com/sdb-tls created
+```
+
+**Wait for the database to be ready:**
+
+Now, wait for the `SingleStore` object to become `Ready`, and also wait for the `PetSet` and its pods to be created and reach the `Running` state,
+
+```bash
+$ kubectl get sdb,petset -n demo
+NAME TYPE VERSION STATUS AGE
+singlestore.kubedb.com/sdb-tls kubedb.com/v1alpha2 8.7.10 Ready 3m57s
+
+NAME AGE
+petset.apps.k8s.appscode.com/sdb-tls-aggregator 3m53s
+petset.apps.k8s.appscode.com/sdb-tls-leaf 3m50s
+```
+
+**Verify tls-secrets created successfully:**
+
+If everything goes well, you can see that our tls-secrets have been created, containing the server, client, and exporter certificates. The server tls-secret will be used for server configuration and the client tls-secret will be used for a secure connection.
+
+All tls-secrets are created by the `KubeDB` Ops Manager. The default tls-secret name is formed as _{singlestore-object-name}-{cert-alias}-cert_.
+
+Let's check that the tls-secrets have been created,
+
+```bash
+$ kubectl get secret -n demo | grep sdb-tls
+sdb-tls-client-cert kubernetes.io/tls 3 5m41s
+sdb-tls-root-cred kubernetes.io/basic-auth 2 5m41s
+sdb-tls-server-cert kubernetes.io/tls
+```
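+
+To double-check what was issued, you can decode the server certificate from the secret and inspect it with `openssl`; the secret name below follows the default naming convention shown above:
+
+```bash
+# Decode and inspect the issued server certificate (assumes the default secret name).
+$ kubectl get secret -n demo sdb-tls-server-cert -o jsonpath='{.data.tls\.crt}' \
+    | base64 -d | openssl x509 -noout -subject -issuer -enddate
+```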
+
+**Verify SingleStore configured with TLS/SSL:**
+
+Now, we are going to connect to the database for verifying the `SingleStore` server has configured with TLS/SSL encryption.
+
+Let's exec into the pod to verify TLS/SSL configuration,
+
+```bash
+$ kubectl exec -it -n demo sdb-tls-aggregator-0 -- bash
+Defaulted container "singlestore" out of: singlestore, singlestore-coordinator, singlestore-init (init)
+
+[memsql@sdb-tls-aggregator-0 /]$ ls etc/memsql/certs
+ca.crt client.crt client.key server.crt server.key
+
+[memsql@sdb-tls-aggregator-0 /]$ memsql -uroot -p$ROOT_PASSWORD
+singlestore-client: [Warning] Using a password on the command line interface can be insecure.
+Welcome to the MySQL monitor. Commands end with ; or \g.
+Your MySQL connection id is 237
+Server version: 5.7.32 SingleStoreDB source distribution (compatible; MySQL Enterprise & MySQL Commercial)
+
+Copyright (c) 2000, 2022, Oracle and/or its affiliates.
+
+Oracle is a registered trademark of Oracle Corporation and/or its
+affiliates. Other names may be trademarks of their respective
+owners.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+singlestore> show variables like '%ssl%';
++---------------------------------+------------------------------+
+| Variable_name | Value |
++---------------------------------+------------------------------+
+| default_user_require_ssl | OFF |
+| exporter_ssl_ca | |
+| exporter_ssl_capath | |
+| exporter_ssl_cert | |
+| exporter_ssl_key | |
+| exporter_ssl_key_passphrase | [redacted] |
+| have_openssl | ON |
+| have_ssl | ON |
+| jwks_ssl_ca_certificate | |
+| node_replication_ssl_only | OFF |
+| openssl_version | 805306480 |
+| processlist_rpc_json_max_size | 2048 |
+| ssl_ca | /etc/memsql/certs/ca.crt |
+| ssl_capath | |
+| ssl_cert | /etc/memsql/certs/server.crt |
+| ssl_cipher | |
+| ssl_fips_mode | OFF |
+| ssl_key | /etc/memsql/certs/server.key |
+| ssl_key_passphrase | [redacted] |
+| ssl_last_reload_attempt_time | |
+| ssl_last_successful_reload_time | |
++---------------------------------+------------------------------+
+21 rows in set (0.00 sec)
+singlestore> exit
+Bye
+
+```
+
+The above output shows that the `SingleStore` server is configured with TLS/SSL. You can also see that the `.crt` and `.key` files for the client and server are stored in the `/etc/memsql/certs/` directory.
+
+**Verify secure connection for SSL required user:**
+
+Now, you can create an SSL required user that will be used to connect to the database with a secure connection.
+
+Let's connect to the database server with a secure connection,
+
+```bash
+$ kubectl exec -it -n demo sdb-tls-aggregator-0 -- bash
+Defaulted container "singlestore" out of: singlestore, singlestore-coordinator, singlestore-init (init)
+[memsql@sdb-tls-aggregator-0 /]$ memsql -uroot -p$ROOT_PASSWORD
+singlestore-client: [Warning] Using a password on the command line interface can be insecure.
+Welcome to the MySQL monitor. Commands end with ; or \g.
+Your MySQL connection id is 316
+Server version: 5.7.32 SingleStoreDB source distribution (compatible; MySQL Enterprise & MySQL Commercial)
+
+Copyright (c) 2000, 2022, Oracle and/or its affiliates.
+
+Oracle is a registered trademark of Oracle Corporation and/or its
+affiliates. Other names may be trademarks of their respective
+owners.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+singlestore> CREATE USER 'new_user'@'localhost' IDENTIFIED BY '1234' REQUIRE SSL;
+Query OK, 0 rows affected (0.05 sec)
+
+singlestore> FLUSH PRIVILEGES;
+Query OK, 0 rows affected (0.00 sec)
+
+singlestore> exit
+Bye
+
+# accessing the database server newly created user with certificates
+[memsql@sdb-tls-aggregator-0 /]$ memsql -unew_user -p1234 --ssl-ca=/etc/memsql/certs/ca.crt --ssl-cert=/etc/memsql/certs/server.crt --ssl-key=/etc/memsql/certs/server.key
+singlestore-client: [Warning] Using a password on the command line interface can be insecure.
+Welcome to the MySQL monitor. Commands end with ; or \g.
+Your MySQL connection id is 462
+Server version: 5.7.32 SingleStoreDB source distribution (compatible; MySQL Enterprise & MySQL Commercial)
+
+Copyright (c) 2000, 2022, Oracle and/or its affiliates.
+
+Oracle is a registered trademark of Oracle Corporation and/or its
+affiliates. Other names may be trademarks of their respective
+owners.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+singlestore> exit;
+Bye
+
+```
+
+From the above output, you can see that we can only access the database securely by using the client certificate; otherwise, it shows "Access denied". Our client certificate is stored in the `/etc/memsql/certs/` directory.
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+$ kubectl delete sdb -n demo sdb-tls
+singlestore.kubedb.com "sdb-tls" deleted
+$ kubectl delete ns demo
+namespace "demo" deleted
+```
\ No newline at end of file
diff --git a/docs/guides/singlestore/tls/overview/images/sdb-tls.svg b/docs/guides/singlestore/tls/overview/images/sdb-tls.svg
new file mode 100644
index 0000000000..18c72067a9
--- /dev/null
+++ b/docs/guides/singlestore/tls/overview/images/sdb-tls.svg
@@ -0,0 +1,116 @@
+
diff --git a/docs/guides/singlestore/tls/overview/index.md b/docs/guides/singlestore/tls/overview/index.md
new file mode 100644
index 0000000000..0da1189929
--- /dev/null
+++ b/docs/guides/singlestore/tls/overview/index.md
@@ -0,0 +1,69 @@
+---
+title: SingleStore TLS/SSL Encryption Overview
+menu:
+ docs_{{ .version }}:
+ identifier: guides-sdb-tls-overview
+ name: Overview
+ parent: guides-sdb-tls
+ weight: 10
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# SingleStore TLS/SSL Encryption
+
+**Prerequisite :** To configure TLS/SSL in `SingleStore`, `KubeDB` uses `cert-manager` to issue certificates. So first you have to make sure that the cluster has `cert-manager` installed. To install `cert-manager` in your cluster, follow the steps [here](https://cert-manager.io/docs/installation/kubernetes/).
+
+To issue a certificate, the following cr of `cert-manager` is used:
+
+- `Issuer/ClusterIssuer`: Issuers and ClusterIssuers represent certificate authorities (CAs) that are able to generate signed certificates by honoring certificate signing requests. All cert-manager certificates require a referenced issuer that is in a ready condition to attempt to honor the request. You can learn more details [here](https://cert-manager.io/docs/concepts/issuer/).
+
+- `Certificate`: `cert-manager` has the concept of Certificates that define the desired x509 certificate which will be renewed and kept up to date. You can learn more details [here](https://cert-manager.io/docs/concepts/certificate/).
+
+**SingleStore CRD Specification:**
+
+KubeDB uses the following cr fields to enable SSL/TLS encryption in `SingleStore`.
+
+- `spec:`
+ - `tls:`
+ - `issuerRef`
+ - `certificates`
+
+Read about the fields in detail in the [singlestore concept](/docs/guides/singlestore/concepts/singlestore.md#spectls) page.
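+
+For orientation, the relevant portion of a `Singlestore` spec looks roughly like the sketch below; it mirrors the example used in the configuration guide, and the issuer name is illustrative:
+
+```yaml
+# Example shape of the tls section (issuer and certificate values are illustrative).
+spec:
+  tls:
+    issuerRef:
+      apiGroup: cert-manager.io
+      kind: Issuer
+      name: sdb-issuer
+    certificates:
+    - alias: server
+      subject:
+        organizations:
+        - kubedb:server
+      dnsNames:
+      - localhost
+      ipAddresses:
+      - "127.0.0.1"
+```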
+
+`KubeDB` uses the `issuer` or `clusterIssuer` referenced in the `tls.issuerRef` field, and the certificate specs provided in `tls.certificates`, to generate certificate secrets using the `Issuer/ClusterIssuer` specification. These certificate secrets, including `ca.crt`, `tls.crt`, and `tls.key`, are used to configure the `SingleStore` server, studio, exporter, etc.
+
+## How TLS/SSL configures in SingleStore
+
+The following figure shows how `KubeDB` enterprise is used to configure TLS/SSL in SingleStore. Open the image in a new tab to see the enlarged version.
+
+
+
+The process of deploying SingleStore with TLS/SSL configuration consists of the following steps:
+
+1. At first, a user creates an `Issuer/ClusterIssuer` cr.
+
+2. Then the user creates a `SingleStore` cr.
+
+3. `KubeDB` Provisioner operator watches for the `SingleStore` cr.
+
+4. When it finds one, it creates `Secret`, `Service`, etc. for the `SingleStore` database.
+
+5. `KubeDB` Ops Manager watches for `SingleStore`(5c), `Issuer/ClusterIssuer`(5b), `Secret` and `Service`(5a).
+
+6. When it finds all the resources(`SingleStore`, `Issuer/ClusterIssuer`, `Secret`, `Service`), it creates `Certificates` by using `tls.issuerRef` and `tls.certificates` field specification from `SingleStore` cr.
+
+7. `cert-manager` watches for certificates.
+
+8. When it finds one, it creates certificate secrets `tls-secrets`(server, client, secrets, etc.) that hold the actual self-signed certificate.
+
+9. `KubeDB` Provisioner operator watches for the Certificate secrets `tls-secrets`.
+
+10. When it finds all the tls-secret, it creates a `PetSet` so that SingleStore server is configured with TLS/SSL.
+
+In the next doc, we are going to show a step by step guide on how to configure a `SingleStore` database with TLS/SSL.
diff --git a/docs/guides/singlestore/update-version/_index.md b/docs/guides/singlestore/update-version/_index.md
new file mode 100644
index 0000000000..cfa345ebab
--- /dev/null
+++ b/docs/guides/singlestore/update-version/_index.md
@@ -0,0 +1,10 @@
+---
+title: Updating
+menu:
+ docs_{{ .version }}:
+ identifier: guides-sdb-updating
+ name: UpdateVersion
+ parent: guides-singlestore
+ weight: 42
+menu_name: docs_{{ .version }}
+---
diff --git a/docs/guides/singlestore/update-version/overview/images/sdb-version-update.svg b/docs/guides/singlestore/update-version/overview/images/sdb-version-update.svg
new file mode 100644
index 0000000000..26d68cac6b
--- /dev/null
+++ b/docs/guides/singlestore/update-version/overview/images/sdb-version-update.svg
@@ -0,0 +1,104 @@
+
diff --git a/docs/guides/singlestore/update-version/overview/index.md b/docs/guides/singlestore/update-version/overview/index.md
new file mode 100644
index 0000000000..80c06027a0
--- /dev/null
+++ b/docs/guides/singlestore/update-version/overview/index.md
@@ -0,0 +1,54 @@
+---
+title: Updating SingleStore Overview
+menu:
+ docs_{{ .version }}:
+ identifier: guides-sdb-updating-overview
+ name: Overview
+ parent: guides-sdb-updating
+ weight: 10
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Updating SingleStore Version Overview
+
+This guide will give you an overview of how KubeDB Ops Manager updates the version of the `SingleStore` database.
+
+## Before You Begin
+
+- You should be familiar with the following `KubeDB` concepts:
+ - [SingleStore](/docs/guides/singlestore/concepts/singlestore.md)
+ - [SingleStoreOpsRequest](/docs/guides/singlestore/concepts/opsrequest.md)
+
+## How update version Process Works
+
+The following diagram shows how KubeDB Ops Manager is used to update the version of `SingleStore`. Open the image in a new tab to see the enlarged version.
+
+
+
+The updating process consists of the following steps:
+
+1. At first, a user creates a `SingleStore` Custom Resource (CR).
+
+2. `KubeDB` Provisioner operator watches the `SingleStore` CR.
+
+3. When the operator finds a `SingleStore` CR, it creates required number of `PetSets` and related necessary stuff like secrets, services, etc.
+
+4. Then, in order to update the version of the `SingleStore` database, the user creates a `SingleStoreOpsRequest` CR with the desired version (a minimal sketch is shown after this list).
+
+5. `KubeDB` Ops-manager operator watches the `SingleStoreOpsRequest` CR.
+
+6. When it finds a `SingleStoreOpsRequest` CR, it halts the `SingleStore` object which is referred from the `SingleStoreOpsRequest`. So, the `KubeDB` Provisioner operator doesn't perform any operations on the `SingleStore` object during the updating process.
+
+7. By looking at the target version from the `SingleStoreOpsRequest` CR, the `KubeDB` Ops-manager operator updates the images of all the `PetSets`. After each image update, the operator performs some checks, such as whether the oplog is synced and the database size is almost the same.
+
+8. After successfully updating the `PetSets` and their `Pods` images, the `KubeDB` Ops-manager operator updates the image of the `SingleStore` object to reflect the updated state of the database.
+
+9. After successfully updating the `SingleStore` object, the `KubeDB` Ops-manager operator resumes the `SingleStore` object so that the `KubeDB` Provisioner operator can resume its usual operations.
+
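+As referenced above, a minimal `SingleStoreOpsRequest` for a version update looks like the sketch below; the target version must be one supported by KubeDB, and the step-by-step guide in the next doc uses these same fields:
+
+```yaml
+# Illustrative sketch only; the target version must be supported by KubeDB.
+apiVersion: ops.kubedb.com/v1alpha1
+kind: SinglestoreOpsRequest
+metadata:
+  name: sdb-update-patch
+  namespace: demo
+spec:
+  type: UpdateVersion
+  databaseRef:
+    name: sample-sdb
+  updateVersion:
+    targetVersion: "8.7.10"
+```
+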
+In the next doc, we are going to show a step-by-step guide on updating a SingleStore database using the update operation.
\ No newline at end of file
diff --git a/docs/guides/singlestore/update-version/sdb update-version opsrequest/examples/sample-sdb.yaml b/docs/guides/singlestore/update-version/sdb update-version opsrequest/examples/sample-sdb.yaml
new file mode 100644
index 0000000000..5efd9bf8dc
--- /dev/null
+++ b/docs/guides/singlestore/update-version/sdb update-version opsrequest/examples/sample-sdb.yaml
@@ -0,0 +1,52 @@
+apiVersion: kubedb.com/v1alpha2
+kind: Singlestore
+metadata:
+ name: sample-sdb
+ namespace: demo
+spec:
+ version: "8.5.30"
+ topology:
+ aggregator:
+ replicas: 1
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "600m"
+ requests:
+ memory: "2Gi"
+ cpu: "600m"
+ storage:
+ storageClassName: "longhorn"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
+ leaf:
+ replicas: 2
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "600m"
+ requests:
+ memory: "2Gi"
+ cpu: "600m"
+ storage:
+ storageClassName: "longhorn"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 10Gi
+ licenseSecret:
+ name: license-secret
+ storageType: Durable
+ deletionPolicy: WipeOut
\ No newline at end of file
diff --git a/docs/guides/singlestore/update-version/sdb update-version opsrequest/examples/sdbops-update.yaml b/docs/guides/singlestore/update-version/sdb update-version opsrequest/examples/sdbops-update.yaml
new file mode 100644
index 0000000000..0f4ab34f3a
--- /dev/null
+++ b/docs/guides/singlestore/update-version/sdb update-version opsrequest/examples/sdbops-update.yaml
@@ -0,0 +1,11 @@
+apiVersion: ops.kubedb.com/v1alpha1
+kind: SinglestoreOpsRequest
+metadata:
+ name: sdb-update-patch
+ namespace: demo
+spec:
+ type: UpdateVersion
+ databaseRef:
+ name: sample-sdb
+ updateVersion:
+ targetVersion: "8.7.10"
\ No newline at end of file
diff --git a/docs/guides/singlestore/update-version/sdb update-version opsrequest/index.md b/docs/guides/singlestore/update-version/sdb update-version opsrequest/index.md
new file mode 100644
index 0000000000..4593512230
--- /dev/null
+++ b/docs/guides/singlestore/update-version/sdb update-version opsrequest/index.md
@@ -0,0 +1,207 @@
+---
+title: Updating SingleStore Cluster
+menu:
+ docs_{{ .version }}:
+ identifier: guides-sdb-updating-cluster
+ name: Update Version OpsRequest
+ parent: guides-sdb-updating
+ weight: 30
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Update Version of SingleStore Cluster
+
+This guide will show you how to use `KubeDB` Ops-manager operator to update the version of `SingleStore` Cluster.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Install `KubeDB` Community and Ops-manager operator in your cluster following the steps [here](/docs/setup/README.md).
+
+- You should be familiar with the following `KubeDB` concepts:
+ - [SingleStore](/docs/guides/singlestore/concepts/singlestore.md)
+ - [Cluster](/docs/guides/singlestore/clustering/overview/)
+ - [SingleStoreOpsRequest](/docs/guides/singlestore/concepts/opsrequest.md)
+ - [Updating Overview](/docs/guides/singlestore/update-version/overview/)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+## Prepare SingleStore Cluster
+
+### Create SingleStore License Secret
+
+We need a SingleStore license to create a SingleStore database. So, ensure that you have acquired a license and then simply pass the license via a secret.
+
+```bash
+$ kubectl create secret generic -n demo license-secret \
+ --from-literal=username=license \
+ --from-literal=password='your-license-set-here'
+secret/license-secret created
+```
+
+Now, we are going to deploy a `SingleStore` cluster database with version `8.5.30`.
+
+### Deploy SingleStore cluster
+
+In this section, we are going to deploy a SingleStore Cluster. Then, in the next section we will update the version of the database using `SingleStoreOpsRequest` CRD. Below is the YAML of the `SingleStore` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Singlestore
+metadata:
+ name: sample-sdb
+ namespace: demo
+spec:
+ version: "8.5.30"
+ topology:
+ aggregator:
+ replicas: 1
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "600m"
+ requests:
+ memory: "2Gi"
+ cpu: "600m"
+ storage:
+ storageClassName: "longhorn"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
+ leaf:
+ replicas: 2
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "600m"
+ requests:
+ memory: "2Gi"
+ cpu: "600m"
+ storage:
+ storageClassName: "longhorn"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 10Gi
+ licenseSecret:
+ name: license-secret
+ storageType: Durable
+ deletionPolicy: WipeOut
+
+```
+
+Let's create the `SingleStore` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/singlestore/update-version/cluster/examples/sample-sdb.yaml
+singlestore.kubedb.com/sample-sdb created
+```
+
+Now, wait until `sample-sdb` has status `Ready`. i.e,
+
+```bash
+$ kubectl get sdb -n demo
+NAME TYPE VERSION STATUS AGE
+sample-sdb kubedb.com/v1alpha2 8.5.30 Ready 4m37s
+```
+
+We are now ready to apply the `SingleStoreOpsRequest` CR to update this database.
+
+### Update SingleStore Version
+
+Here, we are going to update `SingleStore` cluster from `8.5.30` to `8.7.10`.
+
+#### Create SingleStoreOpsRequest:
+
+In order to update the database cluster, we have to create a `SingleStoreOpsRequest` CR with your desired version that is supported by `KubeDB`. Below is the YAML of the `SingleStoreOpsRequest` CR that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: SinglestoreOpsRequest
+metadata:
+ name: sdb-update-patch
+ namespace: demo
+spec:
+ type: UpdateVersion
+ databaseRef:
+ name: sample-sdb
+ updateVersion:
+ targetVersion: "8.7.10"
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing this operation on the `sample-sdb` SingleStore database.
+- `spec.type` specifies that we are going to perform `UpdateVersion` on our database.
+- `spec.updateVersion.targetVersion` specifies the expected version of the database, `8.7.10`.
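+
+Before applying it, you can optionally confirm that the target version is available in your cluster; the resource name below assumes the standard KubeDB catalog CRD for SingleStore:
+
+```bash
+# List the SingleStore versions available in the KubeDB catalog (assumed resource name).
+$ kubectl get singlestoreversions
+```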
+
+Let's create the `SingleStoreOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/singlestore/update-version/cluster/examples/sdbops-update.yaml
+singlestoreopsrequest.ops.kubedb.com/sdb-update-patch created
+```
+
+#### Verify SingleStore version updated successfully
+
+If everything goes well, `KubeDB` Ops-manager operator will update the image of `SingleStore` object and related `PetSets` and `Pods`.
+
+Let's wait for `SingleStoreOpsRequest` to be `Successful`. Run the following command to watch `SingleStoreOpsRequest` CR,
+
+```bash
+$ kubectl get sdbops -n demo
+NAME TYPE STATUS AGE
+sdb-update-patch UpdateVersion Successful 3m46s
+```
+
+We can see from the above output that the `SingleStoreOpsRequest` has succeeded.
+
+Now, we are going to verify whether the `SingleStore` and the related `PetSets` and their `Pods` have the new version image. Let's check,
+
+```bash
+$ kubectl get sdb -n demo sample-sdb -o=jsonpath='{.spec.version}{"\n"}'
+8.7.10
+
+$ kubectl get petset -n demo sample-sdb-aggregator -o=jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
+ghcr.io/appscode-images/singlestore-node:alma-8.7.10-95e2357384
+
+$ kubectl get petset -n demo sample-sdb-leaf -o=jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
+ghcr.io/appscode-images/singlestore-node:alma-8.7.10-95e2357384
+
+$ kubectl get pods -n demo sample-sdb-aggregator-0 -o=jsonpath='{.spec.containers[0].image}{"\n"}'
+ghcr.io/appscode-images/singlestore-node:alma-8.7.10-95e2357384
+
+$ kubectl get pods -n demo sample-sdb-leaf-0 -o=jsonpath='{.spec.containers[0].image}{"\n"}'
+ghcr.io/appscode-images/singlestore-node:alma-8.7.10-95e2357384
+```
+
+You can see from the above that our `SingleStore` cluster has been updated to the new version. So, the update process has completed successfully.
+
+## Cleaning Up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+$ kubectl delete sdb -n demo sample-sdb
+$ kubectl delete singlestoreopsrequest -n demo sdb-update-patch
+```
\ No newline at end of file
diff --git a/docs/guides/singlestore/volume-expansion/_index.md b/docs/guides/singlestore/volume-expansion/_index.md
new file mode 100644
index 0000000000..a171ea223c
--- /dev/null
+++ b/docs/guides/singlestore/volume-expansion/_index.md
@@ -0,0 +1,10 @@
+---
+title: Volume Expansion
+menu:
+ docs_{{ .version }}:
+ identifier: guides-sdb-volume-expansion
+ name: Volume Expansion
+ parent: guides-singlestore
+ weight: 44
+menu_name: docs_{{ .version }}
+---
\ No newline at end of file
diff --git a/docs/guides/singlestore/volume-expansion/overview/images/volume-expansion.svg b/docs/guides/singlestore/volume-expansion/overview/images/volume-expansion.svg
new file mode 100644
index 0000000000..553336d631
--- /dev/null
+++ b/docs/guides/singlestore/volume-expansion/overview/images/volume-expansion.svg
@@ -0,0 +1,144 @@
+
diff --git a/docs/guides/singlestore/volume-expansion/overview/index.md b/docs/guides/singlestore/volume-expansion/overview/index.md
new file mode 100644
index 0000000000..56abd1e431
--- /dev/null
+++ b/docs/guides/singlestore/volume-expansion/overview/index.md
@@ -0,0 +1,56 @@
+---
+title: SingleStore Volume Expansion Overview
+menu:
+ docs_{{ .version }}:
+ identifier: guides-sdb-volume-expansion-overview
+ name: Overview
+ parent: guides-sdb-volume-expansion
+ weight: 10
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# SingleStore Volume Expansion
+
+This guide will give an overview of how KubeDB Ops Manager expands the volume of `SingleStore`.
+
+## Before You Begin
+
+- You should be familiar with the following `KubeDB` concepts:
+ - [SingleStore](/docs/guides/singlestore/concepts/singlestore.md)
+ - [SingleStoreOpsRequest](/docs/guides/singlestore/concepts/opsrequest.md)
+
+## How Volume Expansion Process Works
+
+The following diagram shows how KubeDB Ops Manager expands the volumes of `SingleStore` database components. Open the image in a new tab to see the enlarged version.
+
+
+
+The Volume Expansion process consists of the following steps:
+
+1. At first, a user creates a `SingleStore` Custom Resource (CR).
+
+2. `KubeDB` Provisioner operator watches the `SingleStore` CR.
+
+3. When the operator finds a `SingleStore` CR, it creates required `PetSet` and related necessary stuff like secrets, services, etc.
+
+4. The PetSet creates Persistent Volumes according to the Volume Claim Template provided in the PetSet configuration. These Persistent Volumes will be expanded by the `KubeDB` Ops-manager operator.
+
+5. Then, in order to expand the volume of the `SingleStore` database, the user creates a `SingleStoreOpsRequest` CR with the desired information (a minimal sketch is shown after this list).
+
+6. `KubeDB` Ops-manager operator watches the `SingleStoreOpsRequest` CR.
+
+7. When it finds a `SingleStoreOpsRequest` CR, it pauses the `SingleStore` object which is referred from the `SingleStoreOpsRequest`. So, the `KubeDB` Provisioner operator doesn't perform any operations on the `SingleStore` object during the volume expansion process.
+
+8. Then the `KubeDB` Ops-manager operator will expand the persistent volume to reach the expected size defined in the `SingleStoreOpsRequest` CR.
+
+9. After the successful expansion of the volumes of the related PetSet Pods, the `KubeDB` Ops-manager operator updates the new volume size in the `SingleStore` object to reflect the updated state.
+
+10. After the successful Volume Expansion of the `SingleStore`, the `KubeDB` Ops-manager operator resumes the `SingleStore` object so that the `KubeDB` Provisioner operator resumes its usual operations.
+
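+As referenced above, a minimal `SingleStoreOpsRequest` for volume expansion looks like the sketch below; the mode and sizes come from the example used in the next guide, and the requested sizes must be larger than the current ones:
+
+```yaml
+# Illustrative sketch only; sizes must be larger than the current volume sizes.
+apiVersion: ops.kubedb.com/v1alpha1
+kind: SinglestoreOpsRequest
+metadata:
+  name: sdb-offline-vol-expansion
+  namespace: demo
+spec:
+  type: VolumeExpansion
+  databaseRef:
+    name: sample-sdb
+  volumeExpansion:
+    mode: "Offline"
+    aggregator: 2Gi
+    leaf: 11Gi
+```
+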
+In the next docs, we are going to show a step by step guide on Volume Expansion of various SingleStore database using `SingleStoreOpsRequest` CRD.
diff --git a/docs/guides/singlestore/volume-expansion/sdb volume-expansion opsrequest/example/sample-sdb.yaml b/docs/guides/singlestore/volume-expansion/sdb volume-expansion opsrequest/example/sample-sdb.yaml
new file mode 100644
index 0000000000..df3fdd1cde
--- /dev/null
+++ b/docs/guides/singlestore/volume-expansion/sdb volume-expansion opsrequest/example/sample-sdb.yaml
@@ -0,0 +1,53 @@
+apiVersion: kubedb.com/v1alpha2
+kind: Singlestore
+metadata:
+ name: sample-sdb
+ namespace: demo
+spec:
+ version: "8.7.10"
+ topology:
+ aggregator:
+ replicas: 1
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "600m"
+ requests:
+ memory: "2Gi"
+ cpu: "600m"
+ storage:
+ storageClassName: "longhorn"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
+ leaf:
+ replicas: 2
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "600m"
+ requests:
+ memory: "2Gi"
+ cpu: "600m"
+ storage:
+ storageClassName: "longhorn"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 10Gi
+ licenseSecret:
+ name: license-secret
+ storageType: Durable
+ deletionPolicy: WipeOut
+
diff --git a/docs/guides/singlestore/volume-expansion/sdb volume-expansion opsrequest/example/sdb-offline-volume-expansion.yaml b/docs/guides/singlestore/volume-expansion/sdb volume-expansion opsrequest/example/sdb-offline-volume-expansion.yaml
new file mode 100644
index 0000000000..39b1598a8d
--- /dev/null
+++ b/docs/guides/singlestore/volume-expansion/sdb volume-expansion opsrequest/example/sdb-offline-volume-expansion.yaml
@@ -0,0 +1,13 @@
+apiVersion: ops.kubedb.com/v1alpha1
+kind: SinglestoreOpsRequest
+metadata:
+ name: sdb-offline-vol-expansion
+ namespace: demo
+spec:
+ type: VolumeExpansion
+ databaseRef:
+ name: sample-sdb
+ volumeExpansion:
+ mode: "Offline"
+ aggregator: 2Gi
+ leaf: 11Gi
\ No newline at end of file
diff --git a/docs/guides/singlestore/volume-expansion/sdb volume-expansion opsrequest/index.md b/docs/guides/singlestore/volume-expansion/sdb volume-expansion opsrequest/index.md
new file mode 100644
index 0000000000..a0edd23f02
--- /dev/null
+++ b/docs/guides/singlestore/volume-expansion/sdb volume-expansion opsrequest/index.md
@@ -0,0 +1,480 @@
+---
+title: SingleStore Volume Expansion
+menu:
+ docs_{{ .version }}:
+ identifier: guides-sdb-volume-expansion-volume-expansion
+ name: SingleStore Volume Expansion
+ parent: guides-sdb-volume-expansion
+ weight: 20
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# SingleStore Volume Expansion
+
+This guide will show you how to use the `KubeDB` Ops-manager operator to expand the volume of a SingleStore database.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster.
+
+- You must have a `StorageClass` that supports volume expansion.
+
+- Install `KubeDB` Provisioner and Ops-manager operator in your cluster following the steps [here](/docs/setup/README.md).
+
+- You should be familiar with the following `KubeDB` concepts:
+ - [SingleStore](/docs/guides/singlestore/concepts/singlestore.md)
+ - [SingleStoreOpsRequest](/docs/guides/singlestore/concepts/opsrequest.md)
+ - [Volume Expansion Overview](/docs/guides/singlestore/volume-expansion/overview)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+## Expand Volume of SingleStore
+
+Here, we are going to deploy a `SingleStore` cluster using a version supported by the `KubeDB` operator. Then we are going to apply a `SingleStoreOpsRequest` to expand its volume. The process of expanding a SingleStore `standalone` instance is the same as that of a SingleStore cluster.
+
+### Create SingleStore License Secret
+
+We need a SingleStore license to create a SingleStore database. Ensure that you have acquired a license, then pass it to the cluster as a Kubernetes secret.
+
+```bash
+$ kubectl create secret generic -n demo license-secret \
+ --from-literal=username=license \
+ --from-literal=password='your-license-set-here'
+secret/license-secret created
+```
+
+### Prepare SingleStore Database
+
+First, verify that your cluster has a StorageClass that supports volume expansion. Let's check,
+
+```bash
+$ kubectl get storageClass
+NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
+local-path (default) rancher.io/local-path Delete WaitForFirstConsumer false 6d2h
+longhorn (default) driver.longhorn.io Delete Immediate true 3d21h
+longhorn-static driver.longhorn.io Delete Immediate true 42m
+```
+
+Here, we will use the `longhorn` StorageClass for this tutorial, since its `ALLOWVOLUMEEXPANSION` field is `true`.
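+
+If you want to check the expansion capability of a specific StorageClass directly, an optional field lookup like the one below may help:
+
+```bash
+# Print the allowVolumeExpansion field of the longhorn StorageClass; it should be "true".
+$ kubectl get storageclass longhorn -o jsonpath='{.allowVolumeExpansion}'
+true
+```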
+
+Now, we are going to deploy a `SingleStore` database of version `8.7.10` with 1 `aggregator` and 2 `leaf` replicas.
+
+### Deploy SingleStore
+
+In this section, we are going to deploy a SingleStore cluster with a 1GB volume for the `aggregator` nodes and a 10GB volume for the `leaf` nodes. Then, in the next section, we will expand the `aggregator` volume to 2GB and the `leaf` volume to 11GB using the `SingleStoreOpsRequest` CRD. Below is the YAML of the `SingleStore` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Singlestore
+metadata:
+ name: sample-sdb
+ namespace: demo
+spec:
+ version: "8.7.10"
+ topology:
+ aggregator:
+ replicas: 1
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "600m"
+ requests:
+ memory: "2Gi"
+ cpu: "600m"
+ storage:
+ storageClassName: "longhorn"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
+ leaf:
+ replicas: 2
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "600m"
+ requests:
+ memory: "2Gi"
+ cpu: "600m"
+ storage:
+ storageClassName: "longhorn"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 10Gi
+ licenseSecret:
+ name: license-secret
+ storageType: Durable
+ deletionPolicy: WipeOut
+```
+
+Let's create the `SingleStore` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/singlestore/volume-expansion/volume-expansion/example/sample-sdb.yaml
+singlestore.kubedb.com/sample-sdb created
+```
+
+Now, wait until `sample-sdb` has status `Ready`, i.e.,
+
+```bash
+$ kubectl get sdb -n demo
+NAME TYPE VERSION STATUS AGE
+sample-sdb kubedb.com/v1alpha2 8.7.10 Ready 4m25s
+
+```
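+
+Instead of polling manually, you can also block until the database reports `Ready`. The command below is a sketch and assumes the readiness phase is exposed at `.status.phase` of the `SingleStore` CR:
+
+```bash
+# Wait up to 10 minutes for the SingleStore object to reach the Ready phase.
+$ kubectl wait --for=jsonpath='{.status.phase}'=Ready sdb/sample-sdb -n demo --timeout=10m
+```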
+
+Let's check the volume size from the PetSets and from the persistent volumes,
+
+```bash
+$ kubectl get petset -n demo sample-sdb-aggregator -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
+"1Gi"
+
+$ kubectl get petset -n demo sample-sdb-leaf -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
+"10Gi"
+
+$ kubectl get pv -n demo
+NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS VOLUMEATTRIBUTESCLASS REASON AGE
+pvc-41cb892c-99fc-4211-a8c2-4e6f8a16c661 10Gi RWO Delete Bound demo/data-sample-sdb-leaf-0 longhorn 90s
+pvc-6e241724-6577-408e-b8de-9569d7d785c4 10Gi RWO Delete Bound demo/data-sample-sdb-leaf-1 longhorn 75s
+pvc-95ecc525-540b-4496-bf14-bfac901d73c4 1Gi RWO Delete Bound demo/data-sample-sdb-aggregator-0 longhorn 94s
+
+
+```
+
+You can see the `aggregator` PetSet has 1GB storage, and the capacity of the `aggregator` persistent volume is also 1GB.
+
+Similarly, the `leaf` PetSet has 10GB storage, and each `leaf` persistent volume also has a capacity of 10GB.
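+
+If you want to confirm the requested sizes on the PVCs themselves, a quick check like the following may help (the PVC names follow the `data-<petset-name>-<ordinal>` pattern shown in the `kubectl get pv` output above):
+
+```bash
+# List each PVC in the demo namespace along with its current capacity.
+$ kubectl get pvc -n demo -o custom-columns=NAME:.metadata.name,CAPACITY:.status.capacity.storage
+```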
+
+We are now ready to apply the `SingleStoreOpsRequest` CR to expand the volume of this database.
+
+### Volume Expansion
+
+Here, we are going to expand the volume of the SingleStore cluster.
+
+#### Create SingleStoreOpsRequest
+
+In order to expand the volume of the database, we have to create a `SingleStoreOpsRequest` CR with our desired volume size. Below is the YAML of the `SingleStoreOpsRequest` CR that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: SinglestoreOpsRequest
+metadata:
+ name: sdb-offline-vol-expansion
+ namespace: demo
+spec:
+ type: VolumeExpansion
+ databaseRef:
+ name: sample-sdb
+ volumeExpansion:
+ mode: "Offline"
+ aggregator: 2Gi
+ leaf: 11Gi
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing the volume expansion operation on the `sample-sdb` database.
+- `spec.type` specifies that we are performing `VolumeExpansion` on our database.
+- `spec.volumeExpansion.aggregator` and `spec.volumeExpansion.leaf` specify the desired volume sizes for the `aggregator` and `leaf` nodes.
+- `spec.volumeExpansion.mode` specifies the desired volume expansion mode (`Online` or `Offline`). The `longhorn` StorageClass supports `Offline` volume expansion.
+
+> **Note:** If the StorageClass you are using doesn't support `Online` volume expansion, try offline volume expansion by setting `spec.volumeExpansion.mode: "Offline"`.
+
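+For reference, if your StorageClass supports online expansion, an equivalent request could set `mode: "Online"` instead. The sketch below is illustrative only and is not applied in this tutorial:
+
+```yaml
+# Hypothetical online variant of the same volume expansion request.
+apiVersion: ops.kubedb.com/v1alpha1
+kind: SinglestoreOpsRequest
+metadata:
+  name: sdb-online-vol-expansion   # illustrative name
+  namespace: demo
+spec:
+  type: VolumeExpansion
+  databaseRef:
+    name: sample-sdb
+  volumeExpansion:
+    mode: "Online"
+    aggregator: 2Gi
+    leaf: 11Gi
+```
+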
+Let's create the offline `SingleStoreOpsRequest` CR (`sdb-offline-vol-expansion`) shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/singlestore/volume-expansion/volume-expansion/example/sdb-offline-volume-expansion.yaml
+singlestoreopsrequest.ops.kubedb.com/sdb-offline-vol-expansion created
+```
+
+#### Verify SingleStore volume expanded successfully
+
+If everything goes well, the `KubeDB` Ops-manager operator will update the volume size of the `SingleStore` object and the related `PetSets` and `Persistent Volumes`.
+
+Let's wait for `SingleStoreOpsRequest` to be `Successful`. Run the following command to watch `SingleStoreOpsRequest` CR,
+
+```bash
+$ kubectl get singlestoreopsrequest -n demo
+NAME TYPE STATUS AGE
+sdb-offline-vol-expansion VolumeExpansion Successful 13m
+```
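+
+If you prefer to follow the phase transitions live, the same resource can be watched with the `--watch` flag, for example:
+
+```bash
+# Stream status updates for the ops request until you interrupt with Ctrl+C.
+$ kubectl get singlestoreopsrequest -n demo sdb-offline-vol-expansion --watch
+```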
+
+We can see from the above output that the `SingleStoreOpsRequest` has succeeded. If we describe the `SingleStoreOpsRequest`, we will get an overview of the steps that were followed to expand the volume of the database.
+
+```bash
+$ kubectl describe sdbops -n demo sdb-offline-vol-expansion
+Name: sdb-offline-vol-expansion
+Namespace: demo
+Labels:       <none>
+Annotations:  <none>
+API Version: ops.kubedb.com/v1alpha1
+Kind: SinglestoreOpsRequest
+Metadata:
+ Creation Timestamp: 2024-10-15T08:49:11Z
+ Generation: 1
+ Resource Version: 12476
+ UID: a0e2f1c3-a6b7-4993-a012-2823c3a2675b
+Spec:
+ Apply: IfReady
+ Database Ref:
+ Name: sample-sdb
+ Type: VolumeExpansion
+ Volume Expansion:
+ Aggregator: 2Gi
+ Leaf: 11Gi
+ Mode: Offline
+Status:
+ Conditions:
+ Last Transition Time: 2024-10-15T08:49:11Z
+ Message: Singlestore ops-request has started to expand volume of singlestore nodes.
+ Observed Generation: 1
+ Reason: VolumeExpansion
+ Status: True
+ Type: VolumeExpansion
+ Last Transition Time: 2024-10-15T08:49:17Z
+ Message: Successfully paused database
+ Observed Generation: 1
+ Reason: DatabasePauseSucceeded
+ Status: True
+ Type: DatabasePauseSucceeded
+ Last Transition Time: 2024-10-15T08:49:42Z
+ Message: successfully deleted the petSets with orphan propagation policy
+ Observed Generation: 1
+ Reason: OrphanPetSetPods
+ Status: True
+ Type: OrphanPetSetPods
+ Last Transition Time: 2024-10-15T08:49:22Z
+ Message: get pet set; ConditionStatus:True
+ Observed Generation: 1
+ Status: True
+ Type: GetPetSet
+ Last Transition Time: 2024-10-15T08:49:22Z
+ Message: delete pet set; ConditionStatus:True
+ Observed Generation: 1
+ Status: True
+ Type: DeletePetSet
+ Last Transition Time: 2024-10-15T08:51:07Z
+ Message: successfully updated Aggregator node PVC sizes
+ Observed Generation: 1
+ Reason: UpdateAggregatorNodePVCs
+ Status: True
+ Type: UpdateAggregatorNodePVCs
+ Last Transition Time: 2024-10-15T08:53:32Z
+ Message: get pod; ConditionStatus:True
+ Observed Generation: 1
+ Status: True
+ Type: GetPod
+ Last Transition Time: 2024-10-15T08:49:47Z
+ Message: is ops req patch; ConditionStatus:True
+ Observed Generation: 1
+ Status: True
+ Type: IsOpsReqPatch
+ Last Transition Time: 2024-10-15T08:49:47Z
+ Message: delete pod; ConditionStatus:True
+ Observed Generation: 1
+ Status: True
+ Type: DeletePod
+ Last Transition Time: 2024-10-15T08:50:22Z
+ Message: get pvc; ConditionStatus:True
+ Observed Generation: 1
+ Status: True
+ Type: GetPvc
+ Last Transition Time: 2024-10-15T08:50:22Z
+ Message: is pvc patch; ConditionStatus:True
+ Observed Generation: 1
+ Status: True
+ Type: IsPvcPatch
+ Last Transition Time: 2024-10-15T08:53:52Z
+ Message: compare storage; ConditionStatus:True
+ Observed Generation: 1
+ Status: True
+ Type: CompareStorage
+ Last Transition Time: 2024-10-15T08:50:42Z
+ Message: create pod; ConditionStatus:True
+ Observed Generation: 1
+ Status: True
+ Type: CreatePod
+ Last Transition Time: 2024-10-15T08:50:47Z
+ Message: is running single store; ConditionStatus:False
+ Observed Generation: 1
+ Status: False
+ Type: IsRunningSingleStore
+ Last Transition Time: 2024-10-15T08:54:32Z
+ Message: successfully updated Leaf node PVC sizes
+ Observed Generation: 1
+ Reason: UpdateLeafNodePVCs
+ Status: True
+ Type: UpdateLeafNodePVCs
+ Last Transition Time: 2024-10-15T08:54:43Z
+ Message: successfully reconciled the Singlestore resources
+ Observed Generation: 1
+ Reason: UpdatePetSets
+ Status: True
+ Type: UpdatePetSets
+ Last Transition Time: 2024-10-15T08:54:48Z
+ Message: PetSet is recreated
+ Observed Generation: 1
+ Reason: ReadyPetSets
+ Status: True
+ Type: ReadyPetSets
+ Last Transition Time: 2024-10-15T08:54:48Z
+ Message: Successfully completed volumeExpansion for Singlestore
+ Observed Generation: 1
+ Reason: Successful
+ Status: True
+ Type: Successful
+ Observed Generation: 1
+ Phase: Successful
+Events:
+ Type Reason Age From Message
+ ---- ------ ---- ---- -------
+ Normal Starting 14m KubeDB Ops-manager Operator Start processing for SinglestoreOpsRequest: demo/sdb-offline-vol-expansion
+ Normal Starting 14m KubeDB Ops-manager Operator Pausing Singlestore database: demo/sample-sdb
+ Normal Successful 14m KubeDB Ops-manager Operator Successfully paused Singlestore database: demo/sample-sdb for SinglestoreOpsRequest: sdb-offline-vol-expansion
+ Warning get pet set; ConditionStatus:True 14m KubeDB Ops-manager Operator get pet set; ConditionStatus:True
+ Warning delete pet set; ConditionStatus:True 14m KubeDB Ops-manager Operator delete pet set; ConditionStatus:True
+ Warning get pet set; ConditionStatus:True 14m KubeDB Ops-manager Operator get pet set; ConditionStatus:True
+ Warning get pet set; ConditionStatus:True 14m KubeDB Ops-manager Operator get pet set; ConditionStatus:True
+ Warning delete pet set; ConditionStatus:True 14m KubeDB Ops-manager Operator delete pet set; ConditionStatus:True
+ Warning get pet set; ConditionStatus:True 14m KubeDB Ops-manager Operator get pet set; ConditionStatus:True
+ Normal OrphanPetSetPods 13m KubeDB Ops-manager Operator successfully deleted the petSets with orphan propagation policy
+ Warning get pod; ConditionStatus:True 13m KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning is ops req patch; ConditionStatus:True 13m KubeDB Ops-manager Operator is ops req patch; ConditionStatus:True
+ Warning delete pod; ConditionStatus:True 13m KubeDB Ops-manager Operator delete pod; ConditionStatus:True
+ Warning get pod; ConditionStatus:False 13m KubeDB Ops-manager Operator get pod; ConditionStatus:False
+ Warning get pod; ConditionStatus:True 13m KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 13m KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning is pvc patch; ConditionStatus:True 13m KubeDB Ops-manager Operator is pvc patch; ConditionStatus:True
+ Warning compare storage; ConditionStatus:False 13m KubeDB Ops-manager Operator compare storage; ConditionStatus:False
+ Warning get pod; ConditionStatus:True 13m KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 13m KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 13m KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 13m KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 13m KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 13m KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 12m KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 12m KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning compare storage; ConditionStatus:True 12m KubeDB Ops-manager Operator compare storage; ConditionStatus:True
+ Warning create pod; ConditionStatus:True 12m KubeDB Ops-manager Operator create pod; ConditionStatus:True
+ Warning is ops req patch; ConditionStatus:True 12m KubeDB Ops-manager Operator is ops req patch; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 12m KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning is running single store; ConditionStatus:False 12m KubeDB Ops-manager Operator is running single store; ConditionStatus:False
+ Warning get pod; ConditionStatus:True 12m KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 12m KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 12m KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Normal UpdateAggregatorNodePVCs 12m KubeDB Ops-manager Operator successfully updated Aggregator node PVC sizes
+ Warning get pod; ConditionStatus:True 12m KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning is ops req patch; ConditionStatus:True 12m KubeDB Ops-manager Operator is ops req patch; ConditionStatus:True
+ Warning delete pod; ConditionStatus:True 12m KubeDB Ops-manager Operator delete pod; ConditionStatus:True
+ Warning get pod; ConditionStatus:False 12m KubeDB Ops-manager Operator get pod; ConditionStatus:False
+ Warning get pod; ConditionStatus:True 11m KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 11m KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning is pvc patch; ConditionStatus:True 11m KubeDB Ops-manager Operator is pvc patch; ConditionStatus:True
+ Warning compare storage; ConditionStatus:False 11m KubeDB Ops-manager Operator compare storage; ConditionStatus:False
+ Warning get pod; ConditionStatus:True 11m KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 11m KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 11m KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 11m KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 11m KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 11m KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 11m KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 11m KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 11m KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 11m KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 11m KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 11m KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 11m KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 11m KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 11m KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 11m KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning compare storage; ConditionStatus:True 11m KubeDB Ops-manager Operator compare storage; ConditionStatus:True
+ Warning create pod; ConditionStatus:True 11m KubeDB Ops-manager Operator create pod; ConditionStatus:True
+ Warning is ops req patch; ConditionStatus:True 11m KubeDB Ops-manager Operator is ops req patch; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 11m KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning is running single store; ConditionStatus:False 11m KubeDB Ops-manager Operator is running single store; ConditionStatus:False
+ Warning get pod; ConditionStatus:True 11m KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 10m KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 10m KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 10m KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 10m KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning is ops req patch; ConditionStatus:True 10m KubeDB Ops-manager Operator is ops req patch; ConditionStatus:True
+ Warning delete pod; ConditionStatus:True 10m KubeDB Ops-manager Operator delete pod; ConditionStatus:True
+ Warning get pod; ConditionStatus:False 10m KubeDB Ops-manager Operator get pod; ConditionStatus:False
+ Warning get pod; ConditionStatus:True 10m KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 10m KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning is pvc patch; ConditionStatus:True 10m KubeDB Ops-manager Operator is pvc patch; ConditionStatus:True
+ Warning compare storage; ConditionStatus:False 10m KubeDB Ops-manager Operator compare storage; ConditionStatus:False
+ Warning get pod; ConditionStatus:True 10m KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 10m KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 9m55s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 9m55s KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 9m50s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 9m50s KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 9m45s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pvc; ConditionStatus:True 9m45s KubeDB Ops-manager Operator get pvc; ConditionStatus:True
+ Warning compare storage; ConditionStatus:True 9m45s KubeDB Ops-manager Operator compare storage; ConditionStatus:True
+ Warning create pod; ConditionStatus:True 9m45s KubeDB Ops-manager Operator create pod; ConditionStatus:True
+ Warning is ops req patch; ConditionStatus:True 9m45s KubeDB Ops-manager Operator is ops req patch; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 9m40s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 9m35s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 9m30s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 9m25s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 9m20s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 9m15s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Warning get pod; ConditionStatus:True 9m10s KubeDB Ops-manager Operator get pod; ConditionStatus:True
+ Normal UpdateLeafNodePVCs 9m5s KubeDB Ops-manager Operator successfully updated Leaf node PVC sizes
+ Normal UpdatePetSets 8m54s KubeDB Ops-manager Operator successfully reconciled the Singlestore resources
+ Warning get pet set; ConditionStatus:True 8m49s KubeDB Ops-manager Operator get pet set; ConditionStatus:True
+ Warning get pet set; ConditionStatus:True 8m49s KubeDB Ops-manager Operator get pet set; ConditionStatus:True
+ Normal ReadyPetSets 8m49s KubeDB Ops-manager Operator PetSet is recreated
+ Normal Starting 8m49s KubeDB Ops-manager Operator Resuming Singlestore database: demo/sample-sdb
+ Normal Successful 8m49s KubeDB Ops-manager Operator Successfully resumed Singlestore database: demo/sample-sdb for SinglestoreOpsRequest: sdb-offline-vol-expansion
+
+
+```
+
+Now, we are going to verify from the `PetSet` and the `Persistent Volumes` whether the volume of the database has expanded to meet the desired state. Let's check,
+
+```bash
+$ kubectl get petset -n demo sample-sdb-aggregator -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
+"2Gi"
+$ kubectl get petset -n demo sample-sdb-leaf -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
+"11Gi"
+
+
+$ kubectl get pv -n demo
+NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS VOLUMEATTRIBUTESCLASS REASON AGE
+pvc-0a4b35e6-988e-4088-ae41-852ad82c5800 2Gi RWO Delete Bound demo/data-sample-sdb-aggregator-0 longhorn 22m
+pvc-f6df5743-2bb1-4705-a2f7-be6cf7cdd7f1 11Gi RWO Delete Bound demo/data-sample-sdb-leaf-0 longhorn 22m
+pvc-f8fee59d-74dc-46ac-9973-ff1701a6837b 11Gi RWO Delete Bound demo/data-sample-sdb-leaf-1 longhorn 19m
+```
+
+The above output verifies that we have successfully expanded the volume of the SingleStore database.
+
+## Cleaning Up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+$ kubectl delete sdb -n demo sample-sdb
+$ kubectl delete singlestoreopsrequest -n demo sdb-offline-vol-expansion
+```
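+
+If the `demo` namespace was created only for this tutorial, you can optionally remove it as well once the resources above are gone:
+
+```bash
+$ kubectl delete ns demo
+```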
diff --git a/docs/guides/zookeeper/README.md b/docs/guides/zookeeper/README.md
index 8b2ad1a940..16d087f674 100644
--- a/docs/guides/zookeeper/README.md
+++ b/docs/guides/zookeeper/README.md
@@ -16,25 +16,25 @@ aliases:
> New to KubeDB? Please start [here](/docs/README.md).
## Supported ZooKeeper Features
-| Features | Availability |
-|---------------------------------------------------------------------------|:------------:|
-| Ensemble | ✓ |
-| Standalone | ✓ |
-| Authentication & Autorization | ✓ |
-| Custom Configuration | ✓ |
-| Grafana Dashboards | ✓ |
-| Externally manageable Auth Secret | ✓ |
-| Reconfigurable Health Checker | ✓ |
+| Features | Availability |
+|------------------------------------------------------------------------------------|:------------:|
+| Ensemble | ✓ |
+| Standalone | ✓ |
+| Authentication & Authorization                                                       | ✓            |
+| Custom Configuration | ✓ |
+| Grafana Dashboards | ✓ |
+| Externally manageable Auth Secret | ✓ |
+| Reconfigurable Health Checker | ✓ |
| TLS: Add, Remove, Update, Rotate ( [Cert Manager](https://cert-manager.io/docs/) ) | ✓ |
-| Automated Version update | ✓ |
-| Automatic Vertical Scaling | ✓ |
-| Automated Horizontal Scaling | ✓ |
-| Automated Volume Expansion | ✓ |
-| Backup/Recovery: Instant, Scheduled ([KubeStash](https://kubestash.com/)) | ✓ |
-| Persistent Volume | ✓ |
-| Initializing from Snapshot ( [Stash](https://stash.run/) ) | ✓ |
-| Builtin Prometheus Discovery | ✓ |
-| Using Prometheus operator | ✓ |
+| Automated Version update | ✓ |
+| Automatic Vertical Scaling | ✓ |
+| Automated Horizontal Scaling | ✓ |
+| Automated Volume Expansion | ✓ |
+| Backup/Recovery: Instant, Scheduled ([KubeStash](https://kubestash.com/)) | ✓ |
+| Persistent Volume | ✓ |
+| Initializing from Snapshot ( [Stash](https://stash.run/) ) | ✓ |
+| Builtin Prometheus Discovery | ✓ |
+| Using Prometheus operator | ✓ |
## Life Cycle of a ZooKeeper Object
diff --git a/docs/images/day-2-operation/mssqlserver/ms-horizontal-scaling.png b/docs/images/day-2-operation/mssqlserver/ms-horizontal-scaling.png
new file mode 100644
index 0000000000..ffd21f300c
Binary files /dev/null and b/docs/images/day-2-operation/mssqlserver/ms-horizontal-scaling.png differ
diff --git a/docs/images/day-2-operation/mssqlserver/ms-tls.png b/docs/images/day-2-operation/mssqlserver/ms-tls.png
new file mode 100644
index 0000000000..3f0abe1c6a
Binary files /dev/null and b/docs/images/day-2-operation/mssqlserver/ms-tls.png differ
diff --git a/docs/images/day-2-operation/mssqlserver/ms-tls.svg b/docs/images/day-2-operation/mssqlserver/ms-tls.svg
deleted file mode 100644
index 819643f928..0000000000
--- a/docs/images/day-2-operation/mssqlserver/ms-tls.svg
+++ /dev/null
@@ -1,4 +0,0 @@
-
-
-
-