Commit
Merge pull request k0sproject#3651 from juanluisvaladas/open-ebs
Make OpenEBS as a helm extension
juanluisvaladas authored Jan 23, 2024
2 parents 2a8c296 + 6aa4616 commit 7e7fbcb
Showing 22 changed files with 552 additions and 62 deletions.
12 changes: 7 additions & 5 deletions cmd/controller/controller.go
@@ -263,11 +263,13 @@ func (c *command) start(ctx context.Context) error {
}
nodeComponents.Add(ctx, leaderElector)

nodeComponents.Add(ctx, &applier.Manager{
K0sVars: c.K0sVars,
KubeClientFactory: adminClientFactory,
LeaderElector: leaderElector,
})
if !slices.Contains(c.DisableComponents, constant.ApplierManagerComponentName) {
nodeComponents.Add(ctx, &applier.Manager{
K0sVars: c.K0sVars,
KubeClientFactory: adminClientFactory,
LeaderElector: leaderElector,
})
}

if !c.SingleNode && !slices.Contains(c.DisableComponents, constant.ControlAPIComponentName) {
nodeComponents.Add(ctx, &controller.K0SControlAPI{
2 changes: 1 addition & 1 deletion docs/README.md
@@ -31,7 +31,7 @@ Before that mishap we had 4776 stargazers, making k0s one of the most popular Ku
- Scalable from a single node to large, [high-available](high-availability.md) clusters
- Supports custom [Container Network Interface (CNI)](networking.md) plugins (Kube-Router is the default, Calico is offered as a preconfigured alternative)
- Supports custom [Container Runtime Interface (CRI)](runtime.md) plugins (containerd is the default)
- Supports all Kubernetes storage options with [Container Storage Interface (CSI)](storage.md), includes [OpenEBS host-local storage provider](storage.md#bundled-openebs-storage)
- Supports all Kubernetes storage options with [Container Storage Interface (CSI)](storage.md), includes [OpenEBS host-local storage provider](examples/openebs.md)
- Supports a variety of [datastore backends](configuration.md#specstorage): etcd (default for multi-node clusters), SQLite (default for single node clusters), MySQL, and PostgreSQL
- Supports x86-64, ARM64 and ARMv7
- Includes [Konnectivity service](networking.md#controller-worker-communication), CoreDNS and Metrics Server
4 changes: 2 additions & 2 deletions docs/configuration.md
@@ -486,7 +486,7 @@ In the runtime the image names are calculated as `my.own.repo/calico/kube-contro
`spec.extensions.storage` controls the bundled storage provider.
With the default value `external`, no storage is deployed.
To enable [embedded host-local storage provider](storage.md#bundled-openebs-storage) use the following configuration:
To enable [embedded host-local storage provider](examples/openebs.md) use the following configuration:
```yaml
spec:
@@ -522,7 +522,7 @@ they need to fulfill their need for the control plane. Disabling the system
components happens through a command line flag for the controller process:
```text
--disable-components strings disable components (valid items: autopilot,control-api,coredns,csr-approver,endpoint-reconciler,helm,konnectivity-server,kube-controller-manager,kube-proxy,kube-scheduler,metrics-server,network-provider,node-role,system-rbac,worker-config)
--disable-components strings disable components (valid items: applier-manager,autopilot,control-api,coredns,csr-approver,endpoint-reconciler,helm,konnectivity-server,kube-controller-manager,kube-proxy,kube-scheduler,metrics-server,network-provider,node-role,system-rbac,worker-config)
```
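For example, a controller could be started with both the applier-manager and the metrics server disabled (a sketch; combine whatever components are relevant to your setup into one comma-separated list):

```shell
k0s controller --disable-components=applier-manager,metrics-server
```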
**Note:** As of k0s 1.26, the kubelet-config component has been replaced by the
184 changes: 184 additions & 0 deletions docs/examples/openebs.md
@@ -0,0 +1,184 @@
# OpenEBS

This tutorial covers installing OpenEBS as a Helm extension, both from scratch
and by migrating it from the bundled storage extension.

## Installing OpenEBS from scratch

**WARNING**: Do not configure OpenEBS as both a storage extension and a Helm
extension. This is considered an invalid configuration, and k0s will ignore it
entirely to prevent accidental upgrades or downgrades. The chart objects
defined in the API will still behave normally.

OpenEBS can be installed as a Helm chart by adding it as an extension to your configuration:

```yaml
extensions:
  helm:
    repositories:
    - name: openebs-internal
      url: https://openebs.github.io/charts
    charts:
    - name: openebs
      chartname: openebs-internal/openebs
      version: "3.9.0"
      namespace: openebs
      order: 1
      values: |
        localprovisioner:
          hostpathClass:
            enabled: true
            isDefaultClass: false
```

If you want OpenEBS to be your default storage class, set `isDefaultClass` to `true`.
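For example, the relevant part of the `values` block would then read:

```yaml
values: |
  localprovisioner:
    hostpathClass:
      enabled: true
      isDefaultClass: true
```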

## Migrating bundled OpenEBS to helm extension

The bundled OpenEBS extension is already a Helm extension installed as a
`chart.helm.k0sproject.io` object. For this reason, all we have to do is remove
the manifests and clean up the chart object. However, this must be done in a
specific order to prevent data loss.

**WARNING**: Not following the steps in the precise order presented in this
documentation may cause data loss.

The first step to perform the migration is to disable the `applier-manager`
component on all controllers. For each controller, restart the controller
with the flag `--disable-components=applier-manager`. If you already had this flag,
set it to `--disable-components=<previous value>,applier-manager`.
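How you pass the flag depends on how the controller is run. As a minimal sketch, assuming the controller is started directly from the command line (service-managed installations need the flag added to their service arguments instead):

```shell
# Controller previously started without --disable-components:
k0s controller --disable-components=applier-manager

# Controller previously started with --disable-components=metrics-server:
k0s controller --disable-components=metrics-server,applier-manager
```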

Once the `applier-manager` is disabled in every running controller, you need to modify
the configuration to use `external_storage` instead of `openebs_local_storage`.

If you are using [dynamic configuration](../dynamic-configuration.md), you can
change it with this command:

```shell
kubectl patch clusterconfig -n kube-system k0s --patch '{"spec":{"extensions":{"storage":{"type":"external_storage"}}}}' --type=merge
```

If you are using a static configuration file, change `spec.extensions.storage.type`
from `openebs_local_storage` to `external_storage` on all control plane nodes and
restart the control plane nodes one by one.
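After the change, the storage section of the static configuration would look like this (sketch; other fields are left untouched):

```yaml
spec:
  extensions:
    storage:
      type: external_storage
```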

When the configuration is set to `external_storage` and the servers are
restarted, you must manage OpenEBS as a chart object in the API:

```shell
kubectl get chart -n kube-system k0s-addon-chart-openebs -o yaml
```

First, remove the labels and annotations related to the stack applier:

```shell
k0s kc annotate -n kube-system chart k0s-addon-chart-openebs k0s.k0sproject.io/stack-checksum-
k0s kc label -n kube-system chart k0s-addon-chart-openebs k0s.k0sproject.io/stack-
```
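To double-check that the stack applier metadata is gone before continuing, you can inspect the chart object (an optional sanity check):

```shell
k0s kc get chart -n kube-system k0s-addon-chart-openebs --show-labels
k0s kc get chart -n kube-system k0s-addon-chart-openebs -o jsonpath='{.metadata.annotations}'
```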

After the annotations and labels are removed, remove the manifest file **on each
controller**. This file is located in
`<k0s-data-dir>/manifests/helm/<number>_helm_extension_openebs.yaml`, which in
most installations defaults to
`/var/lib/k0s/manifests/helm/0_helm_extension_openebs.yaml`.
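Assuming the default data directory, removing the manifest on a controller would look roughly like this (list the directory first to confirm the exact file name):

```shell
ls /var/lib/k0s/manifests/helm/
rm /var/lib/k0s/manifests/helm/0_helm_extension_openebs.yaml
```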

**WARNING**: Not removing the old manifest file from all controllers may cause
the manifest to be reapplied, reverting your changes and potentially causing
data loss.

Finally, we want to re-enable the `applier-manager` and restart all controllers
without the `--disable-components=applier-manager` flag.

Once the migration is complete, you'll be able to update the OpenEBS chart.
Let's take v3.9.0 as an example:

```shell
kubectl patch chart -n kube-system k0s-addon-chart-openebs --patch '{"spec":{"version":"3.9.0"}}' --type=merge
```
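Afterwards you can verify that the chart object reflects the new version and that the OpenEBS pods are rolled out (a sketch, assuming the chart is deployed in the `openebs` namespace as configured above):

```shell
k0s kc get chart -n kube-system k0s-addon-chart-openebs -o yaml
k0s kc get pods -n openebs
```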

## Usage

Once installed, the cluster will have two storage classes available for you to use:

```shell
k0s kubectl get storageclass
```

```shell
NAME               PROVISIONER        RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
openebs-device     openebs.io/local   Delete          WaitForFirstConsumer   false                  24s
openebs-hostpath   openebs.io/local   Delete          WaitForFirstConsumer   false                  24s
```

`openebs-hostpath` is the storage class that maps to `/var/openebs/local` on the host.

`openebs-device` is not configured out of the box; it can be configured with the [manifest deployer](../manifests.md) according to the [OpenEBS documentation](https://docs.openebs.io/).

### Example

Use the following manifests as an example of a pod with a mounted volume:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: openebs-hostpath
  resources:
    requests:
      storage: 5Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: default
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - image: nginx
          name: nginx
          volumeMounts:
            - name: persistent-storage
              mountPath: /var/lib/nginx
      volumes:
        - name: persistent-storage
          persistentVolumeClaim:
            claimName: nginx-pvc
```

```shell
k0s kubectl apply -f nginx.yaml
```

```shell
persistentvolumeclaim/nginx-pvc created
deployment.apps/nginx created
bash-5.1# k0s kc get pods
NAME                    READY   STATUS    RESTARTS   AGE
nginx-d95bcb7db-gzsdt   1/1     Running   0          30s
```

```shell
k0s kubectl get pv
```

```shell
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS       REASON   AGE
pvc-9a7fae2d-eb03-42c3-aaa9-1a807d5df12f   5Gi        RWO            Delete           Bound    default/nginx-pvc   openebs-hostpath            30s
```
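If you want to see where the data actually lives, the hostpath provisioner creates a directory named after the PersistentVolume under the base path on the worker node that runs the pod (a sketch, assuming the default `/var/openebs/local` base path; the directory name matches the PV name shown above):

```shell
ls /var/openebs/local/pvc-9a7fae2d-eb03-42c3-aaa9-1a807d5df12f
```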
69 changes: 37 additions & 32 deletions docs/storage.md
@@ -1,14 +1,46 @@
# Storage

k0s supports any volume provider that implements the [CSI specification](https://github.com/container-storage-interface/spec). For convenience, k0s comes bundled in with support for [OpenEBS local path provisioner](https://openebs.io/docs/concepts/localpv).
## CSI

k0s supports a wide range of different storage options by utilizing Container Storage Interface (CSI). All Kubernetes storage solutions are supported and users can easily select the storage that fits best for their needs.

When the storage solution implements CSI, Kubernetes can communicate with the storage to create and configure persistent volumes. This makes it easy to dynamically provision the requested volumes. It also expands the supported storage solutions beyond the previous generation of in-tree volume plugins. More information about the CSI concept is described on the [Kubernetes Blog](https://kubernetes.io/blog/2019/01/15/container-storage-interface-ga/).

![k0s storage](img/k0s_storage.png)

### Installing 3rd party storage solutions

Follow your storage driver's installation instructions. Note that by default the Kubelet installed by k0s uses a slightly different path for its working directory (`/var/lib/k0s/kubelet` instead of `/var/lib/kubelet`); the actual path can differ if you have set the `--data-dir` flag. Consult the CSI driver's configuration documentation on how to customize this path.
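As an illustration only: many CSI drivers distributed as Helm charts expose the kubelet directory as a chart value. The value name below is hypothetical and varies per driver, so check the driver's own chart documentation for the real key:

```yaml
# Hypothetical values.yaml override for a CSI driver chart;
# the actual key name depends on the driver.
kubeletDir: /var/lib/k0s/kubelet
```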

## Example storage solutions

Different Kubernetes storage solutions are explained in the [official Kubernetes storage documentation](https://kubernetes.io/docs/concepts/storage/volumes/). All of them can be used with k0s. Here are some popular ones:

- Rook-Ceph (Open Source)
- MinIO (Open Source)
- Gluster (Open Source)
- Longhorn (Open Source)
- Amazon EBS
- Google Persistent Disk
- Azure Disk
- Portworx

If you are looking for fault-tolerant storage with data replication, see the k0s tutorial for configuring Ceph storage with Rook [here](examples/rook-ceph.md).

## Bundled OpenEBS storage (deprecated)

Bundled OpenEBS was deprecated in favor of running it [as a helm extension](./examples/openebs.md);
this documentation is maintained as a reference for existing installations.

The choice of which CSI provider to use depends heavily on the use case and the infrastructure you're running on.
This was done for three reasons:

## Bundled OpenEBS storage
1. By installing it as a helm extension, users have more control and flexibility without adding complexity.
2. It allows users to choose the OpenEBS version independent of their k0s version.
3. It makes the k0s configuration more consistent.

k0s ships with a bundled OpenEBS installation which can be enabled using the [configuration file](./configuration.md)
For new installations or to migrate existing installations, please refer to the [OpenEBS extension page](./examples/openebs.md).

Use following configuration as an example:
The OpenEBS extension is enabled by setting [`spec.extensions.storage.type`](configuration.md#specextensionsstorage) to `openebs_local_storage`:

```yaml
spec:
@@ -101,30 +101,3 @@ k0s kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS       REASON   AGE
pvc-9a7fae2d-eb03-42c3-aaa9-1a807d5df12f   5Gi        RWO            Delete           Bound    default/nginx-pvc   openebs-hostpath            30s
```

## CSI

k0s supports a wide range of different storage options by utilizing Container Storage Interface (CSI). All Kubernetes storage solutions are supported and users can easily select the storage that fits best for their needs.

When the storage solution implements Container Storage Interface (CSI), containers can communicate with the storage for creation and configuration of persistent volumes. This makes it easy to dynamically provision the requested volumes. It also expands the supported storage solutions from the previous generation, in-tree volume plugins. More information about the CSI concept is described on the [Kubernetes Blog](https://kubernetes.io/blog/2019/01/15/container-storage-interface-ga/).

![k0s storage](img/k0s_storage.png)

### Installing 3rd party storage solutions

Follow your storage driver's installation instructions. Note that the Kubelet installed by k0s uses a slightly different path for its working directory (`/var/lib/k0s/kubelet` instead of `/var/lib/kubelet`). Consult the CSI driver's configuration documentation on how to customize this path.

## Example storage solutions

Different Kubernetes storage solutions are explained in the [official Kubernetes storage documentation](https://kubernetes.io/docs/concepts/storage/volumes/). All of them can be used with k0s. Here are some popular ones:

- Rook-Ceph (Open Source)
- MinIO (Open Source)
- Gluster (Open Source)
- Longhorn (Open Source)
- Amazon EBS
- Google Persistent Disk
- Azure Disk
- Portworx

If you are looking for a fault-tolerant storage with data replication, you can find a k0s tutorial for configuring Ceph storage with Rook [in here](examples/rook-ceph.md).
2 changes: 2 additions & 0 deletions inttest/Makefile
@@ -111,6 +111,8 @@ check-network-conformance-calico: TEST_PACKAGE=network-conformance

check-nllb: TIMEOUT=15m

check-openebs: TIMEOUT=7m

.PHONY: $(smoketests)
include Makefile.variables

1 change: 1 addition & 0 deletions inttest/Makefile.variables
@@ -52,6 +52,7 @@ smoketests := \
check-noderole \
check-noderole-no-taints \
check-noderole-single \
check-openebs \
check-psp \
check-singlenode \
check-statussocket \
2 changes: 1 addition & 1 deletion inttest/calico/calico_test.go
@@ -67,7 +67,7 @@ func (s *CalicoSuite) TestK0sGetsUp() {
s.AssertSomeKubeSystemPods(kc)

s.T().Log("waiting to see calico pods ready")
s.NoError(common.WaitForDaemonSet(s.Context(), kc, "calico-node"), "calico did not start")
s.NoError(common.WaitForDaemonSet(s.Context(), kc, "calico-node", "kube-system"), "calico did not start")
s.NoError(common.WaitForPodLogs(s.Context(), kc, "kube-system"))

createdTargetPod, err := kc.CoreV1().Pods("default").Create(s.Context(), &corev1.Pod{
2 changes: 1 addition & 1 deletion inttest/cli/cli_test.go
@@ -108,7 +108,7 @@ func (s *CliSuite) TestK0sCliKubectlAndResetCommand() {
s.AssertSomeKubeSystemPods(kc)

// Wait till we see all pods running, otherwise we get into weird timing issues and high probability of leaked containerd shim processes
require.NoError(common.WaitForDaemonSet(s.Context(), kc, "kube-proxy"))
require.NoError(common.WaitForDaemonSet(s.Context(), kc, "kube-proxy", "kube-system"))
require.NoError(common.WaitForKubeRouterReady(s.Context(), kc))
require.NoError(common.WaitForDeployment(s.Context(), kc, "coredns", "kube-system"))

8 changes: 4 additions & 4 deletions inttest/common/util.go
@@ -60,7 +60,7 @@ func Poll(ctx context.Context, condition wait.ConditionWithContextFunc) error {
// WaitForKubeRouterReady waits to see all kube-router pods healthy as long as
// the context isn't canceled.
func WaitForKubeRouterReady(ctx context.Context, kc *kubernetes.Clientset) error {
return WaitForDaemonSet(ctx, kc, "kube-router")
return WaitForDaemonSet(ctx, kc, "kube-router", "kube-system")
}

// WaitForCoreDNSReady waits to see all coredns pods healthy as long as the context isn't canceled.
@@ -146,10 +146,10 @@ func WaitForNodeReadyStatus(ctx context.Context, clients kubernetes.Interface, n
})
}

// WaitForDaemonset waits for the DaemonlSet with the given name to have
// WaitForDaemonSet waits for the DaemonSet with the given name in the given namespace to have
// as many ready replicas as defined in the spec.
func WaitForDaemonSet(ctx context.Context, kc *kubernetes.Clientset, name string) error {
return watch.DaemonSets(kc.AppsV1().DaemonSets("kube-system")).
func WaitForDaemonSet(ctx context.Context, kc *kubernetes.Clientset, name string, namespace string) error {
return watch.DaemonSets(kc.AppsV1().DaemonSets(namespace)).
WithObjectName(name).
WithErrorCallback(RetryWatchErrors(logfFrom(ctx))).
Until(ctx, func(ds *appsv1.DaemonSet) (bool, error) {
2 changes: 1 addition & 1 deletion inttest/customports/customports_test.go
@@ -131,7 +131,7 @@ func (s *customPortsSuite) TestControllerJoinsWithCustomPort() {
s.T().Log("waiting to see CNI pods ready")
s.Require().NoError(common.WaitForKubeRouterReady(s.Context(), kc), "calico did not start")
s.T().Log("waiting to see konnectivity-agent pods ready")
s.Require().NoError(common.WaitForDaemonSet(s.Context(), kc, "konnectivity-agent"), "konnectivity-agent did not start")
s.Require().NoError(common.WaitForDaemonSet(s.Context(), kc, "konnectivity-agent", "kube-system"), "konnectivity-agent did not start")

s.T().Log("waiting to get logs from pods")
s.Require().NoError(common.WaitForPodLogs(s.Context(), kc, "kube-system"))
2 changes: 1 addition & 1 deletion inttest/network-conformance/network_test.go
@@ -70,7 +70,7 @@ func (s *networkSuite) TestK0sGetsUp() {
daemonSetName = "kube-router"
}
s.T().Log("waiting to see CNI pods ready")
s.NoError(common.WaitForDaemonSet(s.Context(), kc, daemonSetName), fmt.Sprintf("%s did not start", daemonSetName))
s.NoError(common.WaitForDaemonSet(s.Context(), kc, daemonSetName, "kube-system"), fmt.Sprintf("%s did not start", daemonSetName))

restConfig, err := s.GetKubeConfig("controller0")
s.Require().NoError(err)