Update dependency Rook to v1.12.7 #115

Merged 5 commits on Nov 13, 2023
4 changes: 2 additions & 2 deletions .cruft.json
@@ -1,13 +1,13 @@
 {
   "template": "https://github.com/projectsyn/commodore-component-template.git",
-  "commit": "d8afca0d957d69b362c2cb45e3f6faa13662dfe2",
+  "commit": "6559a10aa1b226aa978e2ce593e115c3db984a6c",
   "checkout": "main",
   "context": {
     "cookiecutter": {
       "name": "Rook Ceph",
       "slug": "rook-ceph",
       "parameter_key": "rook_ceph",
-      "test_cases": "defaults openshift4",
+      "test_cases": "defaults openshift4 cephfs",
       "add_lib": "n",
       "add_pp": "y",
       "add_golden": "y",
2 changes: 2 additions & 0 deletions .github/workflows/test.yaml
@@ -34,6 +34,7 @@ jobs:
       instance:
         - defaults
         - openshift4
+        - cephfs
     defaults:
       run:
         working-directory: ${{ env.COMPONENT_NAME }}
@@ -50,6 +51,7 @@
       instance:
         - defaults
         - openshift4
+        - cephfs
     defaults:
       run:
         working-directory: ${{ env.COMPONENT_NAME }}
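For orientation, a minimal sketch of the matrix strategy these fragments extend; the job name, layout, and make invocation are assumptions based on the usual Commodore component template, not shown in this diff:

----
# Sketch (assumed job layout): one job run per test instance.
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        instance:
          - defaults
          - openshift4
          - cephfs
    steps:
      - uses: actions/checkout@v4
      # compiles and tests the instance named by the matrix entry
      - run: make test -e instance=${{ matrix.instance }}
----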
2 changes: 1 addition & 1 deletion Makefile.vars.mk
@@ -57,4 +57,4 @@ KUBENT_IMAGE ?= ghcr.io/doitintl/kube-no-trouble:latest
 KUBENT_DOCKER ?= $(DOCKER_CMD) $(DOCKER_ARGS) $(root_volume) --entrypoint=/app/kubent $(KUBENT_IMAGE)

 instance ?= defaults
-test_instances = tests/defaults.yml tests/openshift4.yml
+test_instances = tests/defaults.yml tests/openshift4.yml tests/cephfs.yml
7 changes: 3 additions & 4 deletions class/defaults.yml
@@ -112,8 +112,7 @@ parameters:
       # extended here
       mirroring:
         enabled: false
-      mount_options:
-        discard: true
+      mount_options: {}
       storage_class_config:
         allowVolumeExpansion: true

@@ -225,7 +224,7 @@
     rook:
       registry: docker.io
       image: rook/ceph
-      tag: v1.11.11
+      tag: v1.12.7
     ceph:
       registry: quay.io
       image: ceph/ceph
@@ -241,7 +240,7 @@

     charts:
       # We do not support helm chart versions older than v1.7.0
-      rook-ceph: v1.11.11
+      rook-ceph: v1.12.7

     operator_helm_values:
       image:
4 changes: 4 additions & 0 deletions class/rook-ceph.yml
@@ -35,6 +35,10 @@ parameters:
         output_path: ${_base_directory}/manifests/${rook_ceph:images:rook:tag}/toolbox.yaml

     compile:
+      - input_type: remove
+        input_paths:
+          - rook-ceph/helmcharts/rook-ceph/${rook_ceph:charts:rook-ceph}/templates/securityContextConstraints.yaml
+        output_path: .
       - input_paths:
           - rook-ceph/component/app.jsonnet
         input_type: jsonnet
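The new compile step uses Kapitan's `remove` input type, which deletes the listed `input_paths` from the compiled output; here it drops the chart's rendered `securityContextConstraints.yaml`. A minimal standalone sketch, with an illustrative path rather than this component's:

----
# Sketch of a Kapitan `remove` input; the path below is illustrative.
compile:
  - input_type: remove
    input_paths:
      - mycomponent/helmcharts/mychart/templates/unwanted.yaml
    output_path: .
----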
9 changes: 7 additions & 2 deletions component/alertrules.libsonnet
@@ -141,10 +141,15 @@ local ignore_groups = std.set([
 local add_runbook_url = {
   rules: [
     if std.objectHas(r, 'alert') then
-      r {
+      local a =
+        if r.alert == 'CephPGUnavilableBlockingIO' then
+          r { alert: 'CephPGUnavailableBlockingIO' }
+        else
+          r;
+      a {
         annotations+: {
           [if !std.objectHas(r.annotations, 'runbook_url') then 'runbook_url']:
-            runbook(r.alert),
+            runbook(a.alert),
         },
       }
     else
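Upstream ships this alert under the misspelled name `CephPGUnavilableBlockingIO`; the mapping above renames it before the `runbook()` helper derives the `runbook_url` annotation from the corrected alert name. A sketch of the rendered rule; the URL format is an assumption, not taken from this diff:

----
# Sketch of a rendered rule after the rename (runbook URL format assumed).
- alert: CephPGUnavailableBlockingIO
  annotations:
    runbook_url: https://hub.syn.tools/rook-ceph/runbooks/CephPGUnavailableBlockingIO.html
----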
7 changes: 4 additions & 3 deletions docs/modules/ROOT/pages/references/parameters.adoc
@@ -323,8 +323,7 @@ fspool:
   # extended here
   mirroring:
     enabled: false
-  mount_options:
-    discard: true
+  mount_options: {}
   storage_class_config:
     allowVolumeExpansion: true
 ----
@@ -334,9 +333,11 @@ This configuration creates
 * One `CephFilesystem` resource named `fspool`.
 This CephFS instance is configured to have 3 replicas both for the metadata pool and its single data pool.
 By default, the CephFS instance is configured to assume that metadata will consume roughly 20% and data roughly 80% of the storage cluster.
-* A storage class which creates PVs on the CephFS instance, supports volume expansion and configures PVs to be mounted with `-o discard`.
+* A storage class which creates PVs on the CephFS instance and supports volume expansion.
 * A `VolumeSnapshotClass` associated with the storage class
+
+NOTE: CephFS doesn't require mount option `discard`, and ceph-csi v3.9.0+ will fail to mount any CephFS volumes if the storage class is configured with mount option `discard`.

 The key `data_pools` is provided to avoid having to manage a list of data pools directly in the hierarchy.
 The values of each key in `data_pools` are placed in the resulting CephFS resource's field `.spec.dataPools`
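As a sketch of the `data_pools` mechanism described above: a hierarchy entry keyed `pool0` (name and values illustrative) would surface in the rendered `CephFilesystem` roughly as follows.

----
# Hierarchy input (sketch; `pool0` and its values are illustrative):
fspool:
  data_pools:
    pool0:
      replicated:
        size: 3

# Resulting CephFilesystem fragment (abridged):
# spec:
#   dataPools:
#   - replicated:
#       size: 3
----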
@@ -4,7 +4,7 @@ include::partial$runbooks/contribution_note.adoc[]

 == icon:glasses[] Overview

-The filesystem's `max_mds` setting defined the number of MDS ranks in the filesystem.
+The filesystem's `max_mds` setting defines the number of MDS ranks in the filesystem.
 The current number of active MDS daemons is less than this setting.

 == icon:bug[] Steps for debugging
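For Rook-managed filesystems, `max_mds` is controlled through the `CephFilesystem` resource; a hedged sketch of the relevant fields, with illustrative names and values:

----
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: fspool  # illustrative name
spec:
  metadataServer:
    activeCount: 1      # Rook applies this value as the filesystem's max_mds
    activeStandby: true
----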
@@ -4,7 +4,7 @@ include::partial$runbooks/contribution_note.adoc[]

 == icon:glasses[] Overview

-This alert is triggered when the disk space used by a Storage Node will be full in less than 5 days assuming the average fill-up rate of the past 48 hours.
+This alert is triggered when the disk space used by a storage node will be full in less than 5 days assuming the average fill-up rate of the past 48 hours.
 You should increase the space available to the node.
 The default location for the store sits under /var/lib/rook/ as a `hostPath` volume.
@@ -4,7 +4,7 @@ include::partial$runbooks/contribution_note.adoc[]

 == icon:glasses[] Overview

-Node has a different MTU size than the median value on device.
+At least one network device on the node has a different MTU size than the median value for that device across all storage nodes.

 == icon:bug[] Steps for debugging
12 changes: 12 additions & 0 deletions docs/modules/ROOT/pages/runbooks/CephNodeNetworkBondDegraded.adoc
@@ -0,0 +1,12 @@
= Alert rule: CephNodeNetworkBondDegraded

include::partial$runbooks/contribution_note.adoc[]

== icon:glasses[] Overview

A bonded network device is degraded on the node.

== icon:bug[] Steps for debugging

// Add detailed steps to debug and resolve the issue

9 changes: 7 additions & 2 deletions docs/modules/ROOT/pages/runbooks/CephOSDFull.adoc
@@ -4,8 +4,13 @@ include::partial$runbooks/contribution_note.adoc[]

 == icon:glasses[] Overview

-This alert fires when the Ceph cluster utilization is higher than 85% of the cluster capacity, and the cluster is in read-only mode.
-To resolve this alert, unused data should be deleted or the cluster size must be increased.
+This alert fires when utilization of a Ceph storage device (disk) is higher than 85% of the device's capacity.
+Most likely, the Ceph cluster is in read-only mode when this alert fires.
+
+This alert may indicate that the cluster utilization has reached problematic levels.
+If this alert is triggered by high cluster utilization, unused data should be deleted or the cluster size must be increased.
+
+Otherwise, investigate why this particular device has higher utilization than the other storage devices in the Ceph cluster.

 == icon:bug[] Steps for debugging
@@ -1,10 +1,11 @@
 = Alert rule: CephPGUnavailableBlockingIO
+:page-aliases: runbooks/CephPGUnavilableBlockingIO.adoc

 include::partial$runbooks/contribution_note.adoc[]

 == icon:glasses[] Overview

-Data availability is reduced impacting the clusters ability to service I/O to some data.
+Data availability is reduced, impacting the cluster's ability to service I/O.
 One or more placement groups (PGs) are in a state that blocks IO.

 == icon:bug[] Steps for debugging
1 change: 1 addition & 0 deletions docs/modules/ROOT/partials/nav.adoc
@@ -24,6 +24,7 @@
 ** xref:runbooks/CephHealthWarning.adoc[CephHealthWarning]
 ** xref:runbooks/CephNodeDiskspaceWarning.adoc[CephNodeDiskspaceWarning]
 ** xref:runbooks/CephNodeInconsistentMTU.adoc[CephNodeInconsistentMTU]
+** xref:runbooks/CephNodeNetworkBondDegraded.adoc[CephNodeNetworkBondDegraded]
 ** xref:runbooks/CephNodeNetworkPacketDrops.adoc[CephNodeNetworkPacketDrops]
 ** xref:runbooks/CephNodeNetworkPacketErrors.adoc[CephNodeNetworkPacketErrors]
 ** xref:runbooks/CephNodeRootFilesystemFull.adoc[CephNodeRootFilesystemFull]
23 changes: 23 additions & 0 deletions tests/cephfs.yml
@@ -0,0 +1,23 @@
applications:
  - rancher-monitoring

parameters:
  kapitan:
    dependencies:
      - type: https
        source: https://raw.githubusercontent.com/projectsyn/component-storageclass/v1.0.0/lib/storageclass.libsonnet
        output_path: vendor/lib/storageclass.libsonnet

  storageclass:
    defaults: {}
    defaultClass: ""

  rook_ceph:
    ceph_cluster:
      rbd_enabled: false
      cephfs_enabled: true

  rancher_monitoring:
    alerts:
      ignoreNames: []
      customAnnotations: {}
21 changes: 21 additions & 0 deletions tests/golden/cephfs/rook-ceph/rook-ceph/00_namespaces.yaml
@@ -0,0 +1,21 @@
apiVersion: v1
kind: Namespace
metadata:
  annotations: {}
  labels:
    app.kubernetes.io/component: rook-ceph
    app.kubernetes.io/managed-by: commodore
    app.kubernetes.io/name: syn-rook-ceph-operator
    name: syn-rook-ceph-operator
  name: syn-rook-ceph-operator
---
apiVersion: v1
kind: Namespace
metadata:
  annotations: {}
  labels:
    app.kubernetes.io/component: rook-ceph
    app.kubernetes.io/managed-by: commodore
    app.kubernetes.io/name: syn-rook-ceph-cluster
    name: syn-rook-ceph-cluster
  name: syn-rook-ceph-cluster
115 changes: 115 additions & 0 deletions tests/golden/cephfs/rook-ceph/rook-ceph/01_aggregated_rbac.yaml
@@ -0,0 +1,115 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations: {}
  labels:
    app.kubernetes.io/component: rook-ceph
    app.kubernetes.io/managed-by: commodore
    app.kubernetes.io/name: rook-ceph-view
    name: rook-ceph-view
    rbac.authorization.k8s.io/aggregate-to-admin: 'true'
    rbac.authorization.k8s.io/aggregate-to-edit: 'true'
    rbac.authorization.k8s.io/aggregate-to-view: 'true'
  name: rook-ceph-view
rules:
- apiGroups:
  - ceph.rook.io
  resources:
  - cephblockpoolradosnamespaces
  - cephblockpools
  - cephbucketnotifications
  - cephbuckettopics
  - cephclients
  - cephclusters
  - cephfilesystemmirrors
  - cephfilesystems
  - cephfilesystemsubvolumegroups
  - cephnfss
  - cephobjectrealms
  - cephobjectstores
  - cephobjectstoreusers
  - cephobjectzonegroups
  - cephobjectzones
  - cephrbdmirrors
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - objectbucket.io
  resources:
  - objectbucketclaims
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations: {}
  labels:
    app.kubernetes.io/component: rook-ceph
    app.kubernetes.io/managed-by: commodore
    app.kubernetes.io/name: rook-ceph-edit
    name: rook-ceph-edit
    rbac.authorization.k8s.io/aggregate-to-admin: 'true'
    rbac.authorization.k8s.io/aggregate-to-edit: 'true'
  name: rook-ceph-edit
rules:
- apiGroups:
  - ceph.rook.io
  resources:
  - cephblockpoolradosnamespaces
  - cephblockpools
  - cephbucketnotifications
  - cephbuckettopics
  - cephclients
  - cephclusters
  - cephfilesystemmirrors
  - cephfilesystems
  - cephfilesystemsubvolumegroups
  - cephnfss
  - cephobjectrealms
  - cephobjectstores
  - cephobjectstoreusers
  - cephobjectzonegroups
  - cephobjectzones
  - cephrbdmirrors
  verbs:
  - create
  - delete
  - deletecollection
  - patch
  - update
- apiGroups:
  - objectbucket.io
  resources:
  - objectbucketclaims
  verbs:
  - create
  - delete
  - deletecollection
  - patch
  - update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations: {}
  labels:
    app.kubernetes.io/component: rook-ceph
    app.kubernetes.io/managed-by: commodore
    app.kubernetes.io/name: rook-ceph-cluster-reader
    name: rook-ceph-cluster-reader
    rbac.authorization.k8s.io/aggregate-to-cluster-reader: 'true'
  name: rook-ceph-cluster-reader
rules:
- apiGroups:
  - objectbucket.io
  resources:
  - objectbuckets
  verbs:
  - get
  - list
  - watch