Upgrade Rook to 1.11.11 (#124)
* Upgrade Rook to 1.11.8

Remove configuration for removed machine disruption config on OCP

* Update runbooks

* Revert unnecessary runbook changes

---------

Co-authored-by: Stephan Feurer <[email protected]>
simu and Stephan Feurer authored Nov 2, 2023
1 parent a20e483 commit 2a75789
Showing 52 changed files with 2,822 additions and 11,596 deletions.
8 changes: 4 additions & 4 deletions class/defaults.yml
@@ -225,23 +225,23 @@ parameters:
rook:
registry: docker.io
image: rook/ceph
tag: v1.10.13
tag: v1.11.11
ceph:
registry: quay.io
image: ceph/ceph
tag: v17.2.6
cephcsi:
registry: quay.io
image: cephcsi/cephcsi
tag: v3.7.2
tag: v3.8.1
kubectl:
registry: docker.io
image: bitnami/kubectl
tag: '1.25.5@sha256:19dff0248157ae4cd320097ace1b5e0ffbb8bc7c7ea7fa3f13f73993fc6d7ee2'
tag: '1.26.10@sha256:056c0c09241d5c6ae97ff782ebee9e1cc73363583ff93067679702c72e11c77a'

charts:
# We do not support helm chart versions older than v1.7.0
rook-ceph: v1.10.13
rook-ceph: v1.11.11

operator_helm_values:
image:
4 changes: 0 additions & 4 deletions component/cephcluster.libsonnet
@@ -225,10 +225,6 @@ local cephcluster =
placement: {
all: helpers.nodeAffinity,
},
disruptionManagement: {
manageMachineDisruptionBudgets: on_openshift,
machineDisruptionBudgetNamespace: 'openshift-machine-api',
},
storage+: {
storageClassDeviceSets: std.filter(
function(it) it != null,
16 changes: 16 additions & 0 deletions docs/modules/ROOT/pages/runbooks/CephDaemonSlowOps.adoc
@@ -0,0 +1,16 @@
= Alert rule: CephDaemonSlowOps

include::partial$runbooks/contribution_note.adoc[]

== icon:glasses[] Overview

One or more OSD requests or monitor requests are taking a long time to process.
This alert might be an indication of extreme load, a slow storage device, or a software bug.

== icon:bug[] Steps for debugging

// Add detailed steps to debug and resolve the issue
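
A quick first check is to see which daemons report slow operations and what the current per-OSD latencies look like. This is a minimal sketch and assumes the standard Rook toolbox deployment `rook-ceph-tools` in the cluster namespace:

[source,console]
----
$ ceph_cluster_ns=syn-rook-ceph-cluster
$ kubectl -n "${ceph_cluster_ns}" exec -it deploy/rook-ceph-tools -- ceph health detail
$ kubectl -n "${ceph_cluster_ns}" exec -it deploy/rook-ceph-tools -- ceph osd perf
----

`ceph health detail` lists the daemons reporting slow ops, while `ceph osd perf` shows per-OSD latencies that can point at a slow device.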

== icon:book[] Upstream documentation

https://docs.ceph.com/en/latest/rados/operations/health-checks#slow-ops
@@ -5,6 +5,9 @@ include::partial$runbooks/contribution_note.adoc[]
== icon:glasses[] Overview

The device health module has determined that one or more devices will fail soon.
To review device status use `ceph device ls`. To show a specific device use `ceph device info <dev id>`.
Mark the OSD out so that data may migrate to other OSDs.
Once the OSD has drained, destroy the OSD, replace the device, and redeploy the OSD.
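
For example, open a shell in the Rook toolbox and run the `ceph` commands there (a sketch, assuming the standard toolbox deployment `rook-ceph-tools`; replace `<dev id>` and `<osd id>` with the values reported by the alert):

[source,console]
----
$ ceph_cluster_ns=syn-rook-ceph-cluster
$ kubectl -n "${ceph_cluster_ns}" exec -it deploy/rook-ceph-tools -- bash
$ ceph device ls
$ ceph device info <dev id>
$ ceph osd out osd.<osd id>
$ ceph osd safe-to-destroy osd.<osd id>
----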

== icon:bug[] Steps for debugging

@@ -7,7 +7,6 @@ include::partial$runbooks/contribution_note.adoc[]
The minimum number of standby daemons required by `standby_count_wanted` is less than the current number of standby daemons.
Adjust the standby count or increase the number of MDS daemons.


== icon:bug[] Steps for debugging

include::partial$runbooks/check_missing_mds.adoc[]
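
The configured standby count and the number of running MDS daemons can also be checked and adjusted from the Rook toolbox (a sketch; `<fs-name>` is the affected CephFS filesystem):

[source,console]
----
$ ceph_cluster_ns=syn-rook-ceph-cluster
$ kubectl -n "${ceph_cluster_ns}" exec -it deploy/rook-ceph-tools -- bash
$ ceph fs status
$ ceph fs get <fs-name>
$ ceph fs set <fs-name> standby_count_wanted <count>
----
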
@@ -4,7 +4,7 @@ include::partial$runbooks/contribution_note.adoc[]

== icon:glasses[] Overview

The filesystem's "max_mds" setting defined the number of MDS ranks in the filesystem.
The filesystem's `max_mds` setting defines the number of MDS ranks in the filesystem.
The current number of active MDS daemons is less than this setting.
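
The configured `max_mds` and the current number of active MDS daemons can be checked from the Rook toolbox, for example (a sketch; `<fs-name>` is the affected CephFS filesystem):

[source,console]
----
$ ceph_cluster_ns=syn-rook-ceph-cluster
$ kubectl -n "${ceph_cluster_ns}" exec -it deploy/rook-ceph-tools -- ceph fs status
$ kubectl -n "${ceph_cluster_ns}" exec -it deploy/rook-ceph-tools -- ceph fs get <fs-name>
----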

== icon:bug[] Steps for debugging
3 changes: 2 additions & 1 deletion docs/modules/ROOT/pages/runbooks/CephMgrModuleCrash.adoc
@@ -4,8 +4,9 @@ include::partial$runbooks/contribution_note.adoc[]

== icon:glasses[] Overview

One or more mgr modules have crashed and are yet to be acknowledged.
One or more mgr modules have crashed and have yet to be acknowledged.
A crashed module may impact functionality within the cluster.
Use the `ceph crash` command to determine which module has failed, and archive it to acknowledge the failure.
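
For example, from a shell in the Rook toolbox (a sketch, assuming the standard toolbox deployment `rook-ceph-tools`; `<crash id>` is an ID from the crash list):

[source,console]
----
$ ceph_cluster_ns=syn-rook-ceph-cluster
$ kubectl -n "${ceph_cluster_ns}" exec -it deploy/rook-ceph-tools -- bash
$ ceph crash ls
$ ceph crash info <crash id>
$ ceph crash archive <crash id>
----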

== icon:bug[] Steps for debugging

7 changes: 2 additions & 5 deletions docs/modules/ROOT/pages/runbooks/CephMonClockSkew.adoc
@@ -4,11 +4,8 @@ include::partial$runbooks/contribution_note.adoc[]

== icon:glasses[] Overview

The ceph monitors rely on a consistent time reference to maintain quorum and cluster consistency.
This event indicates that at least one of your MONs isn't sync'd correctly.
The Ceph monitors rely on closely synchronized time to maintain quorum and cluster consistency.
This alert indicates that time on at least one mon has drifted too far from the lead mon.

Ceph monitors rely on closely synchronized time to maintain quorum and cluster consistency.
This event indicates that the time on at least one mon has drifted too far from the lead mon.

== icon:bug[] Steps for debugging
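
The mon time synchronization status can be checked from the Rook toolbox (a sketch, assuming the standard toolbox deployment `rook-ceph-tools`):

[source,console]
----
$ ceph_cluster_ns=syn-rook-ceph-cluster
$ kubectl -n "${ceph_cluster_ns}" exec -it deploy/rook-ceph-tools -- ceph time-sync-status
$ kubectl -n "${ceph_cluster_ns}" exec -it deploy/rook-ceph-tools -- ceph health detail
----

Also verify that time synchronization (for example chrony or NTP) is healthy on the nodes running the mon pods.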

@@ -9,4 +9,3 @@ Node has a different MTU size than the median value on device.
== icon:bug[] Steps for debugging

// Add detailed steps to debug and resolve the issue

@@ -4,7 +4,7 @@ include::partial$runbooks/contribution_note.adoc[]

== icon:glasses[] Overview

Node experiences packet drop > 0.01% or > 10 packets/s on interface.
Node experiences packet drop > 0.5% or > 10 packets/s on interface.

== icon:bug[] Steps for debugging
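
One way to check the interface drop counters is a node debug pod on the affected node (a sketch; the debug pod shares the host network namespace, so the host interfaces are visible):

[source,console]
----
$ kubectl debug node/<node name> -it --image=busybox
$ cat /proc/net/dev
----

The `drop` columns show per-interface receive and transmit drops.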

8 changes: 4 additions & 4 deletions docs/modules/ROOT/pages/runbooks/CephOSDBackfillFull.adoc
@@ -4,10 +4,10 @@ include::partial$runbooks/contribution_note.adoc[]

== icon:glasses[] Overview

An OSD has reached it's BACKFILL FULL threshold.
This will prevent rebalance operations completing for some pools. Check the current capacity utilisation with 'ceph df'

To resolve, either add capacity to the cluster, or delete unwanted data
An OSD has reached the BACKFILL FULL threshold.
This will prevent rebalance operations from completing.
Use `ceph health detail` and `ceph osd df` to identify the problem.
To resolve, add capacity to the affected OSD's failure domain, restore down/out OSDs, or delete unwanted data.
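
For example, from the Rook toolbox (a sketch, assuming the standard toolbox deployment `rook-ceph-tools`):

[source,console]
----
$ ceph_cluster_ns=syn-rook-ceph-cluster
$ kubectl -n "${ceph_cluster_ns}" exec -it deploy/rook-ceph-tools -- ceph health detail
$ kubectl -n "${ceph_cluster_ns}" exec -it deploy/rook-ceph-tools -- ceph osd df
----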

== icon:bug[] Steps for debugging

4 changes: 3 additions & 1 deletion docs/modules/ROOT/pages/runbooks/CephOSDFlapping.adoc
@@ -4,7 +4,9 @@ include::partial$runbooks/contribution_note.adoc[]

== icon:glasses[] Overview

This alert fires if a Ceph OSD pod has restarted five or more times in the last five minutes.
This alert fires if a Ceph OSD pod was marked down and back up at least once a minute for 5 minutes.
This may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster network, or the public network if no cluster network is deployed.
Check the network stats on the listed host(s).
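
To identify the flapping OSD(s) and the nodes they run on, a sketch using the Rook toolbox (assumes the standard toolbox deployment `rook-ceph-tools` and the default Rook OSD pod label):

[source,console]
----
$ ceph_cluster_ns=syn-rook-ceph-cluster
$ kubectl -n "${ceph_cluster_ns}" exec -it deploy/rook-ceph-tools -- ceph health detail
$ kubectl -n "${ceph_cluster_ns}" exec -it deploy/rook-ceph-tools -- ceph osd tree
$ kubectl -n "${ceph_cluster_ns}" get pods -l app=rook-ceph-osd -o wide
----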

== icon:bug[] Steps for debugging

1 change: 0 additions & 1 deletion docs/modules/ROOT/pages/runbooks/CephOSDFull.adoc
@@ -9,7 +9,6 @@ To resolve this alert, unused data should be deleted or the cluster size must be

== icon:bug[] Steps for debugging

// Add detailed steps to debug and resolve the issue
See the how-to on xref:how-tos/scale-cluster.adoc[scaling a PVC-based Ceph cluster] for instructions to resize the cluster.

== icon:book[] Upstream documentation
@@ -4,9 +4,9 @@ include::partial$runbooks/contribution_note.adoc[]

== icon:glasses[] Overview

One or more OSDs have an internal inconsistency between the size of the physical device and it's metadata.
One or more OSDs have an internal inconsistency between metadata and the size of the device.
This could lead to the OSD(s) crashing in future.
You should redeploy the effected OSDs.
You should redeploy the affected OSDs.
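
`ceph health detail` lists the affected OSD(s). A sketch using the Rook toolbox (assuming the standard toolbox deployment `rook-ceph-tools`):

[source,console]
----
$ ceph_cluster_ns=syn-rook-ceph-cluster
$ kubectl -n "${ceph_cluster_ns}" exec -it deploy/rook-ceph-tools -- ceph health detail
----

Redeploy the affected OSDs one at a time, waiting for the cluster to return to `HEALTH_OK` in between.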

== icon:bug[] Steps for debugging

2 changes: 1 addition & 1 deletion docs/modules/ROOT/pages/runbooks/CephOSDNearFull.adoc
@@ -5,14 +5,14 @@ include::partial$runbooks/contribution_note.adoc[]
== icon:glasses[] Overview

This alert fires when utilization of a Ceph storage device (disk) is higher than 75% of the device's capacity.

This alert may indicate that the cluster utilization will soon reach problematic levels.
If this alert is caused by high cluster utilization, unused data should be deleted or the cluster size must be increased.

Otherwise, investigate why this particular device has higher utilization than the other storage devices in the Ceph cluster.

== icon:bug[] Steps for debugging

// Add detailed steps to debug and resolve the issue
See the how-to on xref:how-tos/scale-cluster.adoc[scaling a PVC-based Ceph cluster] for instructions to resize the cluster.
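
To see per-OSD utilization and spot outlier devices, a sketch using the Rook toolbox (assuming the standard toolbox deployment `rook-ceph-tools`):

[source,console]
----
$ ceph_cluster_ns=syn-rook-ceph-cluster
$ kubectl -n "${ceph_cluster_ns}" exec -it deploy/rook-ceph-tools -- ceph osd df tree
----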

== icon:book[] Upstream documentation
2 changes: 1 addition & 1 deletion docs/modules/ROOT/pages/runbooks/CephOSDReadErrors.adoc
@@ -5,7 +5,7 @@ include::partial$runbooks/contribution_note.adoc[]
== icon:glasses[] Overview

An OSD has encountered read errors, but the OSD has recovered by retrying the reads.
This may indicate an issue with the Hardware or Kernel.
This may indicate an issue with hardware or the kernel.

== icon:bug[] Steps for debugging

@@ -4,8 +4,8 @@ include::partial$runbooks/contribution_note.adoc[]

== icon:glasses[] Overview

OSD heartbeats on the cluster's 'cluster' network are running slow.
Investigate the network for any latency issues on this subnet.
OSD heartbeats on the cluster's `cluster` network (backend) are slow.
Investigate the network for latency issues on this subnet.
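
`ceph health detail` shows which OSDs report slow heartbeat ping times and to which peers. A sketch using the Rook toolbox (assuming the standard toolbox deployment `rook-ceph-tools`):

[source,console]
----
$ ceph_cluster_ns=syn-rook-ceph-cluster
$ kubectl -n "${ceph_cluster_ns}" exec -it deploy/rook-ceph-tools -- ceph health detail
----

From there, check latency and packet loss between the listed OSD hosts on the cluster network.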

== icon:bug[] Steps for debugging

@@ -4,8 +4,8 @@ include::partial$runbooks/contribution_note.adoc[]

== icon:glasses[] Overview

OSD heartbeats on the cluster's 'public' network (frontend) are running slow.
Investigate the network for any latency issues on this subnet.
OSD heartbeats on the cluster's `public` network (frontend) are running slow.
Investigate the network for latency or loss issues.

== icon:bug[] Steps for debugging

2 changes: 1 addition & 1 deletion docs/modules/ROOT/pages/runbooks/CephObjectMissing.adoc
@@ -4,7 +4,7 @@ include::partial$runbooks/contribution_note.adoc[]

== icon:glasses[] Overview

A version of a RADOS object can not be found, even though all OSDs are up.
The latest version of a RADOS object can not be found, even though all OSDs are up.
I/O requests for this object from clients will block (hang).
Resolving this issue may require the object to be rolled back to a prior version manually, and manually verified.
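
To locate the placement group with the unfound object, a sketch using the Rook toolbox (assuming the standard toolbox deployment `rook-ceph-tools`; `<pgid>` is the placement group reported in the health output):

[source,console]
----
$ ceph_cluster_ns=syn-rook-ceph-cluster
$ kubectl -n "${ceph_cluster_ns}" exec -it deploy/rook-ceph-tools -- ceph health detail
$ kubectl -n "${ceph_cluster_ns}" exec -it deploy/rook-ceph-tools -- ceph pg <pgid> list_unfound
----

Only as a last resort, and after confirming the object can't be recovered, `ceph pg <pgid> mark_unfound_lost revert|delete` rolls back or discards the unfound object.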

2 changes: 1 addition & 1 deletion docs/modules/ROOT/pages/runbooks/CephPGBackfillAtRisk.adoc
@@ -5,7 +5,7 @@ include::partial$runbooks/contribution_note.adoc[]
== icon:glasses[] Overview

Data redundancy may be at risk due to lack of free space within the cluster.
One or more OSDs have breached their 'backfillfull' threshold.
One or more OSDs have reached the `backfillfull` threshold.
Add more capacity, or delete unwanted data.

== icon:bug[] Steps for debugging
1 change: 0 additions & 1 deletion docs/modules/ROOT/pages/runbooks/CephPGImbalance.adoc
@@ -9,4 +9,3 @@ An alert is fired if an OSD deviates by more than 30% from average PG count.
== icon:bug[] Steps for debugging

// Add detailed steps to debug and resolve the issue

6 changes: 3 additions & 3 deletions docs/modules/ROOT/pages/runbooks/CephPGNotDeepScrubbed.adoc
@@ -5,9 +5,9 @@ include::partial$runbooks/contribution_note.adoc[]
== icon:glasses[] Overview

One or more PGs haven't been deep scrubbed recently.
Deep scrub is a data integrity feature, protecting against bit-rot.
It compares the contents of objects and their replicas for inconsistency.
When PGs miss their deep scrub window, it may indicate that the window is too small or PGs weren't in a 'clean' state during the deep-scrub window.
Deep scrubs protect against bit-rot.
They compare data replicas to ensure consistency.
When PGs miss their deep scrub interval, it may indicate that the window is too small or PGs weren't in a `clean` state during the deep-scrub window.
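
The affected PGs can be listed and deep scrubbed manually, for example from the Rook toolbox (a sketch, assuming the standard toolbox deployment `rook-ceph-tools`; `<pgid>` is a placement group ID from the health output):

[source,console]
----
$ ceph_cluster_ns=syn-rook-ceph-cluster
$ kubectl -n "${ceph_cluster_ns}" exec -it deploy/rook-ceph-tools -- ceph health detail
$ kubectl -n "${ceph_cluster_ns}" exec -it deploy/rook-ceph-tools -- ceph pg deep-scrub <pgid>
----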

== icon:bug[] Steps for debugging

8 changes: 5 additions & 3 deletions docs/modules/ROOT/pages/runbooks/CephPGNotScrubbed.adoc
@@ -5,12 +5,14 @@ include::partial$runbooks/contribution_note.adoc[]
== icon:glasses[] Overview

One or more PGs haven't been scrubbed recently.
The scrub process is a data integrity feature, protectng against bit-rot.
It checks that objects and their metadata (size and attributes) match across object replicas.
When PGs miss their scrub window, it may indicate the scrub window is too small, or PGs weren't in a 'clean' state during the scrub window.
Scrubs check metadata integrity, protecting against bit-rot.
They check that metadata is consistent across data replicas.
When PGs miss their scrub interval, it may indicate that the scrub window is too small, or PGs weren't in a `clean` state during the scrub window.

== icon:bug[] Steps for debugging

=== Initiate a scrub

[source,console]
----
$ ceph_cluster_ns=syn-rook-ceph-cluster
4 changes: 2 additions & 2 deletions docs/modules/ROOT/pages/runbooks/CephPGRecoveryAtRisk.adoc
@@ -4,8 +4,8 @@ include::partial$runbooks/contribution_note.adoc[]

== icon:glasses[] Overview

Data redundancy may be reduced, or is at risk, since one or more OSDs are at or above their 'full' threshold.
Add more capacity to the cluster, or delete unwanted data.
Data redundancy is at risk since one or more OSDs are at or above the `full` threshold.
Add more capacity to the cluster, restore down/out OSDs, or delete unwanted data.

== icon:bug[] Steps for debugging

4 changes: 2 additions & 2 deletions docs/modules/ROOT/pages/runbooks/CephPGsUnclean.adoc
@@ -4,8 +4,8 @@ include::partial$runbooks/contribution_note.adoc[]

== icon:glasses[] Overview

PGs haven't been clean for more than 15 minutes in pool.
Unclean PGs haven't been able to completely recover from a previous failure.
PGs have been unclean for more than 15 minutes in a pool.
Unclean PGs haven't recovered from a previous failure.
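
To list the stuck PGs and their states, a sketch using the Rook toolbox (assuming the standard toolbox deployment `rook-ceph-tools`):

[source,console]
----
$ ceph_cluster_ns=syn-rook-ceph-cluster
$ kubectl -n "${ceph_cluster_ns}" exec -it deploy/rook-ceph-tools -- ceph health detail
$ kubectl -n "${ceph_cluster_ns}" exec -it deploy/rook-ceph-tools -- ceph pg dump_stuck unclean
----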

== icon:bug[] Steps for debugging

1 change: 0 additions & 1 deletion docs/modules/ROOT/pages/runbooks/CephPoolBackfillFull.adoc
@@ -11,5 +11,4 @@ To resolve this alert, unused data should be deleted or the cluster size must be

== icon:bug[] Steps for debugging

// Add detailed steps to debug and resolve the issue
See the how-to on xref:how-tos/scale-cluster.adoc[scaling a PVC-based Ceph cluster] for instructions to resize the cluster.
@@ -5,7 +5,7 @@ metadata:
app.kubernetes.io/created-by: helm
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/part-of: rook-ceph-operator
helm.sh/chart: rook-ceph-v1.10.13
helm.sh/chart: rook-ceph-v1.11.11
operator: rook
storage-backend: ceph
name: rook-ceph-osd
@@ -18,7 +18,7 @@ metadata:
app.kubernetes.io/created-by: helm
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/part-of: rook-ceph-operator
helm.sh/chart: rook-ceph-v1.10.13
helm.sh/chart: rook-ceph-v1.11.11
operator: rook
storage-backend: ceph
name: rook-ceph-mgr
@@ -31,7 +31,7 @@ metadata:
app.kubernetes.io/created-by: helm
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/part-of: rook-ceph-operator
helm.sh/chart: rook-ceph-v1.10.13
helm.sh/chart: rook-ceph-v1.11.11
operator: rook
storage-backend: ceph
name: rook-ceph-cmd-reporter
@@ -50,7 +50,7 @@ metadata:
app.kubernetes.io/created-by: helm
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/part-of: rook-ceph-operator
helm.sh/chart: rook-ceph-v1.10.13
helm.sh/chart: rook-ceph-v1.11.11
operator: rook
storage-backend: ceph
name: rook-ceph-rgw
@@ -94,6 +94,7 @@ rules:
- secrets
verbs:
- get
- update
- apiGroups:
- ''
resources: