From d10ab8faa32bf8cb6149ddfaebfb7bd32cb4b0ca Mon Sep 17 00:00:00 2001 From: Derek Su Date: Wed, 4 Oct 2023 23:22:06 +0800 Subject: [PATCH 1/2] Remove nvme-cli installation from v1.5.2+ Longhorn 6798 Signed-off-by: Derek Su --- content/docs/1.5.2/spdk/quick-start.md | 58 ++------------------------ content/docs/1.6.0/spdk/quick-start.md | 58 ++------------------------ 2 files changed, 6 insertions(+), 110 deletions(-) diff --git a/content/docs/1.5.2/spdk/quick-start.md b/content/docs/1.5.2/spdk/quick-start.md index dc45483ef..97e5454a6 100644 --- a/content/docs/1.5.2/spdk/quick-start.md +++ b/content/docs/1.5.2/spdk/quick-start.md @@ -6,7 +6,7 @@ **Table of Contents** - [Prerequisites](#prerequisites) - [Configure Kernel Modules and Huge Pages](#configure-kernel-modules-and-huge-pages) - - [Install NVMe Userspace Tool and Load `nvme-tcp` Kernel Module](#install-nvme-userspace-tool-and-load-nvme-tcp-kernel-module) + - [Load `nvme-tcp` Kernel Module](#load-nvme-tcp-kernel-module) - [Load Kernel Modules Automatically on Boot](#load-kernel-modules-automatically-on-boot) - [Restart `kubelet`](#restart-kubelet) - [Check Environment](#check-environment) @@ -76,66 +76,14 @@ Or, you can install them manually by following these steps. echo "vm.nr_hugepages=512" >> /etc/sysctl.conf ``` -### Install NVMe Userspace Tool and Load `nvme-tcp` Kernel Module - -> **NOTICE:** -> -> Make sure that the version of `nvme-cli` is equal to or greater than `1.12`. -> -> If the version of `nvme-cli` installed by the below steps is not equal to or greater than `1.12`., you will need to compile the utility from the [source codes](https://github.com/linux-nvme/nvme-cli) and install it on each Longhorn node by manual. -> -> Also, install the **uuid development library** before compiling to support the `show-hostnqn` subcommand. 
-> -> For SUSE/OpenSUSE you can install it use this command: -> ``` -> zypper install uuid-devel -> ``` -> -> For Debian and Ubuntu, use this command: -> ``` -> apt install uuid-dev -> ``` -> -> For RHEL, CentOS, and EKS with `EKS Kubernetes Worker AMI with AmazonLinux2 image`, use this command: -> ``` -> yum install uuid-devel -> ``` -> - -And also can check the log with the following command to see the installation result -``` -nvme-cli install successfully -``` +### Load `nvme-tcp` Kernel Module We provide a manifest that helps you finish the deployment on each Longhorn node. ``` kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v{{< current-version >}}/deploy/prerequisite/longhorn-nvme-cli-installation.yaml ``` -Or, you can manually install them. -- Install nvme-cli on each node and make sure that the version of `nvme-cli` is equal to or greater than `1.12`. - - For SUSE/OpenSUSE you can install it use this command: - ``` - zypper install nvme-cli - ``` - - For Debian and Ubuntu, use this command: - ``` - apt install nvme-cli - ``` - - For RHEL, CentOS, and EKS with `EKS Kubernetes Worker AMI with AmazonLinux2 image`, use this command: - ``` - yum install nvme-cli - ``` - - To check the version of nvme-cli, execute the following command. 
-  ```
-  nvme version
-  ```
-
-- Load `nvme-tcp` kernel module on the each Longhorn node
+Or, you can manually load the `nvme-tcp` kernel module on each Longhorn node
 ```
 modprobe nvme-tcp
 ```
diff --git a/content/docs/1.6.0/spdk/quick-start.md b/content/docs/1.6.0/spdk/quick-start.md
index dc45483ef..97e5454a6 100644
--- a/content/docs/1.6.0/spdk/quick-start.md
+++ b/content/docs/1.6.0/spdk/quick-start.md
@@ -6,7 +6,7 @@
 **Table of Contents**
 - [Prerequisites](#prerequisites)
 - [Configure Kernel Modules and Huge Pages](#configure-kernel-modules-and-huge-pages)
-  - [Install NVMe Userspace Tool and Load `nvme-tcp` Kernel Module](#install-nvme-userspace-tool-and-load-nvme-tcp-kernel-module)
+  - [Load `nvme-tcp` Kernel Module](#load-nvme-tcp-kernel-module)
   - [Load Kernel Modules Automatically on Boot](#load-kernel-modules-automatically-on-boot)
   - [Restart `kubelet`](#restart-kubelet)
   - [Check Environment](#check-environment)
@@ -76,66 +76,14 @@ Or, you can install them manually by following these steps.
    echo "vm.nr_hugepages=512" >> /etc/sysctl.conf
    ```
 
-### Install NVMe Userspace Tool and Load `nvme-tcp` Kernel Module
-
-> **NOTICE:**
->
-> Make sure that the version of `nvme-cli` is equal to or greater than `1.12`.
->
-> If the version of `nvme-cli` installed by the below steps is not equal to or greater than `1.12`., you will need to compile the utility from the [source codes](https://github.com/linux-nvme/nvme-cli) and install it on each Longhorn node by manual.
->
-> Also, install the **uuid development library** before compiling to support the `show-hostnqn` subcommand.
-> -> For SUSE/OpenSUSE you can install it use this command: -> ``` -> zypper install uuid-devel -> ``` -> -> For Debian and Ubuntu, use this command: -> ``` -> apt install uuid-dev -> ``` -> -> For RHEL, CentOS, and EKS with `EKS Kubernetes Worker AMI with AmazonLinux2 image`, use this command: -> ``` -> yum install uuid-devel -> ``` -> - -And also can check the log with the following command to see the installation result -``` -nvme-cli install successfully -``` +### Load `nvme-tcp` Kernel Module We provide a manifest that helps you finish the deployment on each Longhorn node. ``` kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v{{< current-version >}}/deploy/prerequisite/longhorn-nvme-cli-installation.yaml ``` -Or, you can manually install them. -- Install nvme-cli on each node and make sure that the version of `nvme-cli` is equal to or greater than `1.12`. - - For SUSE/OpenSUSE you can install it use this command: - ``` - zypper install nvme-cli - ``` - - For Debian and Ubuntu, use this command: - ``` - apt install nvme-cli - ``` - - For RHEL, CentOS, and EKS with `EKS Kubernetes Worker AMI with AmazonLinux2 image`, use this command: - ``` - yum install nvme-cli - ``` - - To check the version of nvme-cli, execute the following command. 
-  ```
-  nvme version
-  ```
-
-- Load `nvme-tcp` kernel module on the each Longhorn node
+Or, you can manually load the `nvme-tcp` kernel module on each Longhorn node
 ```
 modprobe nvme-tcp
 ```

From 9f371adfd4e4892a98bf7918d139dcf339620b69 Mon Sep 17 00:00:00 2001
From: Derek Su
Date: Wed, 4 Oct 2023 23:27:52 +0800
Subject: [PATCH 2/2] Fix typo

Signed-off-by: Derek Su
---
 content/docs/1.4.0/references/storage-class-parameters.md | 2 +-
 content/docs/1.4.1/references/storage-class-parameters.md | 2 +-
 content/docs/1.4.2/references/storage-class-parameters.md | 2 +-
 content/docs/1.4.3/references/storage-class-parameters.md | 2 +-
 content/docs/1.4.4/references/storage-class-parameters.md | 2 +-
 content/docs/1.5.0/deploy/important-notes/index.md | 2 +-
 content/docs/1.5.0/references/storage-class-parameters.md | 2 +-
 content/docs/1.5.1/deploy/important-notes/index.md | 2 +-
 content/docs/1.5.1/references/storage-class-parameters.md | 2 +-
 content/docs/1.5.2/deploy/important-notes/index.md | 2 +-
 content/docs/1.5.2/references/storage-class-parameters.md | 2 +-
 content/docs/1.6.0/deploy/important-notes/index.md | 2 +-
 content/docs/1.6.0/references/storage-class-parameters.md | 2 +-
 .../backup-and-restore/backupstores-and-backuptargets.md | 2 +-
 content/kb/space-consumption-guideline.md | 2 +-
 15 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/content/docs/1.4.0/references/storage-class-parameters.md b/content/docs/1.4.0/references/storage-class-parameters.md
index 9aade7aee..09402b9b6 100644
--- a/content/docs/1.4.0/references/storage-class-parameters.md
+++ b/content/docs/1.4.0/references/storage-class-parameters.md
@@ -114,7 +114,7 @@ If enabled, try to keep the data on the same node as the workload for better per
 - If "strict-local" is not possible for whatever other reason, volume creation will be failed. A "strict-local" replica that becomes displaced from its workload will be marked as "Stopped".
 
 > Global setting: [Default Data Locality](../settings#default-data-locality)
-> More defails in [Data Locality](../../high-availability/data-locality).
+> More details in [Data Locality](../../high-availability/data-locality).
 
 #### Replica Auto-Balance *(field: `parameters.replicaAutoBalance`)*
 > Default: `ignored`
diff --git a/content/docs/1.4.1/references/storage-class-parameters.md b/content/docs/1.4.1/references/storage-class-parameters.md
index 9aade7aee..09402b9b6 100644
--- a/content/docs/1.4.1/references/storage-class-parameters.md
+++ b/content/docs/1.4.1/references/storage-class-parameters.md
@@ -114,7 +114,7 @@ If enabled, try to keep the data on the same node as the workload for better per
 - If "strict-local" is not possible for whatever other reason, volume creation will be failed. A "strict-local" replica that becomes displaced from its workload will be marked as "Stopped".
 
> Global setting: [Default Data Locality](../settings#default-data-locality) -> More defails in [Data Locality](../../high-availability/data-locality). +> More details in [Data Locality](../../high-availability/data-locality). #### Replica Auto-Balance *(field: `parameters.replicaAutoBalance`)* > Default: `ignored` diff --git a/content/docs/1.4.1/references/storage-class-parameters.md b/content/docs/1.4.1/references/storage-class-parameters.md index 9aade7aee..09402b9b6 100644 --- a/content/docs/1.4.1/references/storage-class-parameters.md +++ b/content/docs/1.4.1/references/storage-class-parameters.md @@ -114,7 +114,7 @@ If enabled, try to keep the data on the same node as the workload for better per - If "strict-local" is not possible for whatever other reason, volume creation will be failed. A "strict-local" replica that becomes displaced from its workload will be marked as "Stopped". > Global setting: [Default Data Locality](../settings#default-data-locality) -> More defails in [Data Locality](../../high-availability/data-locality). +> More details in [Data Locality](../../high-availability/data-locality). #### Replica Auto-Balance *(field: `parameters.replicaAutoBalance`)* > Default: `ignored` diff --git a/content/docs/1.4.2/references/storage-class-parameters.md b/content/docs/1.4.2/references/storage-class-parameters.md index 9aade7aee..09402b9b6 100644 --- a/content/docs/1.4.2/references/storage-class-parameters.md +++ b/content/docs/1.4.2/references/storage-class-parameters.md @@ -114,7 +114,7 @@ If enabled, try to keep the data on the same node as the workload for better per - If "strict-local" is not possible for whatever other reason, volume creation will be failed. A "strict-local" replica that becomes displaced from its workload will be marked as "Stopped". > Global setting: [Default Data Locality](../settings#default-data-locality) -> More defails in [Data Locality](../../high-availability/data-locality). 
+> More details in [Data Locality](../../high-availability/data-locality). #### Replica Auto-Balance *(field: `parameters.replicaAutoBalance`)* > Default: `ignored` diff --git a/content/docs/1.4.3/references/storage-class-parameters.md b/content/docs/1.4.3/references/storage-class-parameters.md index 9aade7aee..09402b9b6 100644 --- a/content/docs/1.4.3/references/storage-class-parameters.md +++ b/content/docs/1.4.3/references/storage-class-parameters.md @@ -114,7 +114,7 @@ If enabled, try to keep the data on the same node as the workload for better per - If "strict-local" is not possible for whatever other reason, volume creation will be failed. A "strict-local" replica that becomes displaced from its workload will be marked as "Stopped". > Global setting: [Default Data Locality](../settings#default-data-locality) -> More defails in [Data Locality](../../high-availability/data-locality). +> More details in [Data Locality](../../high-availability/data-locality). #### Replica Auto-Balance *(field: `parameters.replicaAutoBalance`)* > Default: `ignored` diff --git a/content/docs/1.4.4/references/storage-class-parameters.md b/content/docs/1.4.4/references/storage-class-parameters.md index 9aade7aee..09402b9b6 100644 --- a/content/docs/1.4.4/references/storage-class-parameters.md +++ b/content/docs/1.4.4/references/storage-class-parameters.md @@ -114,7 +114,7 @@ If enabled, try to keep the data on the same node as the workload for better per - If "strict-local" is not possible for whatever other reason, volume creation will be failed. A "strict-local" replica that becomes displaced from its workload will be marked as "Stopped". > Global setting: [Default Data Locality](../settings#default-data-locality) -> More defails in [Data Locality](../../high-availability/data-locality). +> More details in [Data Locality](../../high-availability/data-locality). 
 
 #### Replica Auto-Balance *(field: `parameters.replicaAutoBalance`)*
 > Default: `ignored`
diff --git a/content/docs/1.5.0/deploy/important-notes/index.md b/content/docs/1.5.0/deploy/important-notes/index.md
index d050ce481..7c9840268 100644
--- a/content/docs/1.5.0/deploy/important-notes/index.md
+++ b/content/docs/1.5.0/deploy/important-notes/index.md
@@ -86,7 +86,7 @@ Please use the new setting [Node Drain Policy](../../references/settings#node-dr
 
 ### Instance Managers Consolidated
 
-Engine instance mangers and replica instance managers has been consolidated. Previous engine/replica instance managers are now deprecated, but they will still provide service to the existing attached volumes.
+Engine instance managers and replica instance managers have been consolidated. Previous engine/replica instance managers are now deprecated, but they will still provide service to the existing attached volumes.
 
 The `Guaranteed Engine Manager CPU` and `Guaranteed Replica Manager CPU` settings are removed and replaced by `Guaranteed Instance Manager CPU`.
 
diff --git a/content/docs/1.5.0/references/storage-class-parameters.md b/content/docs/1.5.0/references/storage-class-parameters.md
index edb6e8cf7..a971893fb 100644
--- a/content/docs/1.5.0/references/storage-class-parameters.md
+++ b/content/docs/1.5.0/references/storage-class-parameters.md
@@ -115,7 +115,7 @@ If enabled, try to keep the data on the same node as the workload for better per
 - If "strict-local" is not possible for whatever other reason, volume creation will be failed. A "strict-local" replica that becomes displaced from its workload will be marked as "Stopped".
 
 > Global setting: [Default Data Locality](../settings#default-data-locality)
-> More defails in [Data Locality](../../high-availability/data-locality).
+> More details in [Data Locality](../../high-availability/data-locality).
 
 #### Replica Auto-Balance *(field: `parameters.replicaAutoBalance`)*
 > Default: `ignored`
diff --git a/content/docs/1.5.1/deploy/important-notes/index.md b/content/docs/1.5.1/deploy/important-notes/index.md
index d050ce481..7c9840268 100644
--- a/content/docs/1.5.1/deploy/important-notes/index.md
+++ b/content/docs/1.5.1/deploy/important-notes/index.md
@@ -86,7 +86,7 @@ Please use the new setting [Node Drain Policy](../../references/settings#node-dr
 
 ### Instance Managers Consolidated
 
-Engine instance mangers and replica instance managers has been consolidated. Previous engine/replica instance managers are now deprecated, but they will still provide service to the existing attached volumes.
+Engine instance managers and replica instance managers have been consolidated. Previous engine/replica instance managers are now deprecated, but they will still provide service to the existing attached volumes.
 
 The `Guaranteed Engine Manager CPU` and `Guaranteed Replica Manager CPU` settings are removed and replaced by `Guaranteed Instance Manager CPU`.
 
diff --git a/content/docs/1.5.1/references/storage-class-parameters.md b/content/docs/1.5.1/references/storage-class-parameters.md
index edb6e8cf7..a971893fb 100644
--- a/content/docs/1.5.1/references/storage-class-parameters.md
+++ b/content/docs/1.5.1/references/storage-class-parameters.md
@@ -115,7 +115,7 @@ If enabled, try to keep the data on the same node as the workload for better per
 - If "strict-local" is not possible for whatever other reason, volume creation will be failed. A "strict-local" replica that becomes displaced from its workload will be marked as "Stopped".
 
 > Global setting: [Default Data Locality](../settings#default-data-locality)
-> More defails in [Data Locality](../../high-availability/data-locality).
+> More details in [Data Locality](../../high-availability/data-locality).
 
 #### Replica Auto-Balance *(field: `parameters.replicaAutoBalance`)*
 > Default: `ignored`
diff --git a/content/docs/1.5.2/deploy/important-notes/index.md b/content/docs/1.5.2/deploy/important-notes/index.md
index 2767ddba6..e2305db94 100644
--- a/content/docs/1.5.2/deploy/important-notes/index.md
+++ b/content/docs/1.5.2/deploy/important-notes/index.md
@@ -97,7 +97,7 @@ Please use the new setting [Node Drain Policy](../../references/settings#node-dr
 
 ### Instance Managers Consolidated
 
-Engine instance mangers and replica instance managers has been consolidated. Previous engine/replica instance managers are now deprecated, but they will still provide service to the existing attached volumes.
+Engine instance managers and replica instance managers have been consolidated. Previous engine/replica instance managers are now deprecated, but they will still provide service to the existing attached volumes.
 
 The `Guaranteed Engine Manager CPU` and `Guaranteed Replica Manager CPU` settings are removed and replaced by `Guaranteed Instance Manager CPU`.
 
diff --git a/content/docs/1.5.2/references/storage-class-parameters.md b/content/docs/1.5.2/references/storage-class-parameters.md
index edb6e8cf7..a971893fb 100644
--- a/content/docs/1.5.2/references/storage-class-parameters.md
+++ b/content/docs/1.5.2/references/storage-class-parameters.md
@@ -115,7 +115,7 @@ If enabled, try to keep the data on the same node as the workload for better per
 - If "strict-local" is not possible for whatever other reason, volume creation will be failed. A "strict-local" replica that becomes displaced from its workload will be marked as "Stopped".
 
 > Global setting: [Default Data Locality](../settings#default-data-locality)
-> More defails in [Data Locality](../../high-availability/data-locality).
+> More details in [Data Locality](../../high-availability/data-locality).
 
 #### Replica Auto-Balance *(field: `parameters.replicaAutoBalance`)*
 > Default: `ignored`
diff --git a/content/docs/1.6.0/deploy/important-notes/index.md b/content/docs/1.6.0/deploy/important-notes/index.md
index 019025f4b..94017eefa 100644
--- a/content/docs/1.6.0/deploy/important-notes/index.md
+++ b/content/docs/1.6.0/deploy/important-notes/index.md
@@ -97,7 +97,7 @@ Please use the new setting [Node Drain Policy](../../references/settings#node-dr
 
 ### Instance Managers Consolidated
 
-Engine instance mangers and replica instance managers has been consolidated. Previous engine/replica instance managers are now deprecated, but they will still provide service to the existing attached volumes.
+Engine instance managers and replica instance managers have been consolidated. Previous engine/replica instance managers are now deprecated, but they will still provide service to the existing attached volumes.
 
 The `Guaranteed Engine Manager CPU` and `Guaranteed Replica Manager CPU` settings are removed and replaced by `Guaranteed Instance Manager CPU`.
 
diff --git a/content/docs/1.6.0/references/storage-class-parameters.md b/content/docs/1.6.0/references/storage-class-parameters.md
index 8214a1b18..0a37044d6 100644
--- a/content/docs/1.6.0/references/storage-class-parameters.md
+++ b/content/docs/1.6.0/references/storage-class-parameters.md
@@ -116,7 +116,7 @@ If enabled, try to keep the data on the same node as the workload for better per
 - If "strict-local" is not possible for whatever other reason, volume creation will be failed. A "strict-local" replica that becomes displaced from its workload will be marked as "Stopped".
 
 > Global setting: [Default Data Locality](../settings#default-data-locality)
-> More defails in [Data Locality](../../high-availability/data-locality).
+> More details in [Data Locality](../../high-availability/data-locality).
#### Replica Auto-Balance *(field: `parameters.replicaAutoBalance`)* > Default: `ignored` diff --git a/content/docs/archives/0.8.0/users-guide/backup-and-restore/backupstores-and-backuptargets.md b/content/docs/archives/0.8.0/users-guide/backup-and-restore/backupstores-and-backuptargets.md index 36a5d3088..1feb21fbd 100644 --- a/content/docs/archives/0.8.0/users-guide/backup-and-restore/backupstores-and-backuptargets.md +++ b/content/docs/archives/0.8.0/users-guide/backup-and-restore/backupstores-and-backuptargets.md @@ -83,7 +83,7 @@ Now set `Settings/General/BackupTarget` to ``` s3://backupbucket@us-east-1/ ``` -And `Setttings/General/BackupTargetSecret` to +And `Settings/General/BackupTargetSecret` to ``` minio-secret ``` diff --git a/content/kb/space-consumption-guideline.md b/content/kb/space-consumption-guideline.md index 6a85e68b7..5574d96a5 100644 --- a/content/kb/space-consumption-guideline.md +++ b/content/kb/space-consumption-guideline.md @@ -34,5 +34,5 @@ In this case, the node is probably marked as NotReady due to the disk pressure. To do recover nodes and disk, we would recommend directly removing some redundant replica directories for the full disk. Here redundant replicas means that the corresponding volumes have healthy replicas in other disks. Later on Longhorn will automatically rebuild new replicas in other disks if possible. Besides, users may need to expand the existing disks or add more disks to avoid future disk exhaustion issues. -Notice that the disk exhausion may be caused by replicas being unevenly scheduled. Users can check [setting Replica Auto Balance](../../docs/1.5.1/high-availability/auto-balance-replicas) for this scenario. +Notice that the disk exhaustion may be caused by replicas being unevenly scheduled. Users can check [setting Replica Auto Balance](../../docs/1.5.1/high-availability/auto-balance-replicas) for this scenario.
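
With the nvme-cli installation steps dropped from the quick-start, the remaining per-node prerequisites reduce to a loaded `nvme-tcp` module and reserved huge pages. The script below is an illustrative sanity check of those two conditions (not part of the Longhorn docs or this patch; it assumes a standard Linux `/proc` layout on each node):

```shell
# Sanity check for the patched quick-start prerequisites; run on each
# Longhorn node after applying the DaemonSet manifest (or `modprobe nvme-tcp`).

# The loaded-module list in /proc/modules uses underscores, so the module
# appears as "nvme_tcp" rather than "nvme-tcp".
if grep -q '^nvme_tcp ' /proc/modules 2>/dev/null; then
    echo "nvme-tcp: loaded"
else
    echo "nvme-tcp: not loaded"
fi

# The prerequisites section reserves huge pages via vm.nr_hugepages;
# read the live value back from procfs.
echo "huge pages reserved: $(cat /proc/sys/vm/nr_hugepages)"
```

On a node prepared per the quick-start, the second line should report 512, matching the `vm.nr_hugepages=512` value written to `/etc/sysctl.conf`.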