Remove nvme-cli installation from v1.5.2+ #782

Merged (3 commits, Oct 4, 2023)
```diff
@@ -114,7 +114,7 @@ If enabled, try to keep the data on the same node as the workload for better per
 - If "strict-local" is not possible for whatever other reason, volume creation will be failed. A "strict-local" replica that becomes displaced from its workload will be marked as "Stopped".
 
 > Global setting: [Default Data Locality](../settings#default-data-locality)
-> More defails in [Data Locality](../../high-availability/data-locality).
+> More details in [Data Locality](../../high-availability/data-locality).
 #### Replica Auto-Balance *(field: `parameters.replicaAutoBalance`)*
 > Default: `ignored`
```
This hunk repeats verbatim four more times for other documentation versions.
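For context on the parameters these hunks document, a minimal StorageClass sketch using `dataLocality` and `replicaAutoBalance` might look like the following. The class name is hypothetical and the values are illustrative; this is not part of the PR.

```
# Illustrative sketch only: a Longhorn StorageClass exercising the
# documented parameters. The name is hypothetical.
kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-strict-local
provisioner: driver.longhorn.io
parameters:
  numberOfReplicas: "1"         # "strict-local" implies a single replica
  dataLocality: "strict-local"
  replicaAutoBalance: "ignored" # the documented default
EOF
```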
content/docs/1.5.0/deploy/important-notes/index.md (1 addition, 1 deletion)

```diff
@@ -86,7 +86,7 @@ Please use the new setting [Node Drain Policy](../../references/settings#node-dr
 
 ### Instance Managers Consolidated
 
-Engine instance mangers and replica instance managers has been consolidated. Previous engine/replica instance managers are now deprecated, but they will still provide service to the existing attached volumes.
+Engine instance managers and replica instance managers has been consolidated. Previous engine/replica instance managers are now deprecated, but they will still provide service to the existing attached volumes.
 
 The `Guaranteed Engine Manager CPU` and `Guaranteed Replica Manager CPU` settings are removed and replaced by `Guaranteed Instance Manager CPU`.
```
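To see the consolidated setting on a live cluster, something like the following should work; the kebab-case setting name is an assumption based on Longhorn's usual naming.

```
# Inspect the consolidated CPU setting; the name
# guaranteed-instance-manager-cpu is assumed from Longhorn's
# kebab-case convention for settings resources.
kubectl -n longhorn-system get settings.longhorn.io guaranteed-instance-manager-cpu
```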
The same data-locality hunk repeats for this version's reference page, at @@ -115,7 +115,7.
content/docs/1.5.1/deploy/important-notes/index.md (1 addition, 1 deletion)

```diff
@@ -86,7 +86,7 @@ Please use the new setting [Node Drain Policy](../../references/settings#node-dr
 
 ### Instance Managers Consolidated
 
-Engine instance mangers and replica instance managers has been consolidated. Previous engine/replica instance managers are now deprecated, but they will still provide service to the existing attached volumes.
+Engine instance managers and replica instance managers has been consolidated. Previous engine/replica instance managers are now deprecated, but they will still provide service to the existing attached volumes.
 
 The `Guaranteed Engine Manager CPU` and `Guaranteed Replica Manager CPU` settings are removed and replaced by `Guaranteed Instance Manager CPU`.
```
The same data-locality hunk repeats for this version's reference page, at @@ -115,7 +115,7.
content/docs/1.5.2/deploy/important-notes/index.md (1 addition, 1 deletion)

```diff
@@ -97,7 +97,7 @@ Please use the new setting [Node Drain Policy](../../references/settings#node-dr
 
 ### Instance Managers Consolidated
 
-Engine instance mangers and replica instance managers has been consolidated. Previous engine/replica instance managers are now deprecated, but they will still provide service to the existing attached volumes.
+Engine instance managers and replica instance managers has been consolidated. Previous engine/replica instance managers are now deprecated, but they will still provide service to the existing attached volumes.
 
 The `Guaranteed Engine Manager CPU` and `Guaranteed Replica Manager CPU` settings are removed and replaced by `Guaranteed Instance Manager CPU`.
```
The same data-locality hunk repeats for this version's reference page, at @@ -115,7 +115,7.
content/docs/1.5.2/spdk/quick-start.md (3 additions, 55 deletions)

````diff
@@ -6,7 +6,7 @@
 **Table of Contents**
 - [Prerequisites](#prerequisites)
 - [Configure Kernel Modules and Huge Pages](#configure-kernel-modules-and-huge-pages)
-- [Install NVMe Userspace Tool and Load `nvme-tcp` Kernel Module](#install-nvme-userspace-tool-and-load-nvme-tcp-kernel-module)
+- [Load `nvme-tcp` Kernel Module](#load-nvme-tcp-kernel-module)
 - [Load Kernel Modules Automatically on Boot](#load-kernel-modules-automatically-on-boot)
 - [Restart `kubelet`](#restart-kubelet)
 - [Check Environment](#check-environment)
@@ -76,66 +76,14 @@ Or, you can install them manually by following these steps.
 echo "vm.nr_hugepages=512" >> /etc/sysctl.conf
 ```
 
-### Install NVMe Userspace Tool and Load `nvme-tcp` Kernel Module
-
-> **NOTICE:**
->
-> Make sure that the version of `nvme-cli` is equal to or greater than `1.12`.
->
-> If the version of `nvme-cli` installed by the below steps is not equal to or greater than `1.12`., you will need to compile the utility from the [source codes](https://github.com/linux-nvme/nvme-cli) and install it on each Longhorn node by manual.
->
-> Also, install the **uuid development library** before compiling to support the `show-hostnqn` subcommand.
->
-> For SUSE/OpenSUSE you can install it use this command:
-> ```
-> zypper install uuid-devel
-> ```
->
-> For Debian and Ubuntu, use this command:
-> ```
-> apt install uuid-dev
-> ```
->
-> For RHEL, CentOS, and EKS with `EKS Kubernetes Worker AMI with AmazonLinux2 image`, use this command:
-> ```
-> yum install uuid-devel
-> ```
->
-
-And also can check the log with the following command to see the installation result
-```
-nvme-cli install successfully
-```
+### Load `nvme-tcp` Kernel Module
 
 We provide a manifest that helps you finish the deployment on each Longhorn node.
 ```
 kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v{{< current-version >}}/deploy/prerequisite/longhorn-nvme-cli-installation.yaml
 ```
 
-Or, you can manually install them.
-- Install nvme-cli on each node and make sure that the version of `nvme-cli` is equal to or greater than `1.12`.
-
-For SUSE/OpenSUSE you can install it use this command:
-```
-zypper install nvme-cli
-```
-
-For Debian and Ubuntu, use this command:
-```
-apt install nvme-cli
-```
-
-For RHEL, CentOS, and EKS with `EKS Kubernetes Worker AMI with AmazonLinux2 image`, use this command:
-```
-yum install nvme-cli
-```
-
-To check the version of nvme-cli, execute the following command.
-```
-nvme version
-```
-
-- Load `nvme-tcp` kernel module on the each Longhorn node
+Or, you can manually load `nvme-tcp` kernel module on the each Longhorn node
 ```
 modprobe nvme-tcp
 ```
````
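Whether the module is loaded by the manifest or by `modprobe`, a quick verification step is worth running on each node; a sketch, assuming a systemd-based host:

```
# Confirm the module is loaded (lsmod reports it with an underscore)
lsmod | grep nvme_tcp

# Optionally persist it across reboots; /etc/modules-load.d is the
# conventional systemd location (illustrative, not from this PR)
echo "nvme-tcp" > /etc/modules-load.d/nvme-tcp.conf
```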
content/docs/1.6.0/deploy/important-notes/index.md (1 addition, 1 deletion)

```diff
@@ -97,7 +97,7 @@ Please use the new setting [Node Drain Policy](../../references/settings#node-dr
 
 ### Instance Managers Consolidated
 
-Engine instance mangers and replica instance managers has been consolidated. Previous engine/replica instance managers are now deprecated, but they will still provide service to the existing attached volumes.
+Engine instance managers and replica instance managers has been consolidated. Previous engine/replica instance managers are now deprecated, but they will still provide service to the existing attached volumes.
 
 The `Guaranteed Engine Manager CPU` and `Guaranteed Replica Manager CPU` settings are removed and replaced by `Guaranteed Instance Manager CPU`.
```
The same data-locality hunk repeats for this version's reference page, at @@ -116,7 +116,7.
content/docs/1.6.0/spdk/quick-start.md (3 additions, 55 deletions): the hunks are identical to those for content/docs/1.5.2/spdk/quick-start.md above.
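The `vm.nr_hugepages` context line in the quick-start hunks appends to `/etc/sysctl.conf` but leaves application and verification implicit; a hedged sketch:

```
# Apply settings from /etc/sysctl.conf without a reboot
sysctl -p

# Verify the allocation (512 pages x 2 MiB = 1 GiB)
grep HugePages_Total /proc/meminfo
```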
````diff
@@ -83,7 +83,7 @@ Now set `Settings/General/BackupTarget` to
 ```
 s3://backupbucket@us-east-1/
 ```
-And `Setttings/General/BackupTargetSecret` to
+And `Settings/General/BackupTargetSecret` to
 ```
 minio-secret
 ```
````
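The hunk names a `minio-secret` without showing it. For orientation, a backup-target secret is conventionally created with AWS-style keys even for non-AWS S3 endpoints; every value below is a placeholder, and only the key names follow the Longhorn convention:

```
# Placeholder credentials and endpoint for illustration only
kubectl create secret generic minio-secret \
  --namespace longhorn-system \
  --from-literal=AWS_ACCESS_KEY_ID=minioadmin \
  --from-literal=AWS_SECRET_ACCESS_KEY=minioadmin \
  --from-literal=AWS_ENDPOINTS=https://minio-service.default:9000
```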
content/kb/space-consumption-guideline.md (1 addition, 1 deletion)

```diff
@@ -34,5 +34,5 @@ In this case, the node is probably marked as NotReady due to the disk pressure.
 To do recover nodes and disk, we would recommend directly removing some redundant replica directories for the full disk. Here redundant replicas means that the corresponding volumes have healthy replicas in other disks. Later on Longhorn will automatically rebuild new replicas in other disks if possible.
 Besides, users may need to expand the existing disks or add more disks to avoid future disk exhaustion issues.
 
-Notice that the disk exhausion may be caused by replicas being unevenly scheduled. Users can check [setting Replica Auto Balance](../../docs/1.5.1/high-availability/auto-balance-replicas) for this scenario.
+Notice that the disk exhaustion may be caused by replicas being unevenly scheduled. Users can check [setting Replica Auto Balance](../../docs/1.5.1/high-availability/auto-balance-replicas) for this scenario.
```
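As a practical companion to the guidance in this hunk, the largest replica directories on a full disk can be found with a one-liner; `/var/lib/longhorn` is Longhorn's default data path and is an assumption about your layout:

```
# List replica directories by size, largest last (adjust the path
# if your Longhorn disks are mounted elsewhere)
du -sh /var/lib/longhorn/replicas/* | sort -h
```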