
Commit 15c0b46: Translate edits to English
Signed-off-by: Max Chervov <[email protected]>
maxinsky committed Jan 21, 2025
1 parent 94dc446 commit 15c0b46
Showing 3 changed files with 641 additions and 950 deletions.
56 changes: 30 additions & 26 deletions docs/FAQ.md
The module is guaranteed to work only with stock kernels that are shipped with the supported distributions.
The module may work with other kernels or distributions, but its stable operation and availability of all features is not guaranteed.
{{< /alert >}}

## Why does creating BlockDevice and LVMVolumeGroup resources in a cluster fail?

- In most cases, the creation of BlockDevice resources fails because the existing devices fail filtering by the controller. Make sure that your devices meet the [requirements](./usage.html#the-conditions-the-controller-imposes-on-the-device).

- Creating LVMVolumeGroup resources may fail due to the absence of BlockDevice resources in the cluster, as their names are used in the LVMVolumeGroup specification.

- If the BlockDevice resources are present and the LVMVolumeGroup resources are not, make sure the existing `LVM Volume Group` on the node has the special tag `storage.deckhouse.io/enabled=true` attached.
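To narrow down which of these cases applies, you can inspect the resources and the on-node LVM tags. The commands below are a sketch: they assume the short resource names `bd` and `lvg` are registered for these CRDs and that you have shell access to the node.

```shell
# List the BlockDevice resources the controller has discovered
kubectl get bd

# List the LVMVolumeGroup resources and their current phases
kubectl get lvg

# On the node: show which Volume Groups carry the controller's tag
vgs -o vg_name,vg_tags
```

A `Volume Group` missing from the last command's output, or one without the `storage.deckhouse.io/enabled=true` tag, will not be picked up by the controller.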

## I have deleted the LVMVolumeGroup resource, but the resource and its `Volume Group` are still there. What do I do?

Such a situation is possible in two cases:

1. The `Volume Group` contains `LV`.
The controller does not take responsibility for removing LV from the node, so if there are any logical volumes in the `Volume Group` created by the resource, you need to manually delete them on the node. After this, both the resource and the `Volume Group` (along with the `PV`) will be deleted automatically.


2. The resource has an annotation `storage.deckhouse.io/deletion-protection`.
This annotation protects the resource from deletion and, as a result, the `Volume Group` created by it. You need to remove the annotation manually with the command:
```shell
kubectl annotate lvg %lvg-name% storage.deckhouse.io/deletion-protection-
```

After the command is executed, both the LVMVolumeGroup resource and `Volume Group` will be deleted automatically.
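For the first case above, the remaining logical volumes can be listed and removed on the node. This is a sketch; `myvg-0` and `mylv` are placeholder names, and `lvremove` irreversibly destroys the data on the volume.

```shell
# List the logical volumes in the Volume Group created by the resource
lvs myvg-0

# Remove each remaining logical volume (this destroys its data)
lvremove /dev/myvg-0/mylv
```

Once the `Volume Group` contains no logical volumes, the controller proceeds with the deletion on its own.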

## I'm trying to create a `Volume Group` using the LVMVolumeGroup resource, but I'm not getting anywhere. Why?

Most likely, your resource fails controller validation even if it has passed the Kubernetes validation successfully.
The exact cause of the failure can be found in the `status.message` field of the resource.
You can also refer to the controller's logs.
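For example, using the `%lvg-name%` placeholder style from this FAQ (the namespace and label selector in the second command are assumptions; adjust them to your installation):

```shell
# Read the validation error reported by the controller
kubectl get lvg %lvg-name% -o jsonpath='{.status.message}'

# Inspect the controller's logs (namespace and label are illustrative)
kubectl -n d8-sds-node-configurator logs -l app=sds-node-configurator --tail=100
```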

The problem usually stems from incorrectly defined BlockDevice resources. Make sure these resources meet the following requirements:

- The `Consumable` field is set to `true`.
- For a `Volume Group` of the `Local` type, the specified BlockDevice resources belong to the same node.<!-- > - For a `Volume Group` of the `Shared` type, the specified BlockDevice is the only resource. -->
- The current names of the BlockDevice resources are specified.

A full list of expected values can be found in the [CR reference](./cr.html) of the LVMVolumeGroup resource.
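One quick way to check the first two requirements at a glance is a custom-columns query. The field paths below assume `Consumable` and the node name are surfaced under the BlockDevice `status`; verify them against the CR reference for your version.

```shell
# Show each BlockDevice with its node and whether the controller considers it usable
kubectl get bd -o custom-columns=NAME:.metadata.name,NODE:.status.nodeName,CONSUMABLE:.status.consumable
```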

## What happens if I unplug one of the devices in a `Volume Group`? Will the linked LVMVolumeGroup resource be deleted?

The LVMVolumeGroup resource will persist as long as the corresponding `Volume Group` exists. As long as at least one device exists, the `Volume Group` will be there, albeit in an unhealthy state.
Note that these issues will be reflected in the resource's `status`.

Once the unplugged device is plugged back in and reactivated, the `LVM Volume Group` will regain its functionality while the corresponding LVMVolumeGroup resource will also be updated to reflect the current state.
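If the `Volume Group` does not recover on its own after the device reappears, it can usually be rescanned and reactivated manually on the node. This is a sketch with a placeholder name (`myvg-0`); both commands require root privileges.

```shell
# Rescan physical volumes so LVM picks up the returned device
pvscan --cache

# Reactivate the Volume Group and its logical volumes
vgchange -ay myvg-0
```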

## How to transfer control of an existing `LVM Volume Group` on the node to the controller?

Add the LVM tag `storage.deckhouse.io/enabled=true` to the LVM Volume Group on the node:

```shell
vgchange myvg-0 --addtag storage.deckhouse.io/enabled=true
```

Delete the `storage.deckhouse.io/enabled=true` LVM tag for the target `Volume Group`:
vgchange myvg-0 --deltag storage.deckhouse.io/enabled=true
```

The controller will then stop tracking the selected `Volume Group` and delete the associated LVMVolumeGroup resource automatically.

## I haven't added the `storage.deckhouse.io/enabled=true` LVM tag to the `Volume Group`, but it is there. How is this possible?

This can happen if you created the `LVM Volume Group` using the LVMVolumeGroup resource, in which case the controller will automatically add this LVM tag to the created `LVM Volume Group`. This is also possible if the `Volume Group` or its `Thin-pool` already had the `linstor-*` LVM tag of the `linstor` module.

When you switch from the `linstor` module to the `sds-node-configurator` and `sds-drbd` modules, the `linstor-*` LVM tags are automatically replaced with the `storage.deckhouse.io/enabled=true` LVM tag in the `Volume Group`. This way, the `sds-node-configurator` gains control over these `Volume Groups`.

## How to use the LVMVolumeGroupSet resource to create LVMVolumeGroup?

To create an LVMVolumeGroup using the LVMVolumeGroupSet resource, you need to specify node selectors and a template for the LVMVolumeGroup resources in the LVMVolumeGroupSet specification. Currently, only the `PerNode` strategy is supported. With this strategy, the controller will create one LVMVolumeGroup resource from the template for each node that matches the selector.

Example of an LVMVolumeGroupSet specification:

```yaml
apiVersion: storage.deckhouse.io/v1alpha1
kind: LVMVolumeGroupSet
metadata:
  name: my-lvm-volume-group-set
spec:
  # Only the PerNode strategy is currently supported
  strategy: PerNode
  # Each node matching this selector gets one LVMVolumeGroup resource
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/worker: ""
  # Template for the created LVMVolumeGroup resources; the field names
  # below are illustrative, check the CR reference for the exact schema
  lvmVolumeGroupTemplate:
    spec:
      type: Local
      actualVGNameOnTheNode: vg-data
```

