Fix aggressive lvRemove and vgRemove #418

Merged: 2 commits merged into master from vgdestroy-fix on Nov 20, 2024
Conversation

bdevcich
Contributor

The default storage profile was only using the $VG_NAME, causing
lvRemove to remove all logical volumes in the volume group.
Additionally, volume groups were being removed without first checking to
see if logical volumes exist in the volume group.

While neither of these issues has any real effect on the removal of the
LVs/VGs themselves, they were causing problems with the new PreUnmount
commands. Only the first logical volume to run the lvRemove command got to
run its PreUnmount commands, since that lvRemove destroyed all of the other
filesystems on the rabbit in that single volume group.

With this change, the PreUnmount command runs on each filesystem before
it is destroyed.

Signed-off-by: Blake Devcich [email protected]

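For illustration only, here is a minimal sketch of the distinction the description draws (not the repository's code; all names and flags here are placeholders): lvremove given only a VG name removes every LV in that group, so removal has to target VG_NAME/LV_NAME, and vgremove should only run once lvs reports no remaining LVs in the group.

```go
package lvmsketch

import (
	"fmt"
	"os/exec"
	"strings"
)

// removeLV removes only this allocation's logical volume. Passing just the
// VG name ("lvremove --yes <vg>") would remove every LV in the group, which
// is the bug described above.
func removeLV(vgName, lvName string) error {
	return exec.Command("lvremove", "--yes", fmt.Sprintf("%s/%s", vgName, lvName)).Run()
}

// removeVGIfEmpty removes the volume group only when lvs reports that no
// logical volumes remain in it.
func removeVGIfEmpty(vgName string) error {
	out, err := exec.Command("lvs", "--noheadings", "-o", "lv_name", vgName).Output()
	if err != nil {
		return err
	}
	if strings.TrimSpace(string(out)) != "" {
		// Other LVs still live in this VG; leave it alone.
		return nil
	}
	return exec.Command("vgremove", "--yes", vgName).Run()
}
```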
@@ -43,7 +43,7 @@ type lvsLogicalVolume struct {
 	Size string `json:"lv_size"`
 }

-func lvsListVolumes(ctx context.Context, log logr.Logger) ([]lvsLogicalVolume, error) {
+func LvsListVolumes(ctx context.Context, log logr.Logger) ([]lvsLogicalVolume, error) {
bdevcich (Contributor Author)

I had to make this public so lvm.go could use it

Contributor

You could also make a receiver function for the VolumeGroup (LogicalVolumeCount() or something) so you don't have to make this public. Not a big deal though.

bdevcich (Contributor Author)

That's way better/cleaner. I'll do that.
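
A rough sketch of that suggestion (assuming the VolumeGroup type has a Name field and lvsLogicalVolume has a VGName field, both visible in this PR's code; the method name is the reviewer's placeholder):

```go
// Hypothetical sketch of the suggested receiver; with this, lvsListVolumes
// can stay unexported. VolumeGroup.Name and lvsLogicalVolume.VGName are
// taken from the code in this PR; everything else is a placeholder.
func (vg *VolumeGroup) LogicalVolumeCount(ctx context.Context, log logr.Logger) (int, error) {
	lvs, err := lvsListVolumes(ctx, log)
	if err != nil {
		return 0, err
	}

	count := 0
	for _, lv := range lvs {
		if lv.VGName == vg.Name {
			count++
		}
	}
	return count, nil
}
```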

}
for _, lv := range lvs {
if lv.VGName == l.VolumeGroup.Name {
return false, nil
Contributor

Should this be true since it destroyed the LV?

bdevcich (Contributor Author)

To me, hitting this line means that there is another LV that is still using the VG. This LV is now gone, but there are still others.

Do I have that wrong?

Contributor

That's right. But because we removed the block device associated with this allocation (the logical volume), I think we should return true here. The last allocation to remove the LV will clean up the VG and PVs as well, but I don't think the caller of this function should care about that distinction.

I didn't verify, but everywhere Destroy() is called, the ran return value probably just controls whether a log message is printed. So either way the stakes are low :)

bdevcich (Contributor Author)

Ah, I see what you're saying now. I'll change that.

This got me thinking, though: suppose we remove all the LVs, but on the last one we somehow don't remove the VG because lvs is stale or something. A. Is that possible, and B. would we leave a VG lying around in that scenario?

Contributor

I think we can trust lvs. Even if something weird does happen, we'll still delete the NVMe namespaces, so we won't really be stranding any space. LVM will be a little confused because the block devices got torn down underneath it, but even that will eventually get cleaned up.
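
Summing up the thread, a hypothetical sketch of the agreed-upon flow (type and helper names are made up; only VolumeGroup.Name, the lvs check, and LogicalVolumeCount come from this PR and the discussion above): Destroy removes its own LV, removes the VG only if no LVs remain, and returns true either way because this allocation's LV was destroyed.

```go
// Hypothetical sketch, not the repository's actual Destroy implementation;
// the LogicalVolume type and the removeLV/removeVG helpers are placeholders.
func (l *LogicalVolume) Destroy(ctx context.Context, log logr.Logger) (bool, error) {
	// Remove only this allocation's LV (lvremove <vg>/<lv>).
	if err := l.removeLV(ctx, log); err != nil {
		return false, err
	}

	count, err := l.VolumeGroup.LogicalVolumeCount(ctx, log)
	if err != nil {
		return true, err
	}
	if count > 0 {
		// Other LVs still use this VG; the last one removed cleans up the VG and PVs.
		return true, nil
	}

	// No LVs remain, so tear down the VG as well.
	return true, l.VolumeGroup.removeVG(ctx, log)
}
```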

bdevcich merged commit 5fbad90 into master on Nov 20, 2024
3 checks passed
bdevcich deleted the vgdestroy-fix branch on November 20, 2024 at 22:21