Fix aggressive lvRemove and vgRemove #418
Conversation
The default storage profile was only using $VG_NAME, causing lvRemove to remove all logical volumes in the volume group. Additionally, volume groups were being removed without first checking whether any logical volumes still existed in the volume group.

While neither of these issues has any real effect on the removal of the LVs/VGs themselves, they were causing issues with the new PreUnmount commands: the first logical volume to run the lvRemove command was the only filesystem to run the PreUnmount commands, since that command destroyed all the other filesystems on the rabbit in that single volume group. With this change, the PreUnmount command runs on each filesystem before it is destroyed.

Signed-off-by: Blake Devcich <[email protected]>
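For context, the aggressiveness comes from how lvremove targets volumes: given only a volume group name, it removes every logical volume in that group, while a vg/lv path removes just the one. A minimal Go sketch of the distinction (removeOneLV and its parameters are hypothetical, not the project's actual helper):

```go
package main

import (
	"context"
	"os/exec"
)

// removeOneLV removes a single logical volume by passing the vg/lv path.
// "lvremove --yes myvg" would remove every LV in myvg, which is the
// aggressive behavior this PR fixes; "lvremove --yes myvg/mylv" removes
// only mylv.
func removeOneLV(ctx context.Context, vgName, lvName string) error {
	return exec.CommandContext(ctx, "lvremove", "--yes", vgName+"/"+lvName).Run()
}
```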
pkg/blockdevice/lvm/lvs.go
Outdated
@@ -43,7 +43,7 @@ type lvsLogicalVolume struct {
 	Size string `json:"lv_size"`
 }

-func lvsListVolumes(ctx context.Context, log logr.Logger) ([]lvsLogicalVolume, error) {
+func LvsListVolumes(ctx context.Context, log logr.Logger) ([]lvsLogicalVolume, error) {
I had to make this public so lvm.go could use it.
You could also make a receiver function for the VolumeGroup (LogicalVolumeCount() or something) so you don't have to make this public. Not a big deal though.
That's way better/cleaner. I'll do that.
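A minimal sketch of that receiver inside pkg/blockdevice/lvm, reusing the package-private lvsListVolumes and the lvsLogicalVolume.VGName field from the diff above (the exact signature is a guess):

```go
// LogicalVolumeCount reports how many logical volumes currently exist in
// this volume group. Sketch only: it assumes VolumeGroup has a Name field
// and leans on the unexported lvsListVolumes helper shown above.
func (vg *VolumeGroup) LogicalVolumeCount(ctx context.Context, log logr.Logger) (int, error) {
	lvs, err := lvsListVolumes(ctx, log)
	if err != nil {
		return 0, err
	}
	count := 0
	for _, lv := range lvs {
		if lv.VGName == vg.Name {
			count++
		}
	}
	return count, nil
}
```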
pkg/blockdevice/lvm.go
Outdated
}
for _, lv := range lvs {
	if lv.VGName == l.VolumeGroup.Name {
		return false, nil
Should this be true since it destroyed the LV?
To me, hitting this line means that there is another LV that is still using the VG. This LV is now gone, but there are still others.
Do I have that wrong?
That's right. But because we removed the block device associated with this allocation (the logical volume), I think we should return true here. The last allocation to remove the LV will clean up the VG and PVs as well, but I don't think the caller of this function should care about that distinction.
I didn't verify, but everywhere Destroy() is called, the ran return value probably just controls whether a log message is printed. So either way the stakes are low :)
Ah, I see what you're saying now. I'll change that.
This got me thinking though: suppose we remove all the LVs, but somehow on the last one we don't remove the VG because lvs is stale or something. A. Is that possible, and B. are we going to leave a VG around in that scenario?
I think we can trust lvs. Even if something weird does happen, we'll still delete the NVMe namespaces, so we won't really be stranding any space. LVM will be a little confused because the block devices got torn down underneath it, but even that will eventually get cleaned up.
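Putting the thread's resolution together, the destroy path would look roughly like this hedged sketch (LvmBlockDevice, l.Name, removeLV, and removeVG are assumed names, not the package's actual API; the count receiver is the one sketched earlier):

```go
// Destroy removes this allocation's logical volume, then tears down the
// volume group only when no other LVs remain in it. Per the discussion,
// ran is true whenever this call removed its own block device, even if
// another allocation is left to clean up the VG and PVs.
func (l *LvmBlockDevice) Destroy(ctx context.Context, log logr.Logger) (ran bool, err error) {
	if err := removeLV(ctx, l.VolumeGroup.Name, l.Name); err != nil { // assumed helper
		return false, err
	}
	count, err := l.VolumeGroup.LogicalVolumeCount(ctx, log)
	if err != nil {
		return true, err
	}
	if count > 0 {
		// Other LVs still live in this VG; the last one out removes it.
		return true, nil
	}
	if err := removeVG(ctx, l.VolumeGroup.Name); err != nil { // assumed helper
		return true, err
	}
	return true, nil
}
```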