Fix aggressive lvRemove and vgRemove #418
Changes from 1 commit
@@ -43,7 +43,7 @@ type lvsLogicalVolume struct {
 	Size string `json:"lv_size"`
 }

-func lvsListVolumes(ctx context.Context, log logr.Logger) ([]lvsLogicalVolume, error) {
+func LvsListVolumes(ctx context.Context, log logr.Logger) ([]lvsLogicalVolume, error) {
 	output, err := command.Run("lvs --nolock --reportformat json", log)
 	if err != nil {
 		return nil, fmt.Errorf("could not list logical volumes: %w", err)

I had to make this public so …

You could also make a receiver function for the VolumeGroup (LogicalVolumeCount() or something) so you don't have to make this public. Not a big deal though.

That's way better/cleaner. I'll do that.
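A minimal sketch of what that receiver approach might look like, keeping lvsListVolumes unexported and living in the same file. The VolumeGroup struct, its Name field, and a vg_name-backed VGName field on lvsLogicalVolume are assumptions for illustration, not taken from the actual patch; the repo's types may differ.

```go
// Hypothetical receiver on VolumeGroup so lvsListVolumes can stay unexported.
// VolumeGroup.Name and lvsLogicalVolume.VGName are assumed fields.
func (vg *VolumeGroup) LogicalVolumeCount(ctx context.Context, log logr.Logger) (int, error) {
	volumes, err := lvsListVolumes(ctx, log)
	if err != nil {
		return 0, fmt.Errorf("could not count logical volumes: %w", err)
	}

	count := 0
	for _, lv := range volumes {
		// Only count logical volumes that belong to this volume group.
		if lv.VGName == vg.Name {
			count++
		}
	}
	return count, nil
}
```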
Should this be true since it destroyed the LV?
To me, hitting this line means that there is another LV that is still using the VG. This LV is now gone, but there are still others.
Do I have that wrong?
That's right. But because we removed the block device associated with this allocation (the logical volume), I think we should return true here. The last allocation to remove the LV will clean up the VG and PVs as well, but I don't think the caller of this function should care about that distinction.
I didn't verify, but everywhere Destroy() is called, the `ran` return value probably just controls whether a log message is printed. So either way the stakes are low :)
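To make the suggested behavior concrete, here is a rough, hypothetical sketch of the Destroy() flow being discussed. None of these names (LogicalVolume, VolumeGroup, the LogicalVolumeCount helper from the sketch above, the exact lvremove/vgremove invocations, the Destroy signature) are taken from the actual patch; the point is only that the function reports ran=true once this allocation's LV is gone, whether or not it was also the last LV and therefore cleaned up the VG.

```go
// Hypothetical sketch: Destroy removes this allocation's LV and only tears
// down the VG when no other LVs remain. It returns true either way, because
// this allocation's block device was destroyed.
func (lv *LogicalVolume) Destroy(ctx context.Context, log logr.Logger) (bool, error) {
	// Remove this allocation's logical volume.
	if _, err := command.Run(fmt.Sprintf("lvremove --yes %s/%s", lv.VGName, lv.Name), log); err != nil {
		return false, fmt.Errorf("could not remove logical volume: %w", err)
	}

	// Ask lvs how many LVs are still left in the volume group.
	remaining, err := lv.VolumeGroup.LogicalVolumeCount(ctx, log)
	if err != nil {
		return true, err // the LV is gone even though the count failed
	}
	if remaining > 0 {
		// Another LV is still using the VG; leave the VG (and PVs) alone.
		// Still ran=true: this allocation's block device was removed.
		return true, nil
	}

	// Last one out cleans up the VG as well. (PV cleanup elided for brevity.)
	if _, err := command.Run("vgremove --yes "+lv.VGName, log); err != nil {
		return true, fmt.Errorf("could not remove volume group: %w", err)
	}
	return true, nil
}
```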
Ah, I see what you're saying now. I'll change that.

This got me thinking, though: what if we remove all the LVs, and then somehow on the last one we don't remove the VG because `lvs` is stale or something? A. Is that possible, and B. are we going to leave a VG behind in that scenario?
I think we can trust `lvs`. Even if something weird does happen, we'll still delete the NVMe namespaces, so we won't really be stranding any space. LVM will be a little confused because the block devices got torn down underneath it, but even that will eventually get cleaned up.
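For reference, this is roughly the report shape that `lvs --nolock --reportformat json` is being trusted to produce, and how it could be decoded. The wrapper struct, field set, and example output below are illustrative rather than taken from the repository; lvm2's JSON report nests the rows under report[].lv, with values as strings (the diff above only confirms the lv_size field).

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Illustrative row type; the repo's lvsLogicalVolume may carry different fields.
type lvsLogicalVolume struct {
	Name   string `json:"lv_name"`
	VGName string `json:"vg_name"`
	Size   string `json:"lv_size"`
}

// Wrapper matching the assumed {"report":[{"lv":[...]}]} shape of the lvs JSON report.
type lvsReport struct {
	Report []struct {
		LV []lvsLogicalVolume `json:"lv"`
	} `json:"report"`
}

func main() {
	// Example output of the kind `lvs --nolock --reportformat json` emits.
	output := `{"report":[{"lv":[{"lv_name":"rabbit","vg_name":"vg0","lv_size":"10.00g"}]}]}`

	var report lvsReport
	if err := json.Unmarshal([]byte(output), &report); err != nil {
		panic(err)
	}
	for _, r := range report.Report {
		for _, lv := range r.LV {
			fmt.Printf("%s/%s %s\n", lv.VGName, lv.Name, lv.Size)
		}
	}
}
```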