feat(reset): add cleanup volumes and cleanup-load-balancers flag #3507
base: main
Conversation
Force-pushed from c7a1d0b to 906bca7
/assign @xmudrii
Force-pushed from 906bca7 to 576aa4a
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment.
Signed-off-by: rajaSahil <[email protected]>
Force-pushed from 576aa4a to 7c32a02
I did some initial review and testing, I'll have some more comments, but let's start with this and then we can follow up on some other stuff.
```diff
@@ -0,0 +1,71 @@
+/*
+Copyright 2020 The KubeOne Authors.
```
Please replace 2020 with 2025 in all files that you added in this PR. :)
```go
package clientutil

import (
```
Take a look here how we order imports:
kubeone/pkg/nodeutils/drain.go (lines 19 to 31 at 8b8d794):

```go
import (
	"context"

	"github.com/sirupsen/logrus"

	"k8c.io/kubeone/pkg/fail"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/kubectl/pkg/drain"
)
```
It should be stdlib -> external dependencies -> k8c.io -> k8s.io
This will fix all the imports:

```sh
go run go.xrstf.de/gimps@latest .
```
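For reference, the imports in the new clientutil file would then end up grouped roughly like this (a sketch based on the identifiers visible in this diff; the exact import set and the placement of sigs.k8s.io are assumptions):

```go
package clientutil

import (
	"context"

	"github.com/sirupsen/logrus"

	"k8c.io/kubeone/pkg/fail"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)
```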
```go
func CleanupLBs(ctx context.Context, logger logrus.FieldLogger, c client.Client) error {
	serviceList := &corev1.ServiceList{}
	if err := c.List(ctx, serviceList); err != nil {
		return fail.KubeClient(err, "failed to list Service.")
```
A couple of things:

- we don't use uppercase letters and capitalization in error messages
- we use the gerund form of the operation for error messages, e.g. `listing services` instead of `failed to list services`
Suggested change:

```diff
-		return fail.KubeClient(err, "failed to list Service.")
+		return fail.KubeClient(err, "listing services")
```
```go
		}
		// Only LoadBalancer services incur charges on cloud providers
		if service.Spec.Type == corev1.ServiceTypeLoadBalancer {
			logger.Infof("Deleting SVC : %s/%s\n", service.Namespace, service.Name)
```
Let's do this instead (because this might spam logs quite a lot):

- Have a message before getting into the loop such as `Cleaning up LoadBalancer Services...`
- Have this as a debug level message, e.g.:
Suggested change:

```diff
-			logger.Infof("Deleting SVC : %s/%s\n", service.Namespace, service.Name)
+			logger.Debugf("Deleting LoadBalancer Service \"%s/%s\"", service.Namespace, service.Name)
```
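Put together, the loop body inside CleanupLBs might then look roughly like this (a sketch, not the exact PR code):

```go
logger.Infoln("Cleaning up LoadBalancer Services...")

for i := range serviceList.Items {
	service := serviceList.Items[i]

	// Only LoadBalancer Services incur charges on cloud providers.
	if service.Spec.Type != corev1.ServiceTypeLoadBalancer {
		continue
	}

	logger.Debugf("Deleting LoadBalancer Service %q", service.Namespace+"/"+service.Name)

	if err := c.Delete(ctx, &service); err != nil {
		return fail.KubeClient(err, "deleting services")
	}
}
```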
```go
func WaitCleanupLbs(ctx context.Context, logger logrus.FieldLogger, c client.Client) error {
	logger.Infoln("Waiting for all load balancer services to get deleted...")
```
Suggested change:

```diff
-	logger.Infoln("Waiting for all load balancer services to get deleted...")
+	logger.Infoln("Waiting for all LoadBalancer Services to get deleted...")
```
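For context, a wait like this can be sketched as a plain poll over the Service list; the polling helper, interval, and timeout below are assumptions rather than the PR's actual implementation:

```go
logger.Infoln("Waiting for all LoadBalancer Services to get deleted...")

// Poll until no Service of type LoadBalancer remains in the cluster.
return wait.PollUntilContextTimeout(ctx, 5*time.Second, 5*time.Minute, true,
	func(ctx context.Context) (bool, error) {
		serviceList := &corev1.ServiceList{}
		if err := c.List(ctx, serviceList); err != nil {
			return false, fail.KubeClient(err, "listing services")
		}

		// Keep polling while any LoadBalancer Service still exists.
		for _, svc := range serviceList.Items {
			if svc.Spec.Type == corev1.ServiceTypeLoadBalancer {
				return false, nil
			}
		}

		return true, nil
	})
```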
```go
	CleanupVolumes       bool
	CleanupLoadBalancers bool
```
If you rename flags, make sure to remove this as well.
```go
	if !s.CleanupLoadBalancers {
		return nil
	}
```
We should make this task conditional instead, see an example here (line 603 at 8b8d794):

```go
Predicate: func(s *state.State) bool { return s.Cluster.CloudProvider.Openstack != nil },
```
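Applied to this PR, the reset task could carry a predicate instead of the early return inside the function. A sketch, assuming the usual Fn/ErrMsg/Predicate task fields and a hypothetical cleanupLoadBalancers task function:

```go
{
	Fn:        cleanupLoadBalancers, // hypothetical name for the task function in this PR
	ErrMsg:    "failed to clean up LoadBalancer Services",
	Predicate: func(s *state.State) bool { return s.CleanupLoadBalancers },
},
```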
```go
		return nil
	}
	var lastErr error
	s.Logger.Infoln("Deleting load balancer services...")
```
Suggested change:

```diff
-	s.Logger.Infoln("Deleting load balancer services...")
+	s.Logger.Infoln("Deleting LoadBalancer Services...")
```
```go
	}
	var lastErr error
	s.Logger.Infoln("Deleting load balancer services...")
	_ = wait.ExponentialBackoff(defaultRetryBackoff(3), func() (bool, error) {
```
We don't need exponential backoff here I think, each task already has 3 times backoff.
```go
		return true, nil
	})

	_ = wait.ExponentialBackoff(defaultRetryBackoff(3), func() (bool, error) {
```
Same here, this shouldn't be needed.
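In other words, the cleanup helpers could be called directly and their errors returned, leaving retries to the task runner. A sketch, with s.Context and s.DynamicClient assumed as the state's context and dynamic client:

```go
// s.Context and s.DynamicClient are assumed state fields; adjust to whatever the PR uses.
if err := clientutil.CleanupLBs(s.Context, s.Logger, s.DynamicClient); err != nil {
	return err
}

return clientutil.WaitCleanupLbs(s.Context, s.Logger, s.DynamicClient)
```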
What this PR does / why we need it:
Cleans up dynamically provisioned and unretained volumes and load balancer services from the cluster before resetting.

Which issue(s) this PR fixes:
Fixes #3394
What type of PR is this?
/kind feature
Does this PR introduce a user-facing change? Then add your Release Note here:
Documentation: