
Stale NBs are left when deleting the underlying K8S cluster #140

Open · julsemaan opened this issue Nov 8, 2023 · 1 comment

Comments

@julsemaan (Contributor)

General:

  • Have you removed all sensitive information, including but not limited to access keys and passwords?
  • Have you checked to ensure there aren't other open or closed Pull Requests for the same bug/feature/question?

Bug Reporting

When using the Linode CCM in a K8S cluster, deleting the underlying cluster without first deleting the Service leaves a stale NB in the Linode account.

Expected Behavior

Explore having some kind of finalization mechanism to delete the NB when the cluster is being deleted
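
A minimal client-go sketch of what such a finalizer could look like; the finalizer name is hypothetical and this is not the CCM's actual mechanism. Note that a Service finalizer only helps while the cluster's API server still exists, which is exactly what disappears in this scenario:

```go
package cleanup

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// nbFinalizer is a hypothetical finalizer name; the real CCM would define its own.
const nbFinalizer = "linode.example.com/nodebalancer-cleanup"

// ensureFinalizer adds the finalizer to a Service so that, while the cluster
// is still reachable, deleting the Service blocks until the NB is cleaned up.
func ensureFinalizer(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	svc, err := cs.CoreV1().Services(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	for _, f := range svc.Finalizers {
		if f == nbFinalizer {
			return nil // already present
		}
	}
	svc.Finalizers = append(svc.Finalizers, nbFinalizer)
	_, err = cs.CoreV1().Services(ns).Update(ctx, svc, metav1.UpdateOptions{})
	return err
}
```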

Actual Behavior

Stale NBs are left in place

Steps to Reproduce the Problem

  1. Create a K8S cluster with the linode-ccm installed on it
  2. Provision a Service of type LoadBalancer (see the sketch after this list)
  3. Delete the control plane and nodes of the cluster
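
For step 2, a minimal client-go sketch of such a Service; the name, selector, and ports are arbitrary, and a `kubectl apply` with an equivalent manifest works just as well:

```go
package main

import (
	"context"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load kubeconfig from the default path; assumes the cluster from step 1.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	// A Service of type LoadBalancer; the linode-ccm provisions a NB for it.
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "repro-lb"},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeLoadBalancer,
			Selector: map[string]string{"app": "demo"},
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(8080),
			}},
		},
	}
	if _, err := cs.CoreV1().Services("default").Create(
		context.Background(), svc, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}
}
```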

Environment Specifications

This happens for KPP clusters, but it was also reported by externally facing customers who are not using KPP.

Additional Notes

I'm not sure we can address this in the linode-ccm; it may need to be addressed at the platform level. Either way, I wanted to have this logged and tracked somewhere.



@luthermonson (Contributor)

Any thoughts on what we could catch to make this happen? This is happening outside of kube, so we can't use the lifecycles in the CCM... My only thought is that if the VMs are deleted via the Linode API, we'd have to do some reconcile with NodeBalancers, e.g. if their underlying nodes don't exist anymore, we delete the NodeBalancer. Open to suggestions. @schinmai-akamai, any other ideas?
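
A minimal dry-run sketch of that reconcile idea using linodego; the staleness heuristic (all backend addresses point at Linodes that no longer exist), the token handling, and the dry-run gating are assumptions for illustration, not anything the CCM does today:

```go
package main

import (
	"context"
	"log"
	"net/http"
	"os"
	"strings"

	"github.com/linode/linodego"
	"golang.org/x/oauth2"
)

func main() {
	// linodego client setup, per the linodego README.
	tokenSource := oauth2.StaticTokenSource(
		&oauth2.Token{AccessToken: os.Getenv("LINODE_TOKEN")})
	client := linodego.NewClient(&http.Client{
		Transport: &oauth2.Transport{Source: tokenSource},
	})
	ctx := context.Background()

	// Index every IPv4 address of every Linode still in the account.
	instances, err := client.ListInstances(ctx, nil)
	if err != nil {
		log.Fatal(err)
	}
	live := map[string]bool{}
	for _, inst := range instances {
		for _, ip := range inst.IPv4 {
			live[ip.String()] = true
		}
	}

	// A NodeBalancer whose backends all point at addresses of Linodes
	// that no longer exist is treated as stale.
	nbs, err := client.ListNodeBalancers(ctx, nil)
	if err != nil {
		log.Fatal(err)
	}
	for _, nb := range nbs {
		configs, err := client.ListNodeBalancerConfigs(ctx, nb.ID, nil)
		if err != nil {
			log.Fatal(err)
		}
		hasBackends, anyAlive := false, false
		for _, cfg := range configs {
			nodes, err := client.ListNodeBalancerNodes(ctx, nb.ID, cfg.ID, nil)
			if err != nil {
				log.Fatal(err)
			}
			for _, node := range nodes {
				hasBackends = true
				// node.Address is "ip:port"; compare the IP only.
				if live[strings.Split(node.Address, ":")[0]] {
					anyAlive = true
				}
			}
		}
		if hasBackends && !anyAlive {
			log.Printf("NodeBalancer %d looks stale (dry run, not deleting)", nb.ID)
			// Real cleanup would call client.DeleteNodeBalancer(ctx, nb.ID),
			// presumably gated on a tag identifying CCM-owned NodeBalancers.
		}
	}
}
```

One caveat with matching by backend address alone: private IPs can get recycled by unrelated Linodes, so real cleanup would probably need an ownership tag or label on CCM-created NodeBalancers before deleting anything.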
