Updates node_pool to use kubeconfig on control plane node to delete node from cluster upon destroy #90

Open
wants to merge 6 commits into base: main
Conversation

jmarhee
Contributor

@jmarhee jmarhee commented May 25, 2021

…ode from cluster upon destroy

Signed-off-by: Joseph D. Marhee [email protected]

This is meant to address #79.

The flow here is that it logs into the control plane and runs kubectl there to remove the node in question. Because variables like controller_address and the SSH key path cannot be accessed via self, I stored this information in custom_data. My goal was to make the process self-contained within Terraform, rather than having the user supply a kubeconfig path and running this locally, but that approach is turning out to be potentially inefficient, and it may break if there's a connectivity issue.

If I instead assume the user has a kubeconfig, then I can skip the control plane step entirely and just store the user-supplied kubeconfig path in that field, which is the approach #79 seemed to be suggesting.
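For illustration, a destroy-time provisioner can only reference `self` (not `var` or other resources), which is why values like the kubeconfig path have to be stashed somewhere the resource itself carries. A minimal sketch of the user-supplied-kubeconfig approach, using a hypothetical `null_resource` with the common `triggers` workaround; the variable and node names are illustrative, not this PR's actual code:

```hcl
# Hypothetical sketch, not the PR's code. Assumes the user supplies
# a path to an active kubeconfig; names are illustrative.
variable "kubeconfig_path" {
  type        = string
  description = "Path to an active kubeconfig used to remove nodes at destroy time"
  default     = ""
}

resource "null_resource" "node_cleanup" {
  # Destroy provisioners can only read `self`, so the values they
  # need are copied into triggers at create time.
  triggers = {
    kubeconfig = var.kubeconfig_path
    node_name  = "pool-node-0" # illustrative node name
  }

  provisioner "local-exec" {
    when       = destroy
    on_failure = continue # best-effort: destruction proceeds even if kubectl fails
    command    = "kubectl --kubeconfig ${self.triggers.kubeconfig} delete node ${self.triggers.node_name}"
  }
}
```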

Joseph D. Marhee added 2 commits May 25, 2021 16:28
@jmarhee jmarhee marked this pull request as draft May 25, 2021 21:46
@jmarhee
Contributor Author

jmarhee commented May 25, 2021

I didn't realize I couldn't re-extract the custom_data value the way I thought I'd be able to (it isn't an exported attribute). Instead, I'm going to assume the user has an active kubeconfig on hand at destroy time, and make the operation non-fatal for the time being if it fails for whatever reason (best-effort), so I can at least test this behavior properly. If the user doesn't have one, it'll behave as it does now and not run any kubectl operations.

Possibly return the kubeconfig to the user when it's created, as a cue to have it on hand.
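If the module did return the kubeconfig, a hypothetical output along these lines would surface it. This assumes the rendered kubeconfig lives in a local named `kubeconfig_content`, which is an illustrative assumption, not this module's actual structure:

```hcl
# Hypothetical output; `local.kubeconfig_content` is assumed to exist.
output "kubeconfig" {
  description = "Cluster kubeconfig; keep it on hand so destroy-time kubectl operations can use it"
  value       = local.kubeconfig_content
  sensitive   = true # keep credentials out of plain CLI output
}
```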

Joseph D. Marhee added 3 commits May 25, 2021 22:34
…local kubeconfig, rather than remote.

Signed-off-by: Joseph D. Marhee <[email protected]>
@jmarhee
Contributor Author

jmarhee commented May 26, 2021

I ran this successfully both when scaling a pool and when targeting the pool module. The behavior is basically that it attempts to drain and cordon, provides the administrative step if unsuccessful, and completes destruction as before. This part could use a little refinement, but it works as described.
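The best-effort behavior described above can be sketched as a destroy-time provisioner that cordons and drains, falls back to printing the manual step, and never blocks destruction (a hypothetical sketch, not the PR's actual code; resource and node names are illustrative):

```hcl
# Hypothetical sketch of the described behavior; names are illustrative.
resource "null_resource" "drain_on_destroy" {
  triggers = {
    node_name = "pool-node-0" # illustrative node name
  }

  provisioner "local-exec" {
    when       = destroy
    on_failure = continue # destruction completes even if drain fails
    command    = <<-EOT
      kubectl cordon ${self.triggers.node_name} && \
      kubectl drain ${self.triggers.node_name} --ignore-daemonsets --delete-emptydir-data || \
      echo "Drain failed; remove ${self.triggers.node_name} manually with 'kubectl delete node'."
    EOT
  }
}
```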

@jmarhee jmarhee requested a review from displague May 26, 2021 07:04
@jmarhee jmarhee marked this pull request as ready for review May 26, 2021 07:32