Updates node_pool to use kubeconfig on control plane node to delete node from cluster upon destroy #90
Signed-off-by: Joseph D. Marhee [email protected]
This is meant to address #79.
The flow here is that, on destroy, it logs into the control plane and runs `kubectl` there to remove the node in question. Because variables like `controller_address` and the SSH key path cannot be accessed via `self`, I stored this information in `custom_data`. My goal was to make the process self-contained within Terraform, rather than having the user supply a kubeconfig path, run this locally, etc., but that is turning out to be potentially inefficient, and it may break if there's a connectivity issue. If I instead assume the user has a kubeconfig, I can skip the control plane step entirely and just store the user-supplied kubeconfig path in that field, which is the approach #79 seemed to be suggesting.
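For reference, here is a minimal sketch of the destroy-time flow described above, not the module's actual code: it uses a `null_resource` with `triggers` to stand in for `custom_data` as the place where `controller_address` and the key path are stashed, and the variable names and the drain step are assumptions. The constraint driving this is that destroy-time provisioners may only reference `self` (plus `count.index`/`each.key`), so anything needed at destroy time has to be copied onto the resource itself.

```hcl
# Hypothetical standalone sketch; variable names are illustrative.
variable "node_name" {}
variable "controller_address" {}
variable "ssh_key_path" {}

resource "null_resource" "node_pool_member" {
  # Destroy-time provisioners can only reference self, so the values
  # needed at destroy time are stashed in triggers (standing in for
  # custom_data as described above).
  triggers = {
    node_name          = var.node_name
    controller_address = var.controller_address
    ssh_key_path       = var.ssh_key_path
  }

  provisioner "remote-exec" {
    when = destroy

    # Log into the control plane and run kubectl there.
    connection {
      type        = "ssh"
      host        = self.triggers.controller_address
      user        = "root"
      private_key = file(self.triggers.ssh_key_path)
    }

    inline = [
      "kubectl drain ${self.triggers.node_name} --ignore-daemonsets --force",
      "kubectl delete node ${self.triggers.node_name}",
    ]
  }
}
```

The alternative suggested in #79 would drop the SSH hop entirely and run `kubectl` locally against a user-supplied kubeconfig, roughly like this (again hypothetical; assumes `kubectl` is available on the machine running Terraform):

```hcl
variable "node_name" {}
variable "kubeconfig_path" {}

resource "null_resource" "node_pool_member" {
  triggers = {
    node_name       = var.node_name
    kubeconfig_path = var.kubeconfig_path
  }

  # Remove the node using the user's kubeconfig, no control plane login.
  provisioner "local-exec" {
    when    = destroy
    command = "kubectl --kubeconfig ${self.triggers.kubeconfig_path} delete node ${self.triggers.node_name}"
  }
}
```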