etcd controller shall not ignore ghost etcd member in health check #11231
Comments
It is normal for a node to be added to etcd before the kubelet comes up and creates a Kubernetes node object. Are you accounting for this in your error state? This should be handled by member promotion, and on the other side, the finalizer on node objects should prevent nodes from being deleted without being removed from etcd. Can you provide steps to reproduce this?
Good point. CAPI may consider this an abnormal state and hold any management operation until the cluster is stable (every etcd member mapped to a node).
It seems that the issue we were facing was caused by stopping the VM before removing the node, not by an etcd member leak. When we remove the etcd member first, then stop the VM and delete the node, the rollout seems to be OK. We do hope there is a way for CAPI to learn the etcd member list as another safety check, but currently the etcd controller doesn't surface that info. Another idea is to expose the etcd member list as a node annotation; what do you think @brandond ?
You should be able to delete the node from the cluster (both the kubernetes node, and the etcd member) even when the node is offline. It may take a bit longer if you're waiting for the etcd controller lease to transition to another node to trigger the cluster member remove and cleanup of the finalizer on the node object... but it should succeed. Can you provide steps to reproduce whatever problem you're running into here?
I'm not sure this makes sense to me. You'd have global etcd cluster state, stored as an annotation on individual node objects? If for some reason you need to check etcd cluster membership, I'd probably recommend just querying that directly from etcd (if you must), or via the supervisor API, which is available anonymously when accessed via localhost (see lines 701 to 703 at 917761c):
root@systemd-node-1:/# curl -kS https://127.0.0.1:6443/db/info
{"members":[{"ID":4682922252190157995,"name":"systemd-node-1-103abf0d","peerURLs":["https://172.17.0.4:2380"],"clientURLs":["https://172.17.0.4:2379"]}]} |
steps:
The problem is: if we stop the VM first, the cluster loses quorum immediately, and then there is no chance to delete the node. That's why we need to delete the etcd member before stopping the VM.
Indeed, it doesn't seem quite right to surface etcd cluster-level info on the node.
You should never have a cluster with 2 etcd nodes. As you noted, loss of either node will cause a total cluster outage, and you'll need to reset the etcd cluster to a single node to recover. This is what the --cluster-reset flag is for.
You can either delete the node object, or add the pre-delete annotation to the node to remove it from the etcd cluster without deleting the node resource itself. This annotation is how Rancher prepares nodes for deletion.
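A minimal sketch of that flow, assuming the annotation takes the value "true" and that the etcd controller removes the member shortly after it is set (both assumptions; verify against the controller's actual behavior):
kubectl annotate node <node-name> etcd.k3s.cattle.io/remove=true
# once the member has been removed from the etcd cluster, the node object can be deleted:
kubectl delete node <node-name>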
A cluster with 2 control-plane nodes is the intermediate state when we test scaling down from N control-plane nodes to 1 via CAPI.
Thanks, we are using the etcd.k3s.cattle.io/remove annotation. What else does pre-deletion do besides removing the etcd member?
That's all it does. If you're scaling down to 1, you might consider just running a cluster-reset instead of stepping through all the intermediate node counts one by one.
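For reference, a hedged sketch of that reset path on the surviving server (the systemd unit name is an assumption and can differ per install):
systemctl stop k3s
k3s server --cluster-reset
systemctl start k3s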
Environmental Info:
K3s Version:
Node(s) CPU architecture, OS, and Version:
Cluster Configuration:
k3s cluster with embedded etcd
Describe the bug:
Currently, when there is a ghost etcd member (a member that does not belong to any k3s node), the etcd controller chooses to ignore it.
k3s/pkg/etcd/etcd.go, lines 1115 to 1117 at 4adcdf8
This could be dangerous. Suppose there is a k3s cluster with 3 control-plane nodes and embedded etcd, plus 1 ghost member that is already offline (possibly caused by unclean node removal). The cluster is then actually in an unhealthy state and cannot tolerate another control-plane node going offline: etcd sees 4 members and needs 3 of them for quorum, so losing one more node leaves only 2 healthy members and the cluster loses quorum.
We are working on the k3s cluster-api provider. When doing a rollout update on a k3s cluster, we encountered an issue where a node was not removed cleanly and its etcd member was left behind, which later caused quorum loss in the middle of the rollout.
Although it can be solved by leveraging the etcd.k3s.cattle.io/remove annotation, it would be better to have a second guard to ensure no ghost etcd member exists before doing any cluster management operation (the CAPI default control-plane implementation has this check). See the previous discussion in #9841.
Steps To Reproduce:
kubectl delete node
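Based on the discussion above, a hedged sketch of one way to end up with a ghost member and observe the current behavior (node names and the exact sequence are assumptions, not the reporter's verbatim steps):
# power off one server VM without removing its etcd member first, then from a surviving server:
kubectl delete node <stopped-node>
# the deleted node's etcd member can be left behind; list members via the supervisor API:
curl -ksS https://127.0.0.1:6443/db/info
# the surviving nodes still report EtcdIsVoter as True:
kubectl get node <surviving-node> -o jsonpath='{.status.conditions[?(@.type=="EtcdIsVoter")].status}'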
Expected behavior:
The node's EtcdIsVoter condition should be false, with a message identifying the ghost member.
Actual behavior:
The node's EtcdIsVoter condition shows as true.
Proposed fix
Pull request #11232
Additional context / logs: