
etcd controller shall not ignore ghost etcd member in health check #11231

Open
mogliang opened this issue Nov 5, 2024 · 7 comments

mogliang commented Nov 5, 2024

Environmental Info:
K3s Version:

Node(s) CPU architecture, OS, and Version:

Cluster Configuration:
k3s cluster with embedded etcd

Describe the bug:
Currently, when there is a ghost etcd member (a member that does not belong to any k3s node), the etcd controller ignores it in the health check.

k3s/pkg/etcd/etcd.go

Lines 1115 to 1117 in 4adcdf8

if member.Name != name {
continue
}

This could be dangerous. Suppose a k3s cluster with 3 control-plane nodes and embedded etcd also has 1 ghost member that is already offline (possibly caused by unclean node removal). The cluster is then actually in an unhealthy state and cannot tolerate another control-plane node going offline.

We are working on the k3s cluster-api provider. During a rollout update on a k3s cluster, we hit a case where a node was not removed cleanly and left an etcd member behind, which later caused quorum loss in the middle of the rollout.

Although this can be handled by leveraging the etcd.k3s.cattle.io/remove annotation, it would be better to have a second guard that ensures no ghost etcd member exists before performing any cluster management operation (the CAPI default control-plane implementation has this check). See previous discussion #9841
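The proposed guard can be sketched as a small pure function (a hypothetical helper for illustration, not the actual code from the linked PR): given the etcd member names and the known k3s node names, report any member with no matching node.

```go
package main

import "fmt"

// ghostMembers returns the names of etcd members that do not map to any
// known k3s node. A non-empty result would justify reporting the cluster
// as unhealthy instead of silently skipping the member, as this issue
// proposes. (Illustrative sketch only.)
func ghostMembers(memberNames, nodeNames []string) []string {
	nodes := make(map[string]struct{}, len(nodeNames))
	for _, n := range nodeNames {
		nodes[n] = struct{}{}
	}
	var ghosts []string
	for _, m := range memberNames {
		if _, ok := nodes[m]; !ok {
			ghosts = append(ghosts, m)
		}
	}
	return ghosts
}

func main() {
	members := []string{"cp-1-abc123", "cp-2-def456", "cp-3-ghi789"}
	nodes := []string{"cp-1-abc123", "cp-2-def456"} // cp-3 was removed uncleanly
	fmt.Println(ghostMembers(members, nodes))
}
```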

Steps To Reproduce:

  1. create a 3 control-plane node k3s cluster
  2. shut down 1 control-plane machine and delete that node with kubectl delete node

Expected behavior:
The node's EtcdIsVoter condition should be false, with a message identifying the ghost member

Actual behavior:
The node's EtcdIsVoter condition shows as true

Proposed fix:
Pull request #11232

Additional context / logs:


brandond commented Nov 5, 2024

It is normal for a node to be added to etcd before the kubelet comes up and creates a Kubernetes node object. Are you accounting for this in your error state? This should be handled by member promotion, and on the other side, the finalizer on node objects should prevent nodes from being deleted without being removed from etcd.

Can you provide steps to reproduce this?


mogliang commented Nov 5, 2024

It is normal for a node to be added to etcd before the kubelet comes up and creates a Kubernetes node object. Are you accounting for this in your error state?

Good point. CAPI may consider this an abnormal state and hold any management operation until the cluster is stable (all etcd members mapped to nodes).

This should be handled by member promotion, and on the other side, the finalizer on node objects should prevent nodes from being deleted without being removed from etcd.
Can you provide steps to reproduce this?

It seems the issue we were facing was caused by stopping the VM before removing the node, not by an etcd member leak. When we remove the etcd member first, then stop the VM and delete the node, the rollout seems to be OK.

Still, we do hope there is a way for CAPI to learn the etcd member list as an additional safety check, but currently the etcd controller doesn't surface that info.

Another idea is to expose the etcd member list as a node annotation. What do you think @brandond ?


brandond commented Nov 5, 2024

It seems the issue we were facing was caused by stopping the VM before removing the node, not by an etcd member leak. When we remove the etcd member first, then stop the VM and delete the node, the rollout seems to be OK.

You should be able to delete the node from the cluster (both the kubernetes node, and the etcd member) even when the node is offline. It may take a bit longer if you're waiting for the etcd controller lease to transition to another node to trigger the cluster member remove and cleanup of the finalizer on the node object... but it should succeed. Can you provide steps to reproduce whatever problem you're running into here?

Another idea is to expose the etcd member list as a node annotation

I'm not sure this makes sense to me. You'd have global etcd cluster state, stored as an annotation on individual node objects? If for some reason you need to check etcd cluster membership, I'd probably recommend just querying that directly from etcd (if you must), or via the supervisor API - which is available anonymously if accessed via localhost:

k3s/pkg/etcd/etcd.go

Lines 701 to 703 in 917761c

ir := r.Path("/db/info").Subrouter()
ir.Use(auth.IsLocalOrHasRole(e.config, version.Program+":server"))
ir.Handle("", e.infoHandler())

root@systemd-node-1:/# curl -kS https://127.0.0.1:6443/db/info
{"members":[{"ID":4682922252190157995,"name":"systemd-node-1-103abf0d","peerURLs":["https://172.17.0.4:2380"],"clientURLs":["https://172.17.0.4:2379"]}]}


mogliang commented Nov 6, 2024

You should be able to delete the node from the cluster (both the kubernetes node, and the etcd member) even when the node is offline. It may take a bit longer if you're waiting for the etcd controller lease to transition to another node to trigger the cluster member remove and cleanup of the finalizer on the node object... but it should succeed. Can you provide steps to reproduce whatever problem you're running into here?

steps:

  1. create a cluster with 2 control-plane nodes.
  2. stop 1 VM
  3. delete node

The problem is: if we stop the VM first, the cluster loses quorum immediately and there is no chance to delete the node. That's the reason we need to delete the etcd member before stopping the VM.
But if a ghost member exists for some reason, deleting the etcd member first may not be safe either. However, I'm not able to reproduce the condition that produces a ghost member right now.
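The quorum arithmetic behind this: etcd needs floor(n/2)+1 healthy members to accept writes, so a 2-member cluster tolerates zero failures and losing either VM stalls the cluster. A tiny illustration:

```go
package main

import "fmt"

// quorum returns how many etcd members must be healthy for the cluster
// to accept writes: floor(n/2) + 1.
func quorum(n int) int { return n/2 + 1 }

// faultTolerance returns how many members can fail without losing quorum.
func faultTolerance(n int) int { return n - quorum(n) }

func main() {
	for _, n := range []int{1, 2, 3} {
		fmt.Printf("members=%d quorum=%d tolerates=%d failure(s)\n",
			n, quorum(n), faultTolerance(n))
	}
	// members=2 tolerates 0 failures: stopping one VM loses quorum,
	// which is exactly the failure mode described above.
}
```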

I'm not sure this makes sense to me. You'd have global etcd cluster state, stored as an annotation on individual node objects? If for some reason you need to check etcd cluster membership, I'd probably recommend just querying that directly from etcd (if you must), or via the supervisor API - which is available anonymously if accessed via localhost:

Indeed, it does not seem quite right to surface etcd-cluster-level info on a node.
Accessing etcd directly is the solution we take right now in the CAPI k3s provider (see k3s-io/cluster-api-k3s#75), but it is not ideal either: it requires managing the etcd CA cert on the management cluster and creating an etcd proxy pod on the target cluster, which is too heavy. Besides, it bypasses the k3s etcd controller when managing the etcd cluster, which seems wrong from a design perspective.
If the etcd controller could expose etcd membership info, we could avoid direct etcd access from the CAPI management cluster.


brandond commented Nov 6, 2024

create a 2 cp nodes cluster

You should never have a cluster with 2 etcd nodes. As you noted, loss of either node will cause a total cluster outage, and you'll need to reset the etcd cluster to a single node to recover. This is what the --cluster-reset flag is for.

that's the reason why we need delete etcd member first before stop VM.

You can either delete the node object, or add the pre-delete annotation to the node to remove it from the etcd cluster without deleting the node resource itself. This annotation is how Rancher prepares nodes for deletion.


mogliang commented Nov 7, 2024

You should never have a cluster with 2 etcd nodes. As you noted, loss of either node will cause a total cluster outage, and you'll need to reset the etcd cluster to a single node to recover. This is what the --cluster-reset flag is for.

A 2 control-plane node cluster is the intermediate state when we test scaling down from N control-plane nodes to 1 via CAPI.

You can either delete the node object, or add the pre-delete annotation to the node to remove it from the etcd cluster without deleting the node resource itself. This annotation is how Rancher prepares nodes for deletion.

Thanks, we are using the etcd.k3s.cattle.io/remove annotation. What else does the pre-delete annotation do besides removing the etcd member?


brandond commented Nov 7, 2024

That's all it does.

If you're scaling down to 1, you might consider just running a cluster-reset instead of stepping through all the intermediate node counts one by one.

@caroline-suse-rancher caroline-suse-rancher moved this from New to In Triage in K3s Development Nov 14, 2024