While working on #35 I suddenly had a depressing status report. Logs show nothing unusual, but `kubectl describe` says:
```
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
2h 36s 64 kubelet, gke-eu-west-3-default-pool-c36cf3c6-nfgg Warning FailedMount Unable to mount volumes for pod "pzoo-0_kafka(bd1a380f-5a60-11e7-93e7-42010a84002d)": timeout expired waiting for volumes to attach/mount for pod "kafka"/"pzoo-0". list of unattached/unmounted volumes=[data]
2h 36s 64 kubelet, gke-eu-west-3-default-pool-c36cf3c6-nfgg Warning FailedSync Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "kafka"/"pzoo-0". list of unattached/unmounted volumes=[data]

FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
2h 24s 65 kubelet, gke-eu-west-3-default-pool-c36cf3c6-nfgg Warning FailedMount Unable to mount volumes for pod "kafka-2_kafka(247ecd2f-5afa-11e7-93e7-42010a84002d)": timeout expired waiting for volumes to attach/mount for pod "kafka"/"kafka-2". list of unattached/unmounted volumes=[data]
2h 24s 65 kubelet, gke-eu-west-3-default-pool-c36cf3c6-nfgg Warning FailedSync Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "kafka"/"kafka-2". list of unattached/unmounted volumes=[data]
```
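For reference, a minimal set of checks for this kind of FailedMount, as a sketch (pod, namespace and node names are taken from the events above; the gcloud call may need `--zone`):

```bash
kubectl -n kafka get pvc                               # all claims are Bound here
kubectl -n kafka describe pod pzoo-0                   # shows the FailedMount/FailedSync events above
# Does GCE actually list a persistent disk as attached to the node?
gcloud compute instances describe gke-eu-west-3-default-pool-c36cf3c6-nfgg \
  --format='value(disks[].source)'
```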
Nodes were at 50-60 percent memory usage, so that isn't the problem.
After some time I saw `kubectl delete -f` fail with `error when stopping "zookeeper/50pzoo.yml": timed out waiting for "pzoo" to be synced`.
Interestingly, deleting the statefulsets (or scaling them down) had no effect whatsoever while pods were in this state; they just showed "0 desired | 3 total".
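Roughly what that looked like, as a sketch (the exact invocations are assumptions; the namespace and statefulset names are taken from the events and the delete error above):

```bash
kubectl -n kafka delete statefulset pzoo               # no visible effect while pods were stuck
kubectl -n kafka scale statefulset pzoo --replicas=0   # likewise
kubectl -n kafka describe statefulset pzoo | grep Replicas
# Replicas:  0 desired | 3 total
```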
`kubectl delete pod` caused proper termination of a healthy pod.
All PVCs are bound.
Persistence setup was changed in #33.
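To cross-check the persistence side that #33 introduced, something like this shows the claim template and the backing volumes (a sketch; the statefulset name is taken from the events above):

```bash
kubectl -n kafka get statefulset pzoo -o yaml | grep -A 10 volumeClaimTemplates
kubectl -n kafka get pv                                # the disks backing the "data" claims
```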