CPEM 3.8.0 sometimes fails to start up properly #525

Open

tenyo opened this issue Mar 21, 2024 · 3 comments
Labels
triage/accepted Indicates an issue or PR is ready to be actively worked on.

Comments

@tenyo

tenyo commented Mar 21, 2024

We've noticed a problem after switching to the latest CPEM, 3.8.0: sometimes it won't start up properly on new cluster nodes.
We just see these messages repeated in the log:

I0321 16:21:09.914082       1 node_controller.go:415] Initializing node stage-da-vmaas-vede716-controller-8rfzk with cloud provider
E0321 16:21:10.628988       1 node_controller.go:229] error syncing 'stage-da-vmaas-vede716-controller-8rfzk': failed to get instance metadata for node stage-da-vmaas-vede716-controller-8rfzk: instance not found, requeuing
I0321 16:21:10.629012       1 node_controller.go:415] Initializing node stage-da-vmaas-vede716-worker-9fc8z with cloud provider
E0321 16:21:10.641638       1 node_controller.go:258] Error getting instance metadata for node addresses: instance not found
E0321 16:21:11.124676       1 node_controller.go:258] Error getting instance metadata for node addresses: instance not found
E0321 16:21:11.201226       1 node_controller.go:229] error syncing 'stage-da-vmaas-vede716-worker-9fc8z': failed to get instance metadata for node stage-da-vmaas-vede716-worker-9fc8z: instance not found, requeuing

If I change the version in the cloud-provider-equinix-metal ds to 3.7.0, it starts up fine; if I then change the version back to 3.8.0, it also starts up successfully.
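
For reference, flipping the version amounts to something like this (a sketch; the DaemonSet name, container name, and image repository are assumptions based on the default deploy manifests, so adjust to your setup):

$ # roll the DaemonSet back to 3.7.0, then wait for the rollout
$ kubectl -n kube-system set image daemonset/cloud-provider-equinix-metal \
    cloud-provider-equinix-metal=equinix/cloud-provider-equinix-metal:v3.7.0
$ kubectl -n kube-system rollout status daemonset/cloud-provider-equinix-metal

Setting the image tag back to v3.8.0 with the same command is what then starts up successfully.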

I haven't been able to reproduce this reliably or tie it to a specific configuration; it just seems to happen sporadically.

PACKNGO_DEBUG=1 doesn't seem to have any effect, but here's the output from the logs with --v=5:

$ sudo kubectl -n kube-system logs -f cloud-provider-equinix-metal-cz4zw
I0321 16:23:33.204041       1 flags.go:64] FLAG: --allocate-node-cidrs="false"
I0321 16:23:33.204089       1 flags.go:64] FLAG: --allow-untagged-cloud="false"
I0321 16:23:33.204094       1 flags.go:64] FLAG: --authentication-kubeconfig=""
I0321 16:23:33.204099       1 flags.go:64] FLAG: --authentication-skip-lookup="true"
I0321 16:23:33.204103       1 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="10s"
I0321 16:23:33.204107       1 flags.go:64] FLAG: --authentication-tolerate-lookup-failure="false"
I0321 16:23:33.204110       1 flags.go:64] FLAG: --authorization-always-allow-paths="[/healthz,/readyz,/livez]"
I0321 16:23:33.204118       1 flags.go:64] FLAG: --authorization-kubeconfig=""
I0321 16:23:33.204121       1 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="10s"
I0321 16:23:33.204124       1 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="10s"
I0321 16:23:33.204127       1 flags.go:64] FLAG: --bind-address="0.0.0.0"
I0321 16:23:33.204132       1 flags.go:64] FLAG: --cert-dir=""
I0321 16:23:33.204135       1 flags.go:64] FLAG: --cidr-allocator-type="RangeAllocator"
I0321 16:23:33.204139       1 flags.go:64] FLAG: --client-ca-file=""
I0321 16:23:33.204142       1 flags.go:64] FLAG: --cloud-config="/etc/cloud-sa/cloud-sa.json"
I0321 16:23:33.204145       1 flags.go:64] FLAG: --cloud-provider="equinixmetal"
I0321 16:23:33.204148       1 flags.go:64] FLAG: --cluster-cidr=""
I0321 16:23:33.204151       1 flags.go:64] FLAG: --cluster-name="kubernetes"
I0321 16:23:33.204154       1 flags.go:64] FLAG: --concurrent-service-syncs="1"
I0321 16:23:33.204158       1 flags.go:64] FLAG: --configure-cloud-routes="true"
I0321 16:23:33.204162       1 flags.go:64] FLAG: --contention-profiling="false"
I0321 16:23:33.204165       1 flags.go:64] FLAG: --controller-start-interval="0s"
I0321 16:23:33.204167       1 flags.go:64] FLAG: --controllers="[*]"
I0321 16:23:33.204172       1 flags.go:64] FLAG: --enable-leader-migration="false"
I0321 16:23:33.204175       1 flags.go:64] FLAG: --external-cloud-volume-plugin=""
I0321 16:23:33.204178       1 flags.go:64] FLAG: --feature-gates=""
I0321 16:23:33.204182       1 flags.go:64] FLAG: --help="false"
I0321 16:23:33.204185       1 flags.go:64] FLAG: --http2-max-streams-per-connection="0"
I0321 16:23:33.204190       1 flags.go:64] FLAG: --kube-api-burst="30"
I0321 16:23:33.204194       1 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
I0321 16:23:33.204197       1 flags.go:64] FLAG: --kube-api-qps="20"
I0321 16:23:33.204210       1 flags.go:64] FLAG: --kubeconfig=""
I0321 16:23:33.204213       1 flags.go:64] FLAG: --leader-elect="true"
I0321 16:23:33.204217       1 flags.go:64] FLAG: --leader-elect-lease-duration="15s"
I0321 16:23:33.204221       1 flags.go:64] FLAG: --leader-elect-renew-deadline="10s"
I0321 16:23:33.204225       1 flags.go:64] FLAG: --leader-elect-resource-lock="leases"
I0321 16:23:33.204228       1 flags.go:64] FLAG: --leader-elect-resource-name="cloud-controller-manager"
I0321 16:23:33.204232       1 flags.go:64] FLAG: --leader-elect-resource-namespace="kube-system"
I0321 16:23:33.204236       1 flags.go:64] FLAG: --leader-elect-retry-period="2s"
I0321 16:23:33.204240       1 flags.go:64] FLAG: --leader-migration-config=""
I0321 16:23:33.204243       1 flags.go:64] FLAG: --log-flush-frequency="5s"
I0321 16:23:33.204248       1 flags.go:64] FLAG: --master=""
I0321 16:23:33.204252       1 flags.go:64] FLAG: --min-resync-period="12h0m0s"
I0321 16:23:33.204256       1 flags.go:64] FLAG: --node-monitor-period="5s"
I0321 16:23:33.204259       1 flags.go:64] FLAG: --node-status-update-frequency="5m0s"
I0321 16:23:33.204263       1 flags.go:64] FLAG: --node-sync-period="0s"
I0321 16:23:33.204267       1 flags.go:64] FLAG: --permit-address-sharing="false"
I0321 16:23:33.204270       1 flags.go:64] FLAG: --permit-port-sharing="false"
I0321 16:23:33.204274       1 flags.go:64] FLAG: --profiling="true"
I0321 16:23:33.204277       1 flags.go:64] FLAG: --requestheader-allowed-names="[]"
I0321 16:23:33.204288       1 flags.go:64] FLAG: --requestheader-client-ca-file=""
I0321 16:23:33.204291       1 flags.go:64] FLAG: --requestheader-extra-headers-prefix="[x-remote-extra-]"
I0321 16:23:33.204296       1 flags.go:64] FLAG: --requestheader-group-headers="[x-remote-group]"
I0321 16:23:33.204302       1 flags.go:64] FLAG: --requestheader-username-headers="[x-remote-user]"
I0321 16:23:33.204307       1 flags.go:64] FLAG: --route-reconciliation-period="10s"
I0321 16:23:33.204310       1 flags.go:64] FLAG: --secure-port="10258"
I0321 16:23:33.204314       1 flags.go:64] FLAG: --tls-cert-file=""
I0321 16:23:33.204318       1 flags.go:64] FLAG: --tls-cipher-suites="[]"
I0321 16:23:33.204323       1 flags.go:64] FLAG: --tls-min-version=""
I0321 16:23:33.204327       1 flags.go:64] FLAG: --tls-private-key-file=""
I0321 16:23:33.204330       1 flags.go:64] FLAG: --tls-sni-cert-key="[]"
I0321 16:23:33.204337       1 flags.go:64] FLAG: --use-service-account-credentials="false"
I0321 16:23:33.204341       1 flags.go:64] FLAG: --v="5"
I0321 16:23:33.204344       1 flags.go:64] FLAG: --version="false"
I0321 16:23:33.204352       1 flags.go:64] FLAG: --vmodule=""
I0321 16:23:34.261045       1 serving.go:348] Generated self-signed cert in-memory
W0321 16:23:34.261071       1 client_config.go:618] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0321 16:23:34.627360       1 config.go:210] authToken: '<masked>'
I0321 16:23:34.627376       1 config.go:210] projectID: '049798ab-0320-4ce8-ad57-ef7742bd144f'
I0321 16:23:34.627381       1 config.go:210] loadbalancer config: disabled
I0321 16:23:34.627386       1 config.go:210] metro: ''
I0321 16:23:34.627390       1 config.go:210] facility: ''
I0321 16:23:34.627394       1 config.go:210] local ASN: '65000'
I0321 16:23:34.627398       1 config.go:210] Elastic IP Tag: ''
I0321 16:23:34.627402       1 config.go:210] API Server Port: '0'
I0321 16:23:34.627406       1 config.go:210] BGP Node Selector: ''
I0321 16:23:34.627410       1 config.go:210] Load Balancer ID: ''
I0321 16:23:34.627436       1 cloud.go:152] called HasClusterID
I0321 16:23:34.627457       1 controllermanager.go:152] Version: v3.8.0
I0321 16:23:34.627523       1 healthz.go:176] Installing health checkers for (/healthz): "leaderElection"
I0321 16:23:34.629240       1 tlsconfig.go:200] "Loaded serving cert" certName="Generated self signed cert" certDetail="\"localhost@1711038214\" [serving] validServingFor=[127.0.0.1,localhost,localhost] issuer=\"localhost-ca@1711038213\" (2024-03-21 15:23:33 +0000 UTC to 2025-03-21 15:23:33 +0000 UTC (now=2024-03-21 16:23:34.629195596 +0000 UTC))"
I0321 16:23:34.629785       1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1711038214\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1711038214\" (2024-03-21 15:23:34 +0000 UTC to 2025-03-21 15:23:34 +0000 UTC (now=2024-03-21 16:23:34.629749204 +0000 UTC))"
I0321 16:23:34.629811       1 secure_serving.go:213] Serving securely on [::]:10258
I0321 16:23:34.630013       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0321 16:23:34.630101       1 leaderelection.go:248] attempting to acquire leader lease kube-system/cloud-controller-manager...
I0321 16:23:34.643143       1 leaderelection.go:352] lock is held by stage-da-vmaas-vede716-controller-8rfzk_75002a10-94d0-4c02-a3f4-1e549ff30bbe and has not yet expired
I0321 16:23:34.643170       1 leaderelection.go:253] failed to acquire lease kube-system/cloud-controller-manager
I0321 16:23:36.998504       1 leaderelection.go:352] lock is held by stage-da-vmaas-vede716-controller-8rfzk_75002a10-94d0-4c02-a3f4-1e549ff30bbe and has not yet expired
I0321 16:23:36.998540       1 leaderelection.go:253] failed to acquire lease kube-system/cloud-controller-manager
I0321 16:23:40.498467       1 leaderelection.go:352] lock is held by stage-da-vmaas-vede716-controller-8rfzk_75002a10-94d0-4c02-a3f4-1e549ff30bbe and has not yet expired
I0321 16:23:40.498496       1 leaderelection.go:253] failed to acquire lease kube-system/cloud-controller-manager
I0321 16:23:44.616003       1 leaderelection.go:352] lock is held by stage-da-vmaas-vede716-controller-8rfzk_75002a10-94d0-4c02-a3f4-1e549ff30bbe and has not yet expired
I0321 16:23:44.616034       1 leaderelection.go:253] failed to acquire lease kube-system/cloud-controller-manager
I0321 16:23:48.354069       1 leaderelection.go:352] lock is held by stage-da-vmaas-vede716-controller-8rfzk_75002a10-94d0-4c02-a3f4-1e549ff30bbe and has not yet expired
I0321 16:23:48.354098       1 leaderelection.go:253] failed to acquire lease kube-system/cloud-controller-manager
I0321 16:23:52.404780       1 leaderelection.go:258] successfully acquired lease kube-system/cloud-controller-manager
I0321 16:23:52.405137       1 event.go:294] "Event occurred" object="kube-system/cloud-controller-manager" fieldPath="" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="stage-da-vmaas-vede716-controller-8rfzk_1fd9719c-b591-4da4-b454-8077b03c839b became leader"
I0321 16:23:52.409017       1 discovery.go:214] Invalidating discovery information
I0321 16:23:52.413232       1 leaderelection.go:278] successfully renewed lease kube-system/cloud-controller-manager
I0321 16:23:52.416460       1 cloud.go:78] called Initialize
I0321 16:23:52.416806       1 eip_controlplane_reconciliation.go:68] newControlPlaneEndpointManager()
I0321 16:23:52.416822       1 eip_controlplane_reconciliation.go:71] EIP Tag is not configured skipping control plane endpoint management.
I0321 16:23:52.416833       1 controlplane_load_balancer_manager.go:38] newControlPlaneLoadBalancerManager()
I0321 16:23:52.416843       1 controlplane_load_balancer_manager.go:41] Load balancer ID is not configured, skipping control plane load balancer management
I0321 16:23:52.416853       1 bgp.go:31] bgp.init(): enabling BGP on project
I0321 16:23:52.743973       1 bgp.go:35] bgp.init(): BGP enabled
I0321 16:23:52.744007       1 loadbalancers.go:70] loadBalancers.init(): no loadbalancer implementation config, skipping
I0321 16:23:52.744025       1 cloud.go:104] Initialize of cloud provider complete
I0321 16:23:52.744044       1 controllermanager.go:292] Starting "service"
I0321 16:23:52.744658       1 cloud.go:109] called LoadBalancer
E0321 16:23:52.744676       1 core.go:93] Failed to start service controller: the cloud provider does not support external load balancers
W0321 16:23:52.744701       1 controllermanager.go:299] Skipping "service"
I0321 16:23:52.744715       1 controllermanager.go:292] Starting "cloud-node"
I0321 16:23:52.745008       1 cloud.go:115] called Instances
I0321 16:23:52.745019       1 cloud.go:121] called InstancesV2
I0321 16:23:52.745116       1 controllermanager.go:311] Started "cloud-node"
I0321 16:23:52.745133       1 controllermanager.go:292] Starting "cloud-node-lifecycle"
I0321 16:23:52.745290       1 node_controller.go:157] Sending events to api server.
I0321 16:23:52.745402       1 cloud.go:115] called Instances
I0321 16:23:52.745406       1 node_controller.go:166] Waiting for informer caches to sync
I0321 16:23:52.745412       1 cloud.go:121] called InstancesV2
I0321 16:23:52.745445       1 controllermanager.go:311] Started "cloud-node-lifecycle"
I0321 16:23:52.745472       1 healthz.go:176] Installing health checkers for (/healthz): "leaderElection","cloud-node","cloud-node-lifecycle"
I0321 16:23:52.745566       1 node_lifecycle_controller.go:113] Sending events to api server
I0321 16:23:52.747684       1 reflector.go:221] Starting reflector *v1.Node (1m40s) from pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169
I0321 16:23:52.747716       1 reflector.go:257] Listing and watching *v1.Node from pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169
I0321 16:23:52.747686       1 reflector.go:221] Starting reflector *v1.Service (30s) from pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169
I0321 16:23:52.747767       1 reflector.go:257] Listing and watching *v1.Service from pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169
I0321 16:23:52.846236       1 shared_informer.go:300] caches populated
I0321 16:23:52.846347       1 node_controller.go:415] Initializing node stage-da-vmaas-vede716-controller-8rfzk with cloud provider
I0321 16:23:52.846453       1 cloud.go:121] called InstancesV2
I0321 16:23:52.846465       1 cloud.go:121] called InstancesV2
I0321 16:23:52.846502       1 devices.go:167] called deviceByName with projectID 049798ab-0320-4ce8-ad57-ef7742bd144f nodeName stage-da-vmaas-vede716-controller-8rfzk
I0321 16:23:52.848896       1 cloud.go:121] called InstancesV2
I0321 16:23:52.848920       1 devices.go:167] called deviceByName with projectID 049798ab-0320-4ce8-ad57-ef7742bd144f nodeName stage-da-vmaas-vede716-controller-8rfzk
I0321 16:23:53.494100       1 devices.go:184] No device found for nodeName stage-da-vmaas-vede716-controller-8rfzk
E0321 16:23:53.494192       1 node_controller.go:229] error syncing 'stage-da-vmaas-vede716-controller-8rfzk': failed to get instance metadata for node stage-da-vmaas-vede716-controller-8rfzk: instance not found, requeuing
I0321 16:23:53.494235       1 node_controller.go:415] Initializing node stage-da-vmaas-vede716-worker-9fc8z with cloud provider
I0321 16:23:53.494287       1 cloud.go:121] called InstancesV2
I0321 16:23:53.494298       1 cloud.go:121] called InstancesV2
I0321 16:23:53.494309       1 devices.go:167] called deviceByName with projectID 049798ab-0320-4ce8-ad57-ef7742bd144f nodeName stage-da-vmaas-vede716-worker-9fc8z
I0321 16:23:53.547872       1 devices.go:184] No device found for nodeName stage-da-vmaas-vede716-controller-8rfzk
E0321 16:23:53.547901       1 node_controller.go:258] Error getting instance metadata for node addresses: instance not found
I0321 16:23:53.547911       1 cloud.go:121] called InstancesV2
I0321 16:23:53.547932       1 devices.go:167] called deviceByName with projectID 049798ab-0320-4ce8-ad57-ef7742bd144f nodeName stage-da-vmaas-vede716-worker-9fc8z
I0321 16:23:53.969683       1 devices.go:184] No device found for nodeName stage-da-vmaas-vede716-worker-9fc8z
E0321 16:23:53.969769       1 node_controller.go:229] error syncing 'stage-da-vmaas-vede716-worker-9fc8z': failed to get instance metadata for node stage-da-vmaas-vede716-worker-9fc8z: instance not found, requeuing
I0321 16:23:53.969799       1 node_controller.go:415] Initializing node stage-da-vmaas-vede716-controller-8rfzk with cloud provider
I0321 16:23:53.969841       1 cloud.go:121] called InstancesV2
I0321 16:23:53.969850       1 cloud.go:121] called InstancesV2
I0321 16:23:53.969860       1 devices.go:167] called deviceByName with projectID 049798ab-0320-4ce8-ad57-ef7742bd144f nodeName stage-da-vmaas-vede716-controller-8rfzk
I0321 16:23:54.363303       1 devices.go:184] No device found for nodeName stage-da-vmaas-vede716-worker-9fc8z
E0321 16:23:54.363345       1 node_controller.go:258] Error getting instance metadata for node addresses: instance not found
I0321 16:23:54.425151       1 leaderelection.go:278] successfully renewed lease kube-system/cloud-controller-manager
I0321 16:23:54.433993       1 devices.go:184] No device found for nodeName stage-da-vmaas-vede716-controller-8rfzk
E0321 16:23:54.434088       1 node_controller.go:229] error syncing 'stage-da-vmaas-vede716-controller-8rfzk': failed to get instance metadata for node stage-da-vmaas-vede716-controller-8rfzk: instance not found, requeuing
I0321 16:23:54.434120       1 node_controller.go:415] Initializing node stage-da-vmaas-vede716-worker-9fc8z with cloud provider
I0321 16:23:54.434159       1 cloud.go:121] called InstancesV2
I0321 16:23:54.434168       1 cloud.go:121] called InstancesV2
I0321 16:23:54.434178       1 devices.go:167] called deviceByName with projectID 049798ab-0320-4ce8-ad57-ef7742bd144f nodeName stage-da-vmaas-vede716-worker-9fc8z
I0321 16:23:55.104188       1 devices.go:184] No device found for nodeName stage-da-vmaas-vede716-worker-9fc8z
E0321 16:23:55.104267       1 node_controller.go:229] error syncing 'stage-da-vmaas-vede716-worker-9fc8z': failed to get instance metadata for node stage-da-vmaas-vede716-worker-9fc8z: instance not found, requeuing
I0321 16:23:55.104296       1 node_controller.go:415] Initializing node stage-da-vmaas-vede716-controller-8rfzk with cloud provider
I0321 16:23:55.104338       1 cloud.go:121] called InstancesV2
I0321 16:23:55.104347       1 cloud.go:121] called InstancesV2
I0321 16:23:55.104357       1 devices.go:167] called deviceByName with projectID 049798ab-0320-4ce8-ad57-ef7742bd144f nodeName stage-da-vmaas-vede716-controller-8rfzk
I0321 16:23:55.609663       1 devices.go:184] No device found for nodeName stage-da-vmaas-vede716-controller-8rfzk
E0321 16:23:55.609726       1 node_controller.go:229] error syncing 'stage-da-vmaas-vede716-controller-8rfzk': failed to get instance metadata for node stage-da-vmaas-vede716-controller-8rfzk: instance not found, requeuing
I0321 16:23:55.609748       1 node_controller.go:415] Initializing node stage-da-vmaas-vede716-worker-9fc8z with cloud provider
I0321 16:23:55.609776       1 cloud.go:121] called InstancesV2
I0321 16:23:55.609782       1 cloud.go:121] called InstancesV2
I0321 16:23:55.609790       1 devices.go:167] called deviceByName with projectID 049798ab-0320-4ce8-ad57-ef7742bd144f nodeName stage-da-vmaas-vede716-worker-9fc8z
I0321 16:23:56.084569       1 devices.go:184] No device found for nodeName stage-da-vmaas-vede716-worker-9fc8z
E0321 16:23:56.084661       1 node_controller.go:229] error syncing 'stage-da-vmaas-vede716-worker-9fc8z': failed to get instance metadata for node stage-da-vmaas-vede716-worker-9fc8z: instance not found, requeuing
I0321 16:23:56.084690       1 node_controller.go:415] Initializing node stage-da-vmaas-vede716-controller-8rfzk with cloud provider
I0321 16:23:56.084731       1 cloud.go:121] called InstancesV2
I0321 16:23:56.084740       1 cloud.go:121] called InstancesV2
I0321 16:23:56.084751       1 devices.go:167] called deviceByName with projectID 049798ab-0320-4ce8-ad57-ef7742bd144f nodeName stage-da-vmaas-vede716-controller-8rfzk
I0321 16:23:56.435432       1 leaderelection.go:278] successfully renewed lease kube-system/cloud-controller-manager
I0321 16:23:56.619023       1 devices.go:184] No device found for nodeName stage-da-vmaas-vede716-controller-8rfzk
E0321 16:23:56.619109       1 node_controller.go:229] error syncing 'stage-da-vmaas-vede716-controller-8rfzk': failed to get instance metadata for node stage-da-vmaas-vede716-controller-8rfzk: instance not found, requeuing
I0321 16:23:56.619142       1 node_controller.go:415] Initializing node stage-da-vmaas-vede716-worker-9fc8z with cloud provider
I0321 16:23:56.619184       1 cloud.go:121] called InstancesV2
I0321 16:23:56.619195       1 cloud.go:121] called InstancesV2
I0321 16:23:56.619206       1 devices.go:167] called deviceByName with projectID 049798ab-0320-4ce8-ad57-ef7742bd144f nodeName stage-da-vmaas-vede716-worker-9fc8z
I0321 16:23:57.196015       1 devices.go:184] No device found for nodeName stage-da-vmaas-vede716-worker-9fc8z
E0321 16:23:57.196088       1 node_controller.go:229] error syncing 'stage-da-vmaas-vede716-worker-9fc8z': failed to get instance metadata for node stage-da-vmaas-vede716-worker-9fc8z: instance not found, requeuing
I0321 16:23:57.196118       1 node_controller.go:415] Initializing node stage-da-vmaas-vede716-controller-8rfzk with cloud provider
I0321 16:23:57.196158       1 cloud.go:121] called InstancesV2
I0321 16:23:57.196167       1 cloud.go:121] called InstancesV2
I0321 16:23:57.196177       1 devices.go:167] called deviceByName with projectID 049798ab-0320-4ce8-ad57-ef7742bd144f nodeName stage-da-vmaas-vede716-controller-8rfzk
I0321 16:23:57.682947       1 devices.go:184] No device found for nodeName stage-da-vmaas-vede716-controller-8rfzk
E0321 16:23:57.683006       1 node_controller.go:229] error syncing 'stage-da-vmaas-vede716-controller-8rfzk': failed to get instance metadata for node stage-da-vmaas-vede716-controller-8rfzk: instance not found, requeuing
I0321 16:23:57.683029       1 node_controller.go:415] Initializing node stage-da-vmaas-vede716-worker-9fc8z with cloud provider
I0321 16:23:57.683082       1 cloud.go:121] called InstancesV2
I0321 16:23:57.683099       1 cloud.go:121] called InstancesV2
I0321 16:23:57.683917       1 devices.go:167] called deviceByName with projectID 049798ab-0320-4ce8-ad57-ef7742bd144f nodeName stage-da-vmaas-vede716-worker-9fc8z
I0321 16:23:58.153242       1 devices.go:184] No device found for nodeName stage-da-vmaas-vede716-worker-9fc8z
E0321 16:23:58.153298       1 node_controller.go:229] error syncing 'stage-da-vmaas-vede716-worker-9fc8z': failed to get instance metadata for node stage-da-vmaas-vede716-worker-9fc8z: instance not found, requeuing
I0321 16:23:58.153319       1 node_controller.go:415] Initializing node stage-da-vmaas-vede716-controller-8rfzk with cloud provider
I0321 16:23:58.153347       1 cloud.go:121] called InstancesV2
I0321 16:23:58.153353       1 cloud.go:121] called InstancesV2
I0321 16:23:58.153360       1 devices.go:167] called deviceByName with projectID 049798ab-0320-4ce8-ad57-ef7742bd144f nodeName stage-da-vmaas-vede716-controller-8rfzk
I0321 16:23:58.447647       1 leaderelection.go:278] successfully renewed lease kube-system/cloud-controller-manager
I0321 16:23:58.665487       1 devices.go:184] No device found for nodeName stage-da-vmaas-vede716-controller-8rfzk
E0321 16:23:58.665562       1 node_controller.go:229] error syncing 'stage-da-vmaas-vede716-controller-8rfzk': failed to get instance metadata for node stage-da-vmaas-vede716-controller-8rfzk: instance not found, requeuing
I0321 16:23:58.665598       1 node_controller.go:415] Initializing node stage-da-vmaas-vede716-worker-9fc8z with cloud provider
I0321 16:23:58.665636       1 cloud.go:121] called InstancesV2
I0321 16:23:58.665646       1 cloud.go:121] called InstancesV2
I0321 16:23:58.665656       1 devices.go:167] called deviceByName with projectID 049798ab-0320-4ce8-ad57-ef7742bd144f nodeName stage-da-vmaas-vede716-worker-9fc8z
I0321 16:23:59.160365       1 devices.go:184] No device found for nodeName stage-da-vmaas-vede716-worker-9fc8z
E0321 16:23:59.160464       1 node_controller.go:229] error syncing 'stage-da-vmaas-vede716-worker-9fc8z': failed to get instance metadata for node stage-da-vmaas-vede716-worker-9fc8z: instance not found, requeuing
I0321 16:23:59.160505       1 node_controller.go:415] Initializing node stage-da-vmaas-vede716-controller-8rfzk with cloud provider
I0321 16:23:59.160567       1 cloud.go:121] called InstancesV2
I0321 16:23:59.160583       1 cloud.go:121] called InstancesV2
I0321 16:23:59.160610       1 devices.go:167] called deviceByName with projectID 049798ab-0320-4ce8-ad57-ef7742bd144f nodeName stage-da-vmaas-vede716-controller-8rfzk
I0321 16:23:59.583294       1 devices.go:184] No device found for nodeName stage-da-vmaas-vede716-controller-8rfzk
E0321 16:23:59.583368       1 node_controller.go:229] error syncing 'stage-da-vmaas-vede716-controller-8rfzk': failed to get instance metadata for node stage-da-vmaas-vede716-controller-8rfzk: instance not found, requeuing
I0321 16:23:59.583395       1 node_controller.go:415] Initializing node stage-da-vmaas-vede716-worker-9fc8z with cloud provider
I0321 16:23:59.583432       1 cloud.go:121] called InstancesV2
I0321 16:23:59.583442       1 cloud.go:121] called InstancesV2
I0321 16:23:59.583452       1 devices.go:167] called deviceByName with projectID 049798ab-0320-4ce8-ad57-ef7742bd144f nodeName stage-da-vmaas-vede716-worker-9fc8z
I0321 16:24:00.084253       1 devices.go:184] No device found for nodeName stage-da-vmaas-vede716-worker-9fc8z
E0321 16:24:00.084334       1 node_controller.go:229] error syncing 'stage-da-vmaas-vede716-worker-9fc8z': failed to get instance metadata for node stage-da-vmaas-vede716-worker-9fc8z: instance not found, requeuing
I0321 16:24:00.084364       1 node_controller.go:415] Initializing node stage-da-vmaas-vede716-controller-8rfzk with cloud provider
I0321 16:24:00.084410       1 cloud.go:121] called InstancesV2
I0321 16:24:00.084420       1 cloud.go:121] called InstancesV2
I0321 16:24:00.084430       1 devices.go:167] called deviceByName with projectID 049798ab-0320-4ce8-ad57-ef7742bd144f nodeName stage-da-vmaas-vede716-controller-8rfzk
I0321 16:24:00.457786       1 leaderelection.go:278] successfully renewed lease kube-system/cloud-controller-manager
I0321 16:24:00.702462       1 devices.go:184] No device found for nodeName stage-da-vmaas-vede716-controller-8rfzk
E0321 16:24:00.702541       1 node_controller.go:229] error syncing 'stage-da-vmaas-vede716-controller-8rfzk': failed to get instance metadata for node stage-da-vmaas-vede716-controller-8rfzk: instance not found, requeuing
I0321 16:24:00.702570       1 node_controller.go:415] Initializing node stage-da-vmaas-vede716-worker-9fc8z with cloud provider
I0321 16:24:00.702621       1 cloud.go:121] called InstancesV2
I0321 16:24:00.702632       1 cloud.go:121] called InstancesV2
I0321 16:24:00.702643       1 devices.go:167] called deviceByName with projectID 049798ab-0320-4ce8-ad57-ef7742bd144f nodeName stage-da-vmaas-vede716-worker-9fc8z
I0321 16:24:01.418616       1 devices.go:184] No device found for nodeName stage-da-vmaas-vede716-worker-9fc8z
E0321 16:24:01.418703       1 node_controller.go:229] error syncing 'stage-da-vmaas-vede716-worker-9fc8z': failed to get instance metadata for node stage-da-vmaas-vede716-worker-9fc8z: instance not found, requeuing
I0321 16:24:01.418732       1 node_controller.go:415] Initializing node stage-da-vmaas-vede716-controller-8rfzk with cloud provider
I0321 16:24:01.418774       1 cloud.go:121] called InstancesV2
I0321 16:24:01.418784       1 cloud.go:121] called InstancesV2
I0321 16:24:01.418794       1 devices.go:167] called deviceByName with projectID 049798ab-0320-4ce8-ad57-ef7742bd144f nodeName stage-da-vmaas-vede716-controller-8rfzk
I0321 16:24:01.942651       1 devices.go:184] No device found for nodeName stage-da-vmaas-vede716-controller-8rfzk
E0321 16:24:01.942733       1 node_controller.go:229] error syncing 'stage-da-vmaas-vede716-controller-8rfzk': failed to get instance metadata for node stage-da-vmaas-vede716-controller-8rfzk: instance not found, requeuing
I0321 16:24:02.059175       1 node_controller.go:415] Initializing node stage-da-vmaas-vede716-worker-9fc8z with cloud provider
I0321 16:24:02.059262       1 cloud.go:121] called InstancesV2
I0321 16:24:02.059280       1 cloud.go:121] called InstancesV2
I0321 16:24:02.059298       1 devices.go:167] called deviceByName with projectID 049798ab-0320-4ce8-ad57-ef7742bd144f nodeName stage-da-vmaas-vede716-worker-9fc8z
I0321 16:24:02.467663       1 leaderelection.go:278] successfully renewed lease kube-system/cloud-controller-manager
I0321 16:24:02.613056       1 devices.go:184] No device found for nodeName stage-da-vmaas-vede716-worker-9fc8z
E0321 16:24:02.613119       1 node_controller.go:229] error syncing 'stage-da-vmaas-vede716-worker-9fc8z': failed to get instance metadata for node stage-da-vmaas-vede716-worker-9fc8z: instance not found, requeuing
I0321 16:24:03.223824       1 node_controller.go:415] Initializing node stage-da-vmaas-vede716-controller-8rfzk with cloud provider
I0321 16:24:03.223895       1 cloud.go:121] called InstancesV2
I0321 16:24:03.223906       1 cloud.go:121] called InstancesV2
I0321 16:24:03.223916       1 devices.go:167] called deviceByName with projectID 049798ab-0320-4ce8-ad57-ef7742bd144f nodeName stage-da-vmaas-vede716-controller-8rfzk
I0321 16:24:03.754954       1 devices.go:184] No device found for nodeName stage-da-vmaas-vede716-controller-8rfzk
E0321 16:24:03.755038       1 node_controller.go:229] error syncing 'stage-da-vmaas-vede716-controller-8rfzk': failed to get instance metadata for node stage-da-vmaas-vede716-controller-8rfzk: instance not found, requeuing
I0321 16:24:03.893507       1 node_controller.go:415] Initializing node stage-da-vmaas-vede716-worker-9fc8z with cloud provider
I0321 16:24:03.893616       1 cloud.go:121] called InstancesV2
I0321 16:24:03.893634       1 cloud.go:121] called InstancesV2
I0321 16:24:03.893652       1 devices.go:167] called deviceByName with projectID 049798ab-0320-4ce8-ad57-ef7742bd144f nodeName stage-da-vmaas-vede716-worker-9fc8z
I0321 16:24:04.477799       1 leaderelection.go:278] successfully renewed lease kube-system/cloud-controller-manager
I0321 16:24:04.550984       1 devices.go:184] No device found for nodeName stage-da-vmaas-vede716-worker-9fc8z
E0321 16:24:04.551068       1 node_controller.go:229] error syncing 'stage-da-vmaas-vede716-worker-9fc8z': failed to get instance metadata for node stage-da-vmaas-vede716-worker-9fc8z: instance not found, requeuing
I0321 16:24:06.315521       1 node_controller.go:415] Initializing node stage-da-vmaas-vede716-controller-8rfzk with cloud provider
I0321 16:24:06.315605       1 cloud.go:121] called InstancesV2
I0321 16:24:06.315617       1 cloud.go:121] called InstancesV2
I0321 16:24:06.315628       1 devices.go:167] called deviceByName with projectID 049798ab-0320-4ce8-ad57-ef7742bd144f nodeName stage-da-vmaas-vede716-controller-8rfzk
I0321 16:24:06.487915       1 leaderelection.go:278] successfully renewed lease kube-system/cloud-controller-manager
I0321 16:24:07.019218       1 devices.go:184] No device found for nodeName stage-da-vmaas-vede716-controller-8rfzk
E0321 16:24:07.019285       1 node_controller.go:229] error syncing 'stage-da-vmaas-vede716-controller-8rfzk': failed to get instance metadata for node stage-da-vmaas-vede716-controller-8rfzk: instance not found, requeuing
I0321 16:24:07.111806       1 node_controller.go:415] Initializing node stage-da-vmaas-vede716-worker-9fc8z with cloud provider
I0321 16:24:07.111879       1 cloud.go:121] called InstancesV2
I0321 16:24:07.111891       1 cloud.go:121] called InstancesV2
I0321 16:24:07.111901       1 devices.go:167] called deviceByName with projectID 049798ab-0320-4ce8-ad57-ef7742bd144f nodeName stage-da-vmaas-vede716-worker-9fc8z
I0321 16:24:07.635128       1 devices.go:184] No device found for nodeName stage-da-vmaas-vede716-worker-9fc8z
E0321 16:24:07.635212       1 node_controller.go:229] error syncing 'stage-da-vmaas-vede716-worker-9fc8z': failed to get instance metadata for node stage-da-vmaas-vede716-worker-9fc8z: instance not found, requeuing
I0321 16:24:08.498862       1 leaderelection.go:278] successfully renewed lease kube-system/cloud-controller-manager
I0321 16:24:10.508265       1 leaderelection.go:278] successfully renewed lease kube-system/cloud-controller-manager
I0321 16:24:12.139715       1 node_controller.go:415] Initializing node stage-da-vmaas-vede716-controller-8rfzk with cloud provider
I0321 16:24:12.139794       1 cloud.go:121] called InstancesV2
I0321 16:24:12.139806       1 cloud.go:121] called InstancesV2
I0321 16:24:12.139817       1 devices.go:167] called deviceByName with projectID 049798ab-0320-4ce8-ad57-ef7742bd144f nodeName stage-da-vmaas-vede716-controller-8rfzk
I0321 16:24:12.518699       1 leaderelection.go:278] successfully renewed lease kube-system/cloud-controller-manager
I0321 16:24:12.895846       1 devices.go:184] No device found for nodeName stage-da-vmaas-vede716-controller-8rfzk
E0321 16:24:12.895910       1 node_controller.go:229] error syncing 'stage-da-vmaas-vede716-controller-8rfzk': failed to get instance metadata for node stage-da-vmaas-vede716-controller-8rfzk: instance not found, requeuing
I0321 16:24:12.895932       1 node_controller.go:415] Initializing node stage-da-vmaas-vede716-worker-9fc8z with cloud provider
I0321 16:24:12.895959       1 cloud.go:121] called InstancesV2
I0321 16:24:12.895965       1 cloud.go:121] called InstancesV2
I0321 16:24:12.895972       1 devices.go:167] called deviceByName with projectID 049798ab-0320-4ce8-ad57-ef7742bd144f nodeName stage-da-vmaas-vede716-worker-9fc8z
I0321 16:24:13.379326       1 devices.go:184] No device found for nodeName stage-da-vmaas-vede716-worker-9fc8z
E0321 16:24:13.379403       1 node_controller.go:229] error syncing 'stage-da-vmaas-vede716-worker-9fc8z': failed to get instance metadata for node stage-da-vmaas-vede716-worker-9fc8z: instance not found, requeuing
I0321 16:24:14.530010       1 leaderelection.go:278] successfully renewed lease kube-system/cloud-controller-manager
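
For anyone trying to reproduce this: here's how PACKNGO_DEBUG was being set (no effect observed, as noted above), plus a direct API query to confirm whether the device actually exists while CPEM is reporting "instance not found". Both are sketches: the DaemonSet name is an assumption as above, the project ID and hostname come from the logs, and the search query parameter on the devices endpoint is an assumption (a plain device listing works too).

$ # try to enable packngo request/response debugging (assumed to still be honored in 3.8.0)
$ kubectl -n kube-system set env daemonset/cloud-provider-equinix-metal PACKNGO_DEBUG=1

$ # ask the Equinix Metal API directly for the device CPEM can't find
$ curl -s -H "X-Auth-Token: $METAL_AUTH_TOKEN" \
    "https://api.equinix.com/metal/v1/projects/049798ab-0320-4ce8-ad57-ef7742bd144f/devices?search=stage-da-vmaas-vede716-controller-8rfzk"

If the device shows up in the API response while deviceByName keeps logging "No device found", that would point at the lookup in CPEM (or the credentials/project it is using) rather than at the API itself.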
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 19, 2024
@cprivitere
Member

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 27, 2024
@cprivitere
Member

/triage accepted

@k8s-ci-robot k8s-ci-robot added the triage/accepted Indicates an issue or PR is ready to be actively worked on. label Jun 27, 2024