On DEV environment (K3s)
I was deploying a cluster with 1 master and 2 workers. The master and one of the workers came up fine: I could log in to the master and see both it and that worker as part of the cluster. However, the other worker wasn't there.
The VM for the faulty worker was created and its jobs executed, but I found this in the logs:
[INFO] systemd: Starting k3s-agent
Job for k3s-agent.service failed because the control process exited with error code.
See "systemctl status k3s-agent.service" and "journalctl -xeu k3s-agent.service" for details.
INFO [Fri Nov 15 12:58:28 UTC 2024]: Configuration done successfully in 3 seconds
Then, looking at k3s-agent.service:
ubuntu@n13342-2-dummy-app-worker-1-1-13342-2:~$ systemctl status k3s-agent.service
● k3s-agent.service - Lightweight Kubernetes
     Loaded: loaded (/etc/systemd/system/k3s-agent.service; enabled; vendor preset: enabled)
     Active: activating (auto-restart) (Result: exit-code) since Fri 2024-11-15 14:21:42 UTC; 2s ago
       Docs: https://k3s.io
    Process: 135541 ExecStartPre=/bin/sh -xc ! /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service 2>/dev/null (code=exited, status=0/SUCCESS)
    Process: 135543 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
    Process: 135544 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
    Process: 135545 ExecStart=/usr/local/bin/k3s agent --node-ip (code=exited, status=1/FAILURE)
   Main PID: 135545 (code=exited, status=1/FAILURE)
        CPU: 16ms
Nov 15 14:21:42 n13342-2-dummy-app-worker-1-1-13342-2 systemd[1]: k3s-agent.service: Main process exited, code=exited, status=1/FAILURE
Nov 15 14:21:42 n13342-2-dummy-app-worker-1-1-13342-2 systemd[1]: k3s-agent.service: Failed with result 'exit-code'.
Nov 15 14:21:42 n13342-2-dummy-app-worker-1-1-13342-2 systemd[1]: Failed to start Lightweight Kubernetes.
ubuntu@n13342-2-dummy-app-worker-1-1-13342-2:~$ journalctl -xeu k3s-agent.service
Nov 15 14:33:07 n13342-2-dummy-app-worker-1-1-13342-2 sh[147410]: + /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service
Nov 15 14:33:07 n13342-2-dummy-app-worker-1-1-13342-2 k3s[147414]: Incorrect Usage: flag needs an argument: -node-ip
The error is:
Incorrect Usage: flag needs an argument: -node-ip
That error comes from the worker install script. More precisely:
Inside https://raw.githubusercontent.com/eu-nebulous/sal-scripts/dev/k3s/install-kube-k3s-agent-u22-wg.sh, this command is executed:
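The command itself is not reproduced in this excerpt. Judging from the ExecStart line in the unit above, it presumably builds the agent invocation along these lines, with the node IP taken from an unquoted WIREGUARD_VPN_IP expansion (the MASTER_IP and NODE_TOKEN values here are illustrative placeholders, not taken from the actual script):

# Hypothetical sketch of the agent install step; only "--node-ip $WIREGUARD_VPN_IP" is inferred from the failure.
curl -sfL https://get.k3s.io | K3S_URL="https://${MASTER_IP}:6443" K3S_TOKEN="${NODE_TOKEN}" \
    INSTALL_K3S_EXEC="agent --node-ip $WIREGUARD_VPN_IP" sh -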
The problem, then, appears to be that WIREGUARD_VPN_IP was empty: an empty, unquoted variable expansion is removed entirely by the shell, leaving --node-ip with no argument to consume, which is exactly the "flag needs an argument" error above.
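A minimal demonstration of that expansion behaviour (the binary path is the one from the unit file; the rest is a sketch):

WIREGUARD_VPN_IP=""
# Unquoted: the empty expansion vanishes during word splitting,
# so the parser sees --node-ip with nothing after it.
/usr/local/bin/k3s agent --node-ip $WIREGUARD_VPN_IP
# -> Incorrect Usage: flag needs an argument: -node-ip
# Quoted: an explicit (empty) string is still passed as the argument.
/usr/local/bin/k3s agent --node-ip "$WIREGUARD_VPN_IP"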
However, the logs from previous steps of the installation process for the same node suggest otherwise.
Running the command manually worked fine and the node joined the cluster.
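If the root cause is indeed an intermittently empty WIREGUARD_VPN_IP, a fail-fast guard in the install script would surface the problem much earlier than the flag-parsing error does. A minimal sketch, assuming the script assembles the agent flags itself (names are illustrative, not the script's current contents):

# Hypothetical guard for install-kube-k3s-agent-u22-wg.sh.
if [ -z "${WIREGUARD_VPN_IP:-}" ]; then
    echo "ERROR: WIREGUARD_VPN_IP is empty; cannot start k3s agent" >&2
    exit 1
fi
INSTALL_K3S_EXEC="agent --node-ip ${WIREGUARD_VPN_IP}"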