-
What Linux distribution are you using? Do you have an existing iptables firewall (firewalld or ufw) with conflicting rules, or have you configured iptables on your nodes with a default deny or drop policy?
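A quick way to check for a conflicting firewall or a default deny/drop policy on a node is something like the following (a rough sketch; the exact tooling varies by distribution):
# Is firewalld or ufw active? Either can inject rules that conflict with k3s
systemctl is-active firewalld
ufw status
# Default policies of the filter table; a DROP/REJECT policy on FORWARD
# commonly breaks cross-node pod and NodePort traffic
iptables -S INPUT | head -1
iptables -S FORWARD | head -1
# Look for explicit DROP/REJECT rules sitting ahead of the KUBE-* chains
iptables-save | grep -E 'DROP|REJECT'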
-
What was the reason for this issue? I am also experiencing it. UFW is disabled (via the script below), yet I am only able to access the apps from the node that is actually running the pod.
I do run this script inside the VMs:
#!/bin/bash
#
# Sets up the kernel with the requirements for running Kubernetes
set -e
# Add br_netfilter kernel module
cat <<EOF >>/etc/modules
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
br_netfilter
nf_conntrack
EOF
systemctl restart systemd-modules-load.service
# Set network tunables
cat <<EOF >>/etc/sysctl.d/10-kubernetes.conf
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
net.bridge.bridge-nf-call-iptables=1
net.ipv4.ip_forward=1
EOF
sysctl --system
ufw disable
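For reference, whether the modules and tunables actually took effect can be verified with something like this (a minimal check, assuming the module and sysctl names from the script above):
# Modules loaded by systemd-modules-load
lsmod | grep -E 'ip_vs|br_netfilter|nf_conntrack'
# Tunables applied by sysctl --system
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
# Firewall really disabled
ufw status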
-
Environmental Info:
K3s Version:
k3s version v1.28.2+k3s1 (6330a5b)
go version go1.20.8
Node(s) CPU architecture, OS, and Version:
Linux k3s-worker-0 6.5.0-rc5-edge-rockchip-rk3588 #3 SMP PREEMPT Sun Aug 6 22:07:51 UTC 2023 aarch64 GNU/Linux
Cluster Configuration:
2 master nodes, 2 worker nodes
Describe the bug:
I've set up a Service with a NodePort, and it is only reachable on the node that the pod is running on.
Service definition:
Steps To Reproduce:
k3s was installed using the k3s-ansible project with HA. Server args:
Expected behavior:
NodePort should be reachable on all nodes
Actual behavior:
The NodePort only works on the node where the pod is running.
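One common cause of exactly this symptom is a Service with externalTrafficPolicy: Local, which only routes NodePort traffic to nodes hosting a ready pod. Since the Service definition above is not shown, this is only a guess, but it can be checked with something like the following (my-service is a placeholder name):
# Empty output or "Cluster" routes NodePort traffic to every node;
# "Local" restricts it to nodes running the pod
kubectl get svc my-service -o jsonpath='{.spec.externalTrafficPolicy}'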