Why does Connaisseur run a Python process at port 5000 at node level? #1441
We observed an issue where creating Connaisseur replicas on the same Kubernetes node is not possible. Upon investigation, we discovered that Connaisseur actually runs a Python process at the node level on port 5000. Consequently, when a new pod is scheduled on the same node, it fails to start because an existing process is already bound to that port. This poses a problem because:

- pods are unable to run on the same node
- the process runs at the node level

Curious to know why Connaisseur is designed this way? Thank you
Replies: 1 comment 1 reply
hey @inboxamitraj! This is in no way intentionally designed this way, and I'm curious how you ran into this problem.

In the default setup, Connaisseur runs with 3 pods, each of them listening on port 5000 (the default port for Flask applications) inside their own container. The container runtime is responsible for mapping ports between the inside of the container and the outside (i.e. the host machine), so running multiple Connaisseur pods on the same node should be no problem.

I'm also unable to replicate this behavior in my minikube setup (which uses a single node):

```
docker@minikube:~$ netstat -plnt
(No info could be read for "-p": geteuid()=1000 but you should be root.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      -
tcp        0      0 127.0.0.11:36135        0.0.0.0:*               LISTEN      -
tcp        0      0 192.168.49.2:10010      0.0.0.0:*               LISTEN      -
tcp        0      0 192.168.49.2:2380       0.0.0.0:*               LISTEN      -
tcp        0      0 192.168.49.2:2379       0.0.0.0:*               LISTEN      -
tcp        0      0 127.0.0.1:2381          0.0.0.0:*               LISTEN      -
tcp        0      0 127.0.0.1:2379          0.0.0.0:*               LISTEN      -
tcp        0      0 127.0.0.1:10248         0.0.0.0:*               LISTEN      -
tcp        0      0 127.0.0.1:10259         0.0.0.0:*               LISTEN      -
tcp        0      0 127.0.0.1:10257         0.0.0.0:*               LISTEN      -
tcp6       0      0 :::8443                 :::*                    LISTEN      -
tcp6       0      0 :::22                   :::*                    LISTEN      -
tcp6       0      0 :::2376                 :::*                    LISTEN      -
tcp6       0      0 :::10256                :::*                    LISTEN      -
tcp6       0      0 :::10250                :::*                    LISTEN      -
tcp6       0      0 :::10249                :::*                    LISTEN      -
tcp6       0      0 :::44133                :::*                    LISTEN      -
```

```
docker@minikube:~$ ps -aef | grep connaisseur
10001       7810    7777  0 10:19 ?        00:00:01 python -m connaisseur
10001       7854    7801  0 10:19 ?        00:00:01 python -m connaisseur
10001       7871    7822  0 10:19 ?        00:00:01 python -m connaisseur
docker     13643    8429  0 10:25 pts/1    00:00:00 grep --color=auto connaisseur
```

What kind of setup are you using (e.g. flavor of k8s, any major changes to the default configuration)?
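To illustrate the mechanism being discussed: two listeners can only collide on a port if they share a network namespace. A minimal Python sketch (hypothetical, not Connaisseur code) of what happens when two processes try to bind the same port inside one namespace:

```python
import socket

def try_double_bind() -> str:
    """Bind the same TCP port twice in one network namespace and report the result."""
    # First "replica" grabs a port. We use an OS-assigned port as a stand-in
    # for Flask's 5000 so the demo works regardless of what else is running.
    s1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s1.bind(("127.0.0.1", 0))
    s1.listen()
    port = s1.getsockname()[1]

    # Second "replica" tries the same port in the SAME namespace -> EADDRINUSE.
    # In the default Connaisseur setup each pod has its own network namespace,
    # so this collision never happens there.
    s2 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s2.bind(("127.0.0.1", port))
        return "second bind succeeded"
    except OSError:
        return "second bind failed: address already in use"
    finally:
        s1.close()
        s2.close()

print(try_double_bind())
```

This is exactly the failure mode the question describes, which is why it points to the pods somehow sharing the node's network namespace rather than to Connaisseur's design.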
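One configuration that would produce exactly the reported symptom is running the pods in the host's network namespace. A hedged sketch of a pod spec fragment (this is standard Kubernetes, not the actual Connaisseur chart):

```yaml
# Hypothetical pod spec fragment: with hostNetwork enabled, every replica's
# port 5000 listener binds directly on the node, so a second replica scheduled
# onto the same node would fail with "address already in use". The default
# Connaisseur deployment leaves hostNetwork unset (false).
spec:
  hostNetwork: true        # <-- this, not Connaisseur itself, would cause node-level binds
  containers:
    - name: connaisseur
      ports:
        - containerPort: 5000
```

So when answering the maintainer's question about configuration changes, `hostNetwork` (or anything else that collapses the pods into one network namespace) is worth checking first.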