Notify the user that support static-ip is not supported with multiple node (-n <num> ) #18567

Open
panktrip opened this issue Apr 3, 2024 · 6 comments · May be fixed by #19747
Assignees
Labels
help wanted - Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.
kind/feature - Categorizes issue or PR as related to a new feature.
priority/important-longterm - Important over the long term, but may not be staffed and/or may need multiple releases to complete.

Comments


panktrip commented Apr 3, 2024

What Happened?

My objective is to create a multinode cluster using minikube, so I decided to use the -n option. At the same time, I also want a static IP for every node (including the worker nodes), so I ran the following command:

minikube start -p p1 -n 3 --static-ip 192.168.200.200

but it failed. I think minikube is trying to plumb the same IP (192.168.200.200) on all the nodes (3 nodes in this case), but it's common sense that we can't plumb the same IP on every node in a multinode cluster.

minikube spat out the error below, which makes sense:

Exiting due to GUEST_START: failed to start node: adding node: Failed to start host: can't create with that IP, address already in use

My ask is to plumb a different IP from the same subnet on each worker node and bring the multinode cluster up rather than failing it. In this case node n1 could have 192.168.200.201 and n2 could have 192.168.200.202, or any other IPs from the 192.168.200.0/24 subnet.
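To illustrate the requested behaviour only (minikube does not currently do this), here is a minimal, hypothetical Go sketch that derives per-node addresses by offsetting the requested static IP within its /24. The nodeIP helper is made up for this example and is not part of minikube; a real implementation would also have to check that the result stays inside the cluster subnet and is not already taken.

package main

import (
	"fmt"
	"net"
)

// nodeIP returns base + offset in the last octet of an IPv4 address.
// Sketch only: it assumes the offset does not overflow the last octet.
func nodeIP(base net.IP, offset int) net.IP {
	ip := base.To4()
	out := make(net.IP, len(ip))
	copy(out, ip)
	out[3] += byte(offset)
	return out
}

func main() {
	base := net.ParseIP("192.168.200.200")
	for i := 0; i < 3; i++ {
		fmt.Printf("node %d -> %s\n", i+1, nodeIP(base, i))
	}
}

Running this prints node 1 -> 192.168.200.200, node 2 -> 192.168.200.201, node 3 -> 192.168.200.202, which is exactly the allocation described above.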

Attach the log file

I0403 22:26:46.563695 20869 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
I0403 22:26:46.563742 20869 kubeadm.go:322] [preflight] Running pre-flight checks
I0403 22:26:46.844038 20869 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0403 22:26:46.844102 20869 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0403 22:26:46.844165 20869 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0403 22:26:47.086942 20869 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0403 22:26:47.099655 20869 out.go:204] ▪ Generating certificates and keys ...
I0403 22:26:47.099885 20869 kubeadm.go:322] [certs] Using existing ca certificate authority
I0403 22:26:47.099964 20869 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0403 22:26:47.184411 20869 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
I0403 22:26:47.519423 20869 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
I0403 22:26:47.673563 20869 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
I0403 22:26:47.937612 20869 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
I0403 22:26:47.993854 20869 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
I0403 22:26:47.994004 20869 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost p1] and IPs [192.168.200.200 127.0.0.1 ::1]
I0403 22:26:48.199115 20869 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
I0403 22:26:48.199254 20869 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost p1] and IPs [192.168.200.200 127.0.0.1 ::1]
I0403 22:26:48.333221 20869 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
I0403 22:26:48.459562 20869 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
I0403 22:26:48.595866 20869 kubeadm.go:322] [certs] Generating "sa" key and public key
I0403 22:26:48.595975 20869 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0403 22:26:48.659863 20869 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0403 22:26:48.790204 20869 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0403 22:26:48.957386 20869 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0403 22:26:49.070608 20869 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0403 22:26:49.070887 20869 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0403 22:26:49.085219 20869 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0403 22:26:49.091323 20869 out.go:204] ▪ Booting up control plane ...
I0403 22:26:49.091741 20869 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0403 22:26:49.091904 20869 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0403 22:26:49.091951 20869 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0403 22:26:49.099888 20869 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0403 22:26:49.100983 20869 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0403 22:26:49.101300 20869 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0403 22:26:49.212500 20869 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0403 22:26:55.716498 20869 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.504178 seconds
I0403 22:26:55.716637 20869 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0403 22:26:55.734438 20869 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0403 22:26:56.268056 20869 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
I0403 22:26:56.268188 20869 kubeadm.go:322] [mark-control-plane] Marking the node p1 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0403 22:26:56.795781 20869 kubeadm.go:322] [bootstrap-token] Using token: p6sift.kgko9ja1n5w3ttvg
I0403 22:26:56.801355 20869 out.go:204] ▪ Configuring RBAC rules ...
I0403 22:26:56.801582 20869 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0403 22:26:56.808343 20869 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0403 22:26:56.819493 20869 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0403 22:26:56.823664 20869 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0403 22:26:56.827638 20869 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0403 22:26:56.831793 20869 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0403 22:26:56.859668 20869 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0403 22:26:57.220557 20869 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
I0403 22:26:57.257002 20869 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
I0403 22:26:57.257950 20869 kubeadm.go:322]
I0403 22:26:57.258019 20869 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
I0403 22:26:57.258024 20869 kubeadm.go:322]
I0403 22:26:57.258112 20869 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
I0403 22:26:57.258116 20869 kubeadm.go:322]
I0403 22:26:57.258140 20869 kubeadm.go:322] mkdir -p $HOME/.kube
I0403 22:26:57.258201 20869 kubeadm.go:322] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0403 22:26:57.258253 20869 kubeadm.go:322] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0403 22:26:57.258257 20869 kubeadm.go:322]
I0403 22:26:57.258310 20869 kubeadm.go:322] Alternatively, if you are the root user, you can run:
I0403 22:26:57.258313 20869 kubeadm.go:322]
I0403 22:26:57.258361 20869 kubeadm.go:322] export KUBECONFIG=/etc/kubernetes/admin.conf
I0403 22:26:57.258364 20869 kubeadm.go:322]
I0403 22:26:57.258428 20869 kubeadm.go:322] You should now deploy a pod network to the cluster.
I0403 22:26:57.258501 20869 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0403 22:26:57.258567 20869 kubeadm.go:322] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0403 22:26:57.258571 20869 kubeadm.go:322]
I0403 22:26:57.258665 20869 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
I0403 22:26:57.258741 20869 kubeadm.go:322] and service account keys on each node and then running the following as root:
I0403 22:26:57.258746 20869 kubeadm.go:322]
I0403 22:26:57.258830 20869 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token p6sift.kgko9ja1n5w3ttvg
I0403 22:26:57.258936 20869 kubeadm.go:322] --discovery-token-ca-cert-hash sha256:e554aa93e31b89c1a1f9a416560a1cfc2d77d8807d292fbc34b3d1e1c5bea5bd
I0403 22:26:57.258957 20869 kubeadm.go:322] --control-plane
I0403 22:26:57.258961 20869 kubeadm.go:322]
I0403 22:26:57.259056 20869 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
I0403 22:26:57.259060 20869 kubeadm.go:322]
I0403 22:26:57.259143 20869 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token p6sift.kgko9ja1n5w3ttvg
I0403 22:26:57.259242 20869 kubeadm.go:322] --discovery-token-ca-cert-hash sha256:e554aa93e31b89c1a1f9a416560a1cfc2d77d8807d292fbc34b3d1e1c5bea5bd
I0403 22:26:57.260859 20869 kubeadm.go:322] [WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
I0403 22:26:57.260940 20869 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0403 22:26:57.260954 20869 cni.go:84] Creating CNI manager for ""
I0403 22:26:57.260960 20869 cni.go:136] 1 nodes found, recommending kindnet
I0403 22:26:57.268948 20869 out.go:177] 🔗 Configuring CNI (Container Networking Interface) ...
I0403 22:26:57.276599 20869 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I0403 22:26:57.286211 20869 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
I0403 22:26:57.286228 20869 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
I0403 22:26:57.376547 20869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0403 22:26:58.389296 20869 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.012713142s)
I0403 22:26:58.389391 20869 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0403 22:26:58.389570 20869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=8220a6eb95f0a4d75f7f2d7b14cef975f050512d minikube.k8s.io/name=p1 minikube.k8s.io/updated_at=2024_04_03T22_26_58_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0403 22:26:58.389600 20869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0403 22:26:58.401994 20869 ops.go:34] apiserver oom_adj: -16
I0403 22:26:58.485881 20869 kubeadm.go:1081] duration metric: took 96.335429ms to wait for elevateKubeSystemPrivileges.
I0403 22:26:58.485914 20869 kubeadm.go:406] StartCluster complete in 12.28032039s
I0403 22:26:58.485931 20869 settings.go:142] acquiring lock: {Name:mk26d73f1bfafac494fa664fd900bbab5070aac1 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0403 22:26:58.485999 20869 settings.go:150] Updating kubeconfig: /home/pankaj/.kube/config
I0403 22:26:58.486706 20869 lock.go:35] WriteFile acquiring /home/pankaj/.kube/config: {Name:mkfee112c807c5615167be219af646cebbf108c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0403 22:26:58.486971 20869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0403 22:26:58.487064 20869 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
I0403 22:26:58.487135 20869 addons.go:69] Setting storage-provisioner=true in profile "p1"
I0403 22:26:58.487144 20869 addons.go:69] Setting default-storageclass=true in profile "p1"
I0403 22:26:58.487149 20869 addons.go:231] Setting addon storage-provisioner=true in "p1"
I0403 22:26:58.487157 20869 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "p1"
I0403 22:26:58.487164 20869 config.go:182] Loaded profile config "p1": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I0403 22:26:58.487182 20869 host.go:66] Checking if "p1" exists ...
I0403 22:26:58.487378 20869 cli_runner.go:164] Run: docker container inspect p1 --format={{.State.Status}}
I0403 22:26:58.487444 20869 cli_runner.go:164] Run: docker container inspect p1 --format={{.State.Status}}
I0403 22:26:58.515212 20869 kapi.go:248] "coredns" deployment in "kube-system" namespace and "p1" context rescaled to 1 replicas
I0403 22:26:58.515239 20869 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.200.200 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I0403 22:26:58.520363 20869 out.go:177] 🔎 Verifying Kubernetes components...
I0403 22:26:58.528976 20869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0403 22:26:58.555085 20869 out.go:177] ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0403 22:26:58.550893 20869 addons.go:231] Setting addon default-storageclass=true in "p1"
I0403 22:26:58.559443 20869 host.go:66] Checking if "p1" exists ...
I0403 22:26:58.559543 20869 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0403 22:26:58.559553 20869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0403 22:26:58.559600 20869 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" p1
I0403 22:26:58.559740 20869 cli_runner.go:164] Run: docker container inspect p1 --format={{.State.Status}}
I0403 22:26:58.591556 20869 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" p1
I0403 22:26:58.591579 20869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . /etc/resolv.conf.*/i \ hosts {\n 192.168.200.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0403 22:26:58.635732 20869 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32777 SSHKeyPath:/home/pankaj/.minikube/machines/p1/id_rsa Username:docker}
I0403 22:26:58.635792 20869 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
I0403 22:26:58.635803 20869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0403 22:26:58.635849 20869 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" p1
I0403 22:26:58.670871 20869 api_server.go:52] waiting for apiserver process to appear ...
I0403 22:26:58.670899 20869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.minikube.
I0403 22:26:58.723632 20869 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32777 SSHKeyPath:/home/pankaj/.minikube/machines/p1/id_rsa Username:docker}
I0403 22:26:58.757282 20869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0403 22:26:58.765046 20869 start.go:926] {"host.minikube.internal": 192.168.200.1} host record injected into CoreDNS's ConfigMap
I0403 22:26:58.765085 20869 api_server.go:72] duration metric: took 249.824145ms to wait for apiserver process to appear ...
I0403 22:26:58.765096 20869 api_server.go:88] waiting for apiserver healthz status ...
I0403 22:26:58.765110 20869 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:32774/healthz ...
I0403 22:26:58.769678 20869 api_server.go:279] https://127.0.0.1:32774/healthz returned 200:
ok
I0403 22:26:58.770733 20869 api_server.go:141] control plane version: v1.28.3
I0403 22:26:58.770756 20869 api_server.go:131] duration metric: took 5.64644ms to wait for apiserver health ...
I0403 22:26:58.770763 20869 system_pods.go:43] waiting for kube-system pods to appear ...
I0403 22:26:58.775701 20869 system_pods.go:59] 4 kube-system pods found
I0403 22:26:58.775718 20869 system_pods.go:61] "etcd-p1" [c86dae55-db79-43d7-9c96-a15b3197fd13] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0403 22:26:58.775722 20869 system_pods.go:61] "kube-apiserver-p1" [ecc36a0e-d634-4788-ac86-b79b8e31452b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0403 22:26:58.775725 20869 system_pods.go:61] "kube-controller-manager-p1" [3117b10c-0ce0-4676-9f6d-787bb17503f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0403 22:26:58.775729 20869 system_pods.go:61] "kube-scheduler-p1" [5f4f89be-4d3b-4a76-a9b3-b52272f4fe72] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0403 22:26:58.775732 20869 system_pods.go:74] duration metric: took 4.966072ms to wait for pod list to return data ...
I0403 22:26:58.775738 20869 kubeadm.go:581] duration metric: took 260.483854ms to wait for : map[apiserver:true system_pods:true] ...
I0403 22:26:58.775758 20869 node_conditions.go:102] verifying NodePressure condition ...
I0403 22:26:58.778155 20869 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
I0403 22:26:58.778166 20869 node_conditions.go:123] node cpu capacity is 8
I0403 22:26:58.778173 20869 node_conditions.go:105] duration metric: took 2.412283ms to run NodePressure ...
I0403 22:26:58.778188 20869 start.go:228] waiting for startup goroutines ...
I0403 22:26:58.826748 20869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0403 22:26:59.115945 20869 out.go:177] 🌟 Enabled addons: storage-provisioner, default-storageclass
I0403 22:26:59.120182 20869 addons.go:502] enable addons completed in 633.115177ms: enabled=[storage-provisioner default-storageclass]
I0403 22:26:59.120212 20869 start.go:233] waiting for cluster config update ...
I0403 22:26:59.120220 20869 start.go:242] writing updated cluster config ...
I0403 22:26:59.124700 20869 out.go:177]
I0403 22:26:59.129284 20869 config.go:182] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I0403 22:26:59.129341 20869 config.go:182] Loaded profile config "p1": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I0403 22:26:59.129380 20869 profile.go:148] Saving config to /home/pankaj/.minikube/profiles/p1/config.json ...
I0403 22:26:59.134130 20869 out.go:177] 👍 Starting worker node p1-m02 in cluster p1
I0403 22:26:59.138560 20869 cache.go:121] Beginning downloading kic base image for docker with docker
I0403 22:26:59.143140 20869 out.go:177] 🚜 Pulling base image ...
I0403 22:26:59.153293 20869 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
I0403 22:26:59.153316 20869 cache.go:56] Caching tarball of preloaded images
I0403 22:26:59.153376 20869 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon
I0403 22:26:59.153429 20869 preload.go:174] Found /home/pankaj/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0403 22:26:59.153435 20869 cache.go:59] Finished verifying existence of preloaded tar for v1.28.3 on docker
I0403 22:26:59.153531 20869 profile.go:148] Saving config to /home/pankaj/.minikube/profiles/p1/config.json ...
I0403 22:26:59.245196 20869 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon, skipping pull
I0403 22:26:59.245212 20869 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 exists in daemon, skipping load
I0403 22:26:59.245236 20869 cache.go:194] Successfully downloaded all kic artifacts
I0403 22:26:59.245265 20869 start.go:365] acquiring machines lock for p1-m02: {Name:mk1b31e5e6b618c294c5de45692ff6964f897165 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0403 22:26:59.245372 20869 start.go:369] acquired machines lock for "p1-m02" in 94.144µs
I0403 22:26:59.245391 20869 start.go:93] Provisioning new machine with config: &{Name:p1 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:p1 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.200.200 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/pankaj:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:192.168.200.200 SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:false Worker:true}
I0403 22:26:59.245454 20869 start.go:125] createHost starting for "m02" (driver="docker")
I0403 22:26:59.250227 20869 out.go:204] 🔥 Creating docker container (CPUs=2, Memory=2200MB) ...
I0403 22:26:59.250348 20869 start.go:159] libmachine.API.Create for "p1" (driver="docker")
I0403 22:26:59.250356 20869 client.go:168] LocalClient.Create starting
I0403 22:26:59.250393 20869 main.go:141] libmachine: Reading certificate data from /home/pankaj/.minikube/certs/ca.pem
I0403 22:26:59.250433 20869 main.go:141] libmachine: Decoding PEM data...
I0403 22:26:59.250442 20869 main.go:141] libmachine: Parsing certificate...
I0403 22:26:59.250477 20869 main.go:141] libmachine: Reading certificate data from /home/pankaj/.minikube/certs/cert.pem
I0403 22:26:59.250483 20869 main.go:141] libmachine: Decoding PEM data...
I0403 22:26:59.250488 20869 main.go:141] libmachine: Parsing certificate...
I0403 22:26:59.250748 20869 cli_runner.go:164] Run: docker network inspect p1 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0403 22:26:59.341922 20869 network_create.go:77] Found existing network {name:p1 subnet:0xc00302e690 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 200 1] mtu:1500}
I0403 22:26:59.341984 20869 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0403 22:26:59.434910 20869 cli_runner.go:164] Run: docker volume create p1-m02 --label name.minikube.sigs.k8s.io=p1-m02 --label created_by.minikube.sigs.k8s.io=true
I0403 22:26:59.543557 20869 oci.go:103] Successfully created a docker volume p1-m02
I0403 22:26:59.543612 20869 cli_runner.go:164] Run: docker run --rm --name p1-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=p1-m02 --entrypoint /usr/bin/test -v p1-m02:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib
I0403 22:27:00.516401 20869 oci.go:107] Successfully prepared a docker volume p1-m02
I0403 22:27:00.516418 20869 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
I0403 22:27:00.516435 20869 kic.go:194] Starting extracting preloaded images to volume ...
I0403 22:27:00.516496 20869 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/pankaj/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v p1-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir
I0403 22:27:15.511667 20869 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/pankaj/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v p1-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir: (14.995141064s)
I0403 22:27:15.511687 20869 kic.go:203] duration metric: took 14.995249 seconds to extract preloaded images to volume
W0403 22:27:15.511781 20869 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0403 22:27:15.511882 20869 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0403 22:27:15.667313 20869 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname p1-m02 --name p1-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=p1-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=p1-m02 --network p1 --ip 192.168.200.200 --volume p1-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0
W0403 22:27:15.940386 20869 cli_runner.go:211] docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname p1-m02 --name p1-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=p1-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=p1-m02 --network p1 --ip 192.168.200.200 --volume p1-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 returned with exit code 125
I0403 22:27:15.940432 20869 client.go:171] LocalClient.Create took 16.690072915s
I0403 22:27:17.940831 20869 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0403 22:27:17.941029 20869 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" p1-m02
W0403 22:27:18.143586 20869 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" p1-m02 returned with exit code 1
I0403 22:27:18.143663 20869 retry.go:31] will retry after 262.127194ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
I0403 22:27:18.406110 20869 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" p1-m02
W0403 22:27:18.557207 20869 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" p1-m02 returned with exit code 1
I0403 22:27:18.557347 20869 retry.go:31] will retry after 509.188485ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
I0403 22:27:19.067497 20869 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" p1-m02
W0403 22:27:19.200550 20869 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" p1-m02 returned with exit code 1
W0403 22:27:19.200635 20869 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
W0403 22:27:19.200642 20869 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
I0403 22:27:19.200670 20869 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0403 22:27:19.200688 20869 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" p1-m02
W0403 22:27:19.278344 20869 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" p1-m02 returned with exit code 1
I0403 22:27:19.278443 20869 retry.go:31] will retry after 219.870599ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
I0403 22:27:19.498963 20869 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" p1-m02
W0403 22:27:19.590073 20869 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" p1-m02 returned with exit code 1
I0403 22:27:19.590213 20869 retry.go:31] will retry after 326.760118ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
I0403 22:27:19.917660 20869 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" p1-m02
W0403 22:27:20.065029 20869 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" p1-m02 returned with exit code 1
I0403 22:27:20.065126 20869 retry.go:31] will retry after 834.320322ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
I0403 22:27:20.900429 20869 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" p1-m02
W0403 22:27:21.020668 20869 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" p1-m02 returned with exit code 1
W0403 22:27:21.020775 20869 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
W0403 22:27:21.020786 20869 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
I0403 22:27:21.020794 20869 start.go:128] duration metric: createHost completed in 21.775333431s
I0403 22:27:21.020800 20869 start.go:83] releasing machines lock for "p1-m02", held for 21.775422319s
W0403 22:27:21.020811 20869 start.go:691] error starting host: creating host: create: creating: create kic node: create container: can't create with that IP, address already in use
I0403 22:27:21.021088 20869 cli_runner.go:164] Run: docker container inspect p1-m02 --format={{.State.Status}}
W0403 22:27:21.090631 20869 start.go:696] delete host: Docker machine "p1-m02" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
I0403 22:27:21.090664 20869 start.go:701] will skip retrying to create machine because error is not retriable: can't create with that IP, address already in use
I0403 22:27:21.096513 20869 out.go:177]
W0403 22:27:21.102338 20869 out.go:239] ❌ Exiting due to GUEST_START: failed to start node: adding node: Failed to start host: can't create with that IP, address already in use
W0403 22:27:21.102374 20869 out.go:239]
W0403 22:27:21.103404 20869 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ 😿 If the above advice does not help, please let us know: │
│ 👉 https://github.com/kubernetes/minikube/issues/new/choose
│ │
│ Please run minikube logs --file=logs.txt and attach logs.txt to the GitHub issue. │
│ │
╰───────────────────────────────────────────────────────────────────────────────────────────╯
I0403 22:27:21.112453 20869 out.go:177]

Operating System

Ubuntu

Driver

Docker

medyagh (Member) commented Apr 3, 2024

That is a good call! I don't think we have implemented that feature!
@panktrip would you like to make a PR that tells the users this feature is not supported for multinode? Additionally, this would be a good item to work on for anyone interested.

medyagh added the help wanted, kind/feature, and priority/important-longterm labels Apr 3, 2024
medyagh changed the title from "support static-ip option with multiple node (-n <num> )" to "Notify the user that support static-ip is not supported with multiple node (-n <num> )" Apr 3, 2024
medyagh (Member) commented Apr 3, 2024

@prezha recently worked on multi-node; he could advise on the implementation.

prezha (Contributor) commented Apr 28, 2024

I think that notifying the user that this is not supported at the moment is a good first step - e.g., in the validateStaticIP() func in cmd/minikube/cmd/start.go.
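As a rough illustration of that first step, here is a minimal, self-contained sketch of such a check. This is hypothetical stand-alone code, not the actual minikube function; the real validateStaticIP() in cmd/minikube/cmd/start.go would use minikube's own warning/exit helpers rather than returning an error.

package main

import (
	"fmt"
	"net"
)

// validateStaticIP is a stand-alone illustration of the suggested check,
// not minikube's real implementation.
func validateStaticIP(staticIP string, nodes int) error {
	if staticIP == "" {
		return nil // flag not used, nothing to validate
	}
	if net.ParseIP(staticIP) == nil {
		return fmt.Errorf("--static-ip %q is not a valid IP address", staticIP)
	}
	if nodes > 1 {
		return fmt.Errorf("--static-ip is not supported with multiple nodes (-n %d): the same address cannot be assigned to every node", nodes)
	}
	return nil
}

func main() {
	if err := validateStaticIP("192.168.200.200", 3); err != nil {
		fmt.Println("Exiting:", err)
	}
}

With this kind of check, the command from the issue would fail fast with a clear message instead of getting partway through provisioning the first node.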

possible further improvements:

As we also have the --subnet flag, with the description "Subnet to be used on kic cluster", perhaps that works with multinode (I haven't tried it myself). So, apart from testing whether that would work for multinode kic-based clusters, we could also explore the possibility of making it work for non-kic-based clusters as well, if needed.
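For example, something along these lines could be tried (untested; assumes the --subnet flag accepts a CIDR for the cluster network, as its description suggests):

minikube start -p p1 -n 3 --subnet 192.168.200.0/24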

@wassafshahzad

Can I take a look at this?

@wassafshahzad

/assign

@wassafshahzad

@prezha Warning added, PR is up
