
Stripped box build process stuck at kube-dns bring-up stage #12

Open
kimikowang opened this issue Apr 2, 2018 · 7 comments

@kimikowang

I ran the make process on macOS (10.13.2) with Vagrant 2.0.3 and VirtualBox 5.2.8 and encountered the following problem:

==> default: Running provisioner: shell...
    default: Running: /var/folders/s2/hfrfw1655_sftw0yv1f9nbx40000gn/T/vagrant-shell20180401-68633-19hb11c.sh
    default: Waiting for kube-dns to show up
    default: The connection to the server localhost:8080 was refused - did you specify the right host or port?
    default: .
    default: The connection to the server localhost:8080 was refused - did you specify the right host or port?
    default: .
    default: .
    default: .
    ...

It is stuck waiting for kube-dns to show up.

The repartition script was last updated 6 months ago, when the Kubernetes version was bumped to 1.8.0, while the current default in stage 00 is v1.10.0. Could there be a gap between the two?
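
Since kubectl is being refused on localhost:8080, it looks like the apiserver never came up inside the VM. A minimal debugging sketch, assuming the unit and binary names visible later in this thread (kube-apiserver.service, kubelet.service, kube-etcd.service, hyperkube):

    # check whether the control-plane units actually started
    vagrant ssh -c 'sudo systemctl status kube-apiserver kubelet kube-etcd'
    # see which hyperkube / kube-dns images were pulled (version-mismatch check)
    vagrant ssh -c 'sudo docker images | grep -E "hyperkube|kube-dns"'
    # look at kubelet logs for pull or startup failures
    vagrant ssh -c 'sudo journalctl -u kubelet --no-pager | tail -n 100'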

@JimmyCYJ commented Apr 2, 2018

I have the same issue here. The full terminal output can be found at https://gist.github.com/JimmyCYJ/e51cf59f5333c81d48245053586f2e56

I am using Ubuntu 16.04, Vagrant 2.0.3, VirtualBox 5.2.8r121009.
Running $ sudo kubectl get pods returns this error message:
"The connection to the server localhost:8080 was refused - did you specify the right host or port?"

@hanikesn Do you have any idea about this?

@hanikesn (Contributor) commented Apr 2, 2018

Do the prebuilt boxes via vagrant init flixtech/kubernetes; vagrant up --provider virtualbox work for you?
Posting the output of vagrant ssh sudo journalctl, vagrant ssh sudo systemctl status, vagrant ssh docker ps -a, and kubectl get po --all-namespaces -a would help me with debugging.
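
For anyone collecting those diagnostics, a minimal sketch (assumption: run from the directory holding the Vagrantfile; vagrant ssh -c passes a single command to the guest):

    vagrant ssh -c 'sudo journalctl --no-pager | tail -n 200'
    vagrant ssh -c 'sudo systemctl status --no-pager'
    vagrant ssh -c 'sudo docker ps -a'
    vagrant ssh -c 'kubectl get po --all-namespaces -a'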

@JimmyCYJ commented Apr 2, 2018

After running vagrant init flixtech/kubernetes and sudo vagrant up --provider virtualbox, running sudo make fails with:

$ sudo make
test -f cloned.vdi && vboxmanage closemedium disk cloned.vdi --delete || true
VBoxManage clonehd "/root/VirtualBox VMs/vagrant-kubernetes-1100_default_1522706860473_14921/box-disk001.vmdk" cloned.vdi --format vdi
VBoxManage: error: Failed to lock source media '/root/VirtualBox VMs/vagrant-kubernetes-1100_default_1522706860473_14921/box-disk001.vmdk'
VBoxManage: error: Details: code VBOX_E_INVALID_OBJECT_STATE (0x80bb0007), component MediumWrap, interface IMedium, callee nsISupports
VBoxManage: error: Context: "CloneTo(pDstMedium, ComSafeArrayAsInParam(l_variants), NULL, pProgress.asOutParam())" at line 987 of file VBoxManageDisk.cpp
Makefile:20: recipe for target '.vagrant/repartinioned' failed
make: *** [.vagrant/repartinioned] Error 1
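
Aside: VBOX_E_INVALID_OBJECT_STATE with "Failed to lock source media" usually means the source disk is still attached to a running VM, so VBoxManage cannot clone it. A hedged workaround would be to halt the VM before retrying, though, as the next comment points out, make is not needed at all with the prebuilt box:

    vagrant halt    # release the disk lock held by the running VM
    sudo make       # retry the repartition/clone step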

@hanikesn (Contributor) commented Apr 2, 2018

@JimmyCYJ You don't need to run make anymore when using vagrant init flixtech/kubernetes. Use those commands in a fresh directory; after adding the route with sudo route -n add 10.0.0.0/24 10.10.0.2, you can access the dashboard (10.0.0.3) and other cluster services directly via their IP addresses from your host.
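
Put together, a minimal sketch of that prebuilt-box workflow (the route -n add form is the macOS/BSD syntax quoted above; the ip route form is the Linux equivalent JimmyCYJ uses below):

    mkdir k8s-box && cd k8s-box
    vagrant init flixtech/kubernetes
    vagrant up --provider virtualbox

    # route the cluster service network into the VM
    sudo route -n add 10.0.0.0/24 10.10.0.2      # macOS / BSD
    sudo ip route add 10.0.0.0/24 via 10.10.0.2  # Linux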

@kimikowang (Author)

@hanikesn Yes, vagrant init flixtech/kubernetes; vagrant up --provider virtualbox works for me on Mac. I can now access the VM and see k8s running there. Thanks!

@JimmyCYJ commented Apr 2, 2018

@hanikesn thanks a lot for your help!
Running $ vagrant init flixtech/kubernetes followed by $ sudo vagrant up --provider virtualbox works.
I have also run $ sudo ip route add 10.0.0.0/24 via 10.10.0.2.

Question: how do I access the dashboard (10.0.0.3)? I open a browser and enter https://10.0.0.3/ui, but the dashboard does not show.
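
A few hedged checks for this (assumptions: the route from the previous step is in place, and the dashboard may answer on plain HTTP, since the systemctl output below shows it running with --insecure-bind-address=0.0.0.0):

    ip route show | grep 10.0.0.0   # confirm the route to the cluster network exists
    ping -c 3 10.0.0.3              # confirm the dashboard service IP is reachable
    curl -kv https://10.0.0.3/      # try HTTPS ...
    curl -v http://10.0.0.3/        # ... and plain HTTP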

Below is the status:

1. $ sudo vagrant status
    Current machine states:
    default running (virtualbox)

2. $ sudo vagrant ssh
    $ journalctl
    -- Logs begin at Wed 2018-03-28 12:20:49 GMT, end at Mon 2018-04-02 22:30:34 GMT. --
    Mar 28 12:20:49 contrib-stretch kernel: Linux version 4.9.0-6-amd64 ([email protected]) (gcc version 6.
    Mar 28 12:20:49 contrib-stretch kernel: Command line: BOOT_IMAGE=/boot/vmlinuz-4.9.0-6-amd64 root=UUID=111b7e66-ca88
    Mar 28 12:20:49 contrib-stretch kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
    Mar 28 12:20:49 contrib-stretch kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
    Mar 28 12:20:49 contrib-stretch kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
    Mar 28 12:20:49 contrib-stretch kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
    Mar 28 12:20:49 contrib-stretch kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'stan
    Mar 28 12:20:49 contrib-stretch kernel: x86/fpu: Using 'eager' FPU context switches.
    Mar 28 12:20:49 contrib-stretch kernel: e820: BIOS-provided physical RAM map:
    Mar 28 12:20:49 contrib-stretch kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
    Mar 28 12:20:49 contrib-stretch kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
    Mar 28 12:20:49 contrib-stretch kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
    Mar 28 12:20:49 contrib-stretch kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ffeffff] usable
    Mar 28 12:20:49 contrib-stretch kernel: BIOS-e820: [mem 0x000000003fff0000-0x000000003fffffff] ACPI data
    Mar 28 12:20:49 contrib-stretch kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
    Mar 28 12:20:49 contrib-stretch kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
    Mar 28 12:20:49 contrib-stretch kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
    Mar 28 12:20:49 contrib-stretch kernel: NX (Execute Disable) protection: active
    Mar 28 12:20:49 contrib-stretch kernel: SMBIOS 2.5 present.
    Mar 28 12:20:49 contrib-stretch kernel: DMI: innotek GmbH VirtualBox/VirtualBox, BIOS VirtualBox 12/01/2006
    Mar 28 12:20:49 contrib-stretch kernel: Hypervisor detected: KVM
    Mar 28 12:20:49 contrib-stretch kernel: Kernel/User page tables isolation: disabled
    Mar 28 12:20:49 contrib-stretch kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
    Mar 28 12:20:49 contrib-stretch kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
    Mar 28 12:20:49 contrib-stretch kernel: e820: last_pfn = 0x3fff0 max_arch_pfn = 0x400000000
    Mar 28 12:20:49 contrib-stretch kernel: MTRR default type: uncachable
    Mar 28 12:20:49 contrib-stretch kernel: MTRR variable ranges disabled:
    Mar 28 12:20:49 contrib-stretch kernel: MTRR: Disabled
    Mar 28 12:20:49 contrib-stretch kernel: x86/PAT: MTRRs disabled, skipping PAT initialization too.
    Mar 28 12:20:49 contrib-stretch kernel: CPU MTRRs all blank - virtualized system.
    Mar 28 12:20:49 contrib-stretch kernel: x86/PAT: Configuration [0-7]: WB WT UC- UC WB WT UC- UC
    Mar 28 12:20:49 contrib-stretch kernel: found SMP MP-table at [mem 0x0009fff0-0x0009ffff] mapped at [ffff997f4009fff
    Mar 28 12:20:49 contrib-stretch kernel: Base memory trampoline at [ffff997f40099000] 99000 size 24576
    Mar 28 12:20:49 contrib-stretch kernel: BRK [0x30739000, 0x30739fff] PGTABLE
    Mar 28 12:20:49 contrib-stretch kernel: BRK [0x3073a000, 0x3073afff] PGTABLE
    Mar 28 12:20:49 contrib-stretch kernel: BRK [0x3073b000, 0x3073bfff] PGTABLE
    Mar 28 12:20:49 contrib-stretch kernel: BRK [0x3073c000, 0x3073cfff] PGTABLE
    Mar 28 12:20:49 contrib-stretch kernel: BRK [0x3073d000, 0x3073dfff] PGTABLE
    Mar 28 12:20:49 contrib-stretch kernel: RAMDISK: [mem 0x35d5f000-0x36ea6fff]
    Mar 28 12:20:49 contrib-stretch kernel: ACPI: Early table checksum verification disabled
    Mar 28 12:20:49 contrib-stretch kernel: ACPI: RSDP 0x00000000000E0000 000024 (v02 VBOX )
    Mar 28 12:20:49 contrib-stretch kernel: ACPI: XSDT 0x000000003FFF0030 00003C (v01 VBOX VBOXXSDT 00000001 ASL 0000
    Mar 28 12:20:49 contrib-stretch kernel: ACPI: FACP 0x000000003FFF00F0 0000F4 (v04 VBOX VBOXFACP 00000001 ASL 0000
    Mar 28 12:20:49 contrib-stretch kernel: ACPI: DSDT 0x000000003FFF0470 0021FF (v02 VBOX VBOXBIOS 00000002 INTL 2018
    Mar 28 12:20:49 contrib-stretch kernel: ACPI: FACS 0x000000003FFF0200 000040
    Mar 28 12:20:49 contrib-stretch kernel: ACPI: FACS 0x000000003FFF0200 000040
    Mar 28 12:20:49 contrib-stretch kernel: ACPI: APIC 0x000000003FFF0240 00005C (v02 VBOX VBOXAPIC 00000001 ASL 0000
    Mar 28 12:20:49 contrib-stretch kernel: ACPI: SSDT 0x000000003FFF02A0 0001CC (v01 VBOX VBOXCPUT 00000002 INTL 2018
    Mar 28 12:20:49 contrib-stretch kernel: ACPI: Local APIC address 0xfee00000
    Mar 28 12:20:49 contrib-stretch kernel: No NUMA configuration found
    Mar 28 12:20:49 contrib-stretch kernel: Faking a node at [mem 0x0000000000000000-0x000000003ffeffff]
    Mar 28 12:20:49 contrib-stretch kernel: NODE_DATA(0) allocated [mem 0x3ffeb000-0x3ffeffff]
    Mar 28 12:20:49 contrib-stretch kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
    Mar 28 12:20:49 contrib-stretch kernel: kvm-clock: cpu 0, msr 0:3ffe3001, primary cpu clock
    Mar 28 12:20:49 contrib-stretch kernel: kvm-clock: using sched offset of 4299442129 cycles
    Mar 28 12:20:49 contrib-stretch kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb,
    Mar 28 12:20:49 contrib-stretch kernel: Zone ranges:
    Mar 28 12:20:49 contrib-stretch kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
    Mar 28 12:20:49 contrib-stretch kernel: DMA32 [mem 0x0000000001000000-0x000000003ffeffff]
    Mar 28 12:20:49 contrib-stretch kernel: Normal empty
    Mar 28 12:20:49 contrib-stretch kernel: Device empty
    Mar 28 12:20:49 contrib-stretch kernel: Movable zone start for each node
    Mar 28 12:20:49 contrib-stretch kernel: Early memory node ranges
    Mar 28 12:20:49 contrib-stretch kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
    Mar 28 12:20:49 contrib-stretch kernel: node 0: [mem 0x0000000000100000-0x000000003ffeffff]
    Mar 28 12:20:49 contrib-stretch kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000003ffeffff]
    Mar 28 12:20:49 contrib-stretch kernel: On node 0 totalpages: 262030
    Mar 28 12:20:49 contrib-stretch kernel: DMA zone: 64 pages used for memmap
    Mar 28 12:20:49 contrib-stretch kernel: DMA zone: 21 pages reserved
    Mar 28 12:20:49 contrib-stretch kernel: DMA zone: 3998 pages, LIFO batch:0
    Mar 28 12:20:49 contrib-stretch kernel: DMA32 zone: 4032 pages used for memmap
    Mar 28 12:20:49 contrib-stretch kernel: DMA32 zone: 258032 pages, LIFO batch:31

3. $ systemctl status
    ● contrib-stretch
    State: running
    Jobs: 0 queued
    Failed: 0 units
    Since: Mon 2018-04-02 22:17:08 GMT; 14min ago
    CGroup: /
    ├─user.slice
    │ └─user-1000.slice
    │ ├─session-9.scope
    │ │ ├─2689 sshd: vagrant [priv]
    │ │ ├─2698 sshd: vagrant@pts/0
    │ │ ├─2699 -bash
    │ │ ├─2777 systemctl status
    │ │ └─2778 pager
    │ └─[email protected]
    │ └─init.scope
    │ ├─2691 /lib/systemd/systemd --user
    │ └─2692 (sd-pam)
    ├─kube-proxy
    │ └─529 /usr/bin/hyperkube proxy --master=127.0.0.1:8080
    ├─init.scope
    │ └─1 /sbin/init
    ├─kubepods
    │ ├─besteffort
    │ │ ├─podacf85acb-3282-11e8-ab61-0800278dc04d
    │ │ │ ├─ce48a72309056b78398ec2bd09925fa2dd77c5399dbb919980afff178d8a5665
    │ │ │ │ └─1211 /pause
    │ │ │ └─a2a861c13a0edeced54e74dbe9b470e107888c7744a1e50eac6d50a8ccec947a
    │ │ │ └─1411 /k8s-mdns --logtostderr
    │ │ └─podace11f1c-3282-11e8-ab61-0800278dc04d
    │ │ ├─366be92eae58112e7b8827a83d50920319dd7c87160cc1daeae4eb9e637d4c26
    │ │ │ └─1487 /dashboard --insecure-bind-address=0.0.0.0 --bind-address=0.0.0.0
    │ │ └─ef8c5e6a9fd29d83961f4048ba8c0090923de32b363769b051d8a313df2a93ac
    │ │ └─1297 /pause
    │ └─burstable
    │ └─podacaeaacc-3282-11e8-ab61-0800278dc04d
    │ ├─b40785f605383d8949caa633ebe35ce82b049db61fbb974ff1f5c2ebd1c115bb
    │ │ └─1585 /sidecar --v=0 --logtostderr --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.k8s.lo
    │ ├─db6b831696eecb2bcde1bd48887f948a1c4d235155f07a1d0906fe8d289c2517
    │ │ └─1131 /pause
    │ ├─87b81d42ae55dd2233b9379253dbc171278863d58db7ebf5b9c69c10a131b8a7
    │ │ ├─1493 /dnsmasq-nanny -v=0 -logtostderr -configDir=/etc/k8s/dns/dnsmasq-nanny -restartDnsmasq=tru
    │ │ └─1552 /usr/sbin/dnsmasq -k --cache-size=1000 --log-facility=- --server=/k8s.local/127.0.0.1#1005
    │ └─93916f042f91249771ca0f8d31c6f6529c35cc9bb453afa3b7cc773523f6ad76
    │ └─1329 /kube-dns --domain=k8s.local. --dns-port=10053 --config-dir=/kube-dns-config --v=0
    └─system.slice
    ├─kube-etcd.service
    │ └─533 /usr/bin/etcd --data-dir=/var/etcd/data
    ├─dbus.service
    │ └─534 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation
    ├─docker.service
    │ ├─ 652 /usr/bin/dockerd -H fd://
    │ ├─ 683 docker-containerd --config /var/run/docker/containerd/containerd.toml
    │ ├─1110 docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.container
    │ ├─1193 docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.container
    │ ├─1282 docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.container
    │ ├─1311 docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.container
    │ ├─1396 docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.container
    │ ├─1456 docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.container
    │ ├─1469 docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.container
    │ └─1571 docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.container
    ├─kubelet.service
    │ └─859 /usr/bin/hyperkube kubelet --logtostderr=true --kubeconfig=/etc/kubeconfig.yml --hostname-overr
    ├─ssh.service
    │ └─656 /usr/sbin/sshd -D
    ├─[email protected]
    │ └─592 /sbin/dhclient -4 -v -pf /run/dhclient.eth0.pid -lf /var/lib/dhcp/dhclient.eth0.leases -I -df /
    ├─system-getty.slice
    │ └─[email protected]
    │ └─659 /sbin/agetty --noclear tty1 linux
    ├─kube-apiserver.service
    │ └─730 /usr/bin/hyperkube apiserver --service-cluster-ip-range=10.0.0.0/24 --cert-dir=/var/lib/kuberne
    ├─www\x2ddata.mount

@JimmyCYJ commented Apr 2, 2018

$ kubectl get po --all-namespaces -a
Flag --show-all has been deprecated, will be removed in an upcoming release
NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE
kube-system   k8s-mdns-55695877fd-wck9f               1/1       Running   2          5d
kube-system   kube-dns-6f997bddb6-h7n4k               3/3       Running   5          5d
kube-system   kubernetes-dashboard-7f7c874bc5-bkrcf   1/1       Running   2          5d
