This repository has been archived by the owner on Apr 29, 2019. It is now read-only.

/vagrant/artifacts/tls/admin.pem: no such file or directory #244

Open
mevam opened this issue May 6, 2017 · 26 comments

Comments

mevam commented May 6, 2017

==> master:
==> master: Switched to context "local".
==> master: Remote command execution finished.
==> master: Configuring Kubernetes DNS...
==> master: Executing remote command "/opt/bin/kubectl create -f /home/core/dns-controller.yaml"...
==> master:
==> master:
==> master: Error in configuration:
==> master: * unable to read client-cert /vagrant/artifacts/tls/admin.pem for default-admin due to open /vagrant/artifacts/tls/admin.pem: no such file or directory
==> master: * unable to read client-key /vagrant/artifacts/tls/admin-key.pem for default-admin due to open /vagrant/artifacts/tls/admin-key.pem: no such file or directory
==> master: * unable to read certificate-authority /vagrant/artifacts/tls/ca.pem for default-cluster due to open /vagrant/artifacts/tls/ca.pem: no such file or directory
==> master: Remote command execution finished.
The remote command "/opt/bin/kubectl create -f /home/core/dns-controller.yaml" returned a failed exit
code or an exception. The error output is shown below:

Error in configuration:

  • unable to read client-cert /vagrant/artifacts/tls/admin.pem for default-admin due to open /vagrant/artifacts/tls/admin.pem: no such file or directory
  • unable to read client-key /vagrant/artifacts/tls/admin-key.pem for default-admin due to open /vagrant/artifacts/tls/admin-key.pem: no such file or directory
  • unable to read certificate-authority /vagrant/artifacts/tls/ca.pem for default-cluster due to open /vagrant/artifacts/tls/ca.pem: no such file or directory

mevam commented May 6, 2017

How should I deal with it? Thank you.


pires commented May 8, 2017

Are you running this on Windows?


mevam commented May 10, 2017

Yes. What OS are you using?


pires commented May 10, 2017

MacOS.

Can you repeat the setup and share the entire log here? The TLS artifacts should've been generated when provisioning the master node.
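A quick way to verify that the artifacts exist at all, both on the host and inside the guest (a minimal check; it assumes the /vagrant share maps to the repository checkout on the host, as the paths above suggest):

# On the host, from the repository checkout:
ls -l artifacts/tls/
# Or from inside the guest:
vagrant ssh master -c 'ls -l /vagrant/artifacts/tls/'

If the directory is empty or missing, the certificate generation step never ran.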


mevam commented May 10, 2017

{ kubernetes-vagrant-coreos-cluster } master » vagrant up
Bringing machine 'master' up with 'virtualbox' provider...
Bringing machine 'node-01' up with 'virtualbox' provider...
Bringing machine 'node-02' up with 'virtualbox' provider...
==> master: Running triggers before up...
==> master: 2017-05-10 08:30:59 +0100: setting up Kubernetes master...
==> master: Setting Kubernetes version 1.6.2
==> master: Importing base box 'coreos-alpha'...
==> master: Matching MAC address for NAT networking...
==> master: Checking if box 'coreos-alpha' is up to date...
==> master: Setting the name of the VM: kubernetes-vagrant-coreos-cluster_master_1494401473960_53633
==> master: Clearing any previously set network interfaces...
==> master: Preparing network interfaces based on configuration...
master: Adapter 1: nat
master: Adapter 2: hostonly
==> master: Forwarding ports...
master: 22 (guest) => 2222 (host) (adapter 1)
==> master: Running 'pre-boot' VM customizations...
==> master: Booting VM...
==> master: Waiting for machine to boot. This may take a few minutes...
master: SSH address: 127.0.0.1:2222
master: SSH username: core
master: SSH auth method: private key
==> master: Machine booted and ready!
==> master: Setting hostname...
==> master: Configuring and enabling network interfaces...
==> master: Exporting NFS shared folders...
==> master: Preparing to edit nfs mounting file.
[NFS] Status: running
==> master: Mounting NFS shared folders...
==> master: Setting time zone...
==> master: Running provisioner: file...
==> master: Running provisioner: file...
==> master: Running provisioner: file...
==> master: Running provisioner: file...
==> master: Running provisioner: file...
==> master: Running provisioner: shell...
master: Running: inline script
==> master: Running provisioner: shell...
master: Running: inline script
==> master: Running provisioner: shell...
master: Running: inline script
==> master: Running provisioner: file...
==> master: Running provisioner: shell...
master: Running: inline script
==> master: Running triggers after up...
==> master: Waiting for Kubernetes master to become ready...
==> master: 2017-05-10 08:41:20 +0100: failed to deploy master within timeout count of 50
==> master: Installing kubectl for the Kubernetes version we just bootstrapped...
==> master: Executing remote command "sudo -u core /bin/sh /home/core/kubectlsetup install"...
==> master: Downloading and installing linux version of 'kubectl' v1.6.2 into /opt/bin. This may take a couple minutes, depending on your internet speed..
==> master: Configuring environment..
==> master: Remote command execution finished.
==> master: Executing remote command "/opt/bin/kubectl config set-cluster default-cluster --server=https://172.17.8.101 --certificate-authority=/vagrant/artifacts/tls/ca.pem"...
==> master: Cluster "default-cluster" set.
==> master: Remote command execution finished.
==> master: Executing remote command "/opt/bin/kubectl config set-credentials default-admin --certificate-authority=/vagrant/artifacts/tls/ca.pem --client-key=/vagrant/artifacts/tls/admin-key.pem --client-certificate=/vagrant/artifacts/tls/admin.pem"...
==> master: User "default-admin" set.
==> master: Remote command execution finished.
==> master: Executing remote command "/opt/bin/kubectl config set-context local --cluster=default-cluster --user=default-admin"...
==> master: Context "local" set.
==> master: Remote command execution finished.
==> master: Executing remote command "/opt/bin/kubectl config use-context local"...
==> master: Switched to context "local".
==> master: Remote command execution finished.
==> master: Configuring Kubernetes DNS...
==> master: Executing remote command "/opt/bin/kubectl create -f /home/core/dns-controller.yaml"...
==> master: Error in configuration:
==> master: * unable to read client-cert /vagrant/artifacts/tls/admin.pem for default-admin due to open /vagrant/artifacts/tls/admin.pem: no such file or directory
==> master: * unable to read client-key /vagrant/artifacts/tls/admin-key.pem for default-admin due to open /vagrant/artifacts/tls/admin-key.pem: no such file or directory
==> master: * unable to read certificate-authority /vagrant/artifacts/tls/ca.pem for default-cluster due to open /vagrant/artifacts/tls/ca.pem: no such file or directory
==> master: Remote command execution finished.
The remote command "/opt/bin/kubectl create -f /home/core/dns-controller.yaml" returned a failed exit
code or an exception. The error output is shown below:

Error in configuration:

  • unable to read client-cert /vagrant/artifacts/tls/admin.pem for default-admin due to open /vagrant/artifacts/tls/admin.pem: no such file or directory
  • unable to read client-key /vagrant/artifacts/tls/admin-key.pem for default-admin due to open /vagrant/artifacts/tls/admin-key.pem: no such file or directory
  • unable to read certificate-authority /vagrant/artifacts/tls/ca.pem for default-cluster due to open /vagrant/artifacts/tls/ca.pem: no such file or directory


mevam commented May 10, 2017

Is this what you need? I hope I understood correctly.


pires commented May 10, 2017

The master was not provisioned correctly. Can you log into the machine and check logs?

vagrant ssh master

If some unit failed, it should show up as soon as you log in. You can use journalctl to check the logs for that unit.
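Concretely, something like this surfaces a failed unit and its log (a sketch; the unit name here is only an example, not taken from this particular run):

systemctl --failed                                # list units that failed on this boot
journalctl -u kube-certs.service -b --no-pager    # full log for one suspect unit since boot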

@prateekrastogi

I am having similar issues on Windows with the exact same error. I then tried logging into the master, and after login the output of the create and journalctl commands was:

core@master ~ $ ls
dns-controller.yaml dns-service.yaml kubectlsetup
core@master ~ $ kubectl create -f dns-controller.yaml
The connection to the server localhost:8080 was refused - did you specify the right host or port?
core@master ~ $ journalctl
WARNING: terminal is not fully functional
-- Logs begin at Sat 2017-05-13 02:08:26 GMT-5, end at Sat 2017-05-13 03:03:23 GMT-5. --
May 13 02:08:26 localhost kernel: Linux version 4.11.0-coreos (jenkins@worker-1) (gcc version 4.9.4 (Gentoo Hardened 4.9.4 p1.0, pie-0.6.4) ) #1 SMP Wed May 10 22:35:23 UTC 2017
May 13 02:08:26 localhost kernel: Command line: BOOT_IMAGE=/coreos/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrf

May 13 02:08:26 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 13 02:08:26 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 13 02:08:26 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 13 02:08:26 localhost kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 13 02:08:26 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
May 13 02:08:26 localhost kernel: e820: BIOS-provided physical RAM map:
May 13 02:08:26 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 13 02:08:26 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 13 02:08:26 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 13 02:08:26 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ffeffff] usable
May 13 02:08:26 localhost kernel: BIOS-e820: [mem 0x000000003fff0000-0x000000003fffffff] ACPI data
May 13 02:08:26 localhost kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
May 13 02:08:26 localhost kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
May 13 02:08:26 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 13 02:08:26 localhost kernel: NX (Execute Disable) protection: active
May 13 02:08:26 localhost kernel: SMBIOS 2.5 present.
May 13 02:08:26 localhost kernel: DMI: innotek GmbH VirtualBox/VirtualBox, BIOS VirtualBox 12/01/2006
May 13 02:08:26 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 13 02:08:26 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 13 02:08:26 localhost kernel: AGP: No AGP bridge found
May 13 02:08:26 localhost kernel: e820: last_pfn = 0x3fff0 max_arch_pfn = 0x400000000
May 13 02:08:26 localhost kernel: MTRR default type: uncachable
May 13 02:08:26 localhost kernel: MTRR variable ranges disabled:
lines 1-26

@prateekrastogi

Also, the deployment works correctly on Windows for the 1.5.7 tag.


pires commented May 13, 2017

So the apiserver is not running. I need more logs.

@prateekrastogi

How can I generate more detailed logs?
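One hedged way to capture a complete log from the master for sharing (the output path is an assumption; /vagrant is the shared folder seen in the errors above):

vagrant ssh master
journalctl -b --no-pager > /vagrant/master_log.txt   # whole journal since boot, written to the shared folder on the host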


kirituo commented Jun 7, 2017

Did anyone find a workaround for this problem?


kirituo commented Jun 8, 2017

That's the log I got from the master:
master_log.txt

@TheFausap

Hello,
I have the same error. On the master there is, IMHO, an error in kube-certs.service: the script make-certs.sh is called without any arguments, so it displays a help message and exits.

[Unit]
Description=Generate Kubernetes API Server certificates
ConditionPathExists=/tmp/make-certs.sh
Requires=network-online.target
After=network-online.target
[Service]
ExecStartPre=-/usr/sbin/groupadd -r kube-cert
ExecStartPre=/usr/bin/chmod 755 /tmp/make-certs.sh
ExecStart=/tmp/make-certs.sh
Type=oneshot
RemainAfterExit=true

Output of systemctl status kube-certs:

Jun 18 08:25:03 master make-certs.sh[2141]: --restricted
Jun 18 08:25:03 master make-certs.sh[2141]: --verbose
Jun 18 08:25:03 master make-certs.sh[2141]: --version
Jun 18 08:25:03 master make-certs.sh[2141]: Shell options:
Jun 18 08:25:03 master make-certs.sh[2141]: -ilrsD or -c command or -O shopt_option (invocation o

Jun 18 08:25:03 master make-certs.sh[2141]: -abefhkmnptuvxBCHP or -o option
Jun 18 08:25:03 master systemd[1]: kube-certs.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Jun 18 08:25:03 master systemd[1]: Failed to start Generate Kubernetes API Server certificates.


TheFausap commented Jun 18, 2017

Found the issue: the two shell scripts (in my case) were still in Windows format (CRLF line endings), even though I deleted the git directory, ran those global git commands, and did git clone again. I don't know why.
So I fixed the EOL in Notepad++, and now the kube-certs service is OK and the VM creation continues without errors.
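A quick way to confirm the CRLF problem from inside the guest, and to strip the carriage returns without an editor (a sketch; the path is the one from the unit file above):

head -n 3 /tmp/make-certs.sh | cat -A      # CRLF endings show up as ^M$ at the end of each line
sudo sed -i 's/\r$//' /tmp/make-certs.sh   # strip the carriage returns in place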


pires commented Jun 19, 2017

@TheFausap open a PR, please.

@TheFausap

@pires Sorry, but I've never done that before :-) How can I do it?


pires commented Jun 22, 2017

@Sutty100

@TheFausap Could you at least tell us which files were malformed? I am facing the same issue.

@TheFausap

Sorry for the delay, today I will open the PR.


pezhore commented Aug 15, 2017

@TheFausap Any word on that PR? Or the files that were malformed?


munai-das commented Sep 20, 2017

Hi,

I am also facing this issue with the latest release tag, 1.7.4.

@munai-das

This was resolved for me after I uninstalled minikube.
Others, please check and confirm this.
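If you hit something similar, a hedged first check is whether kubectl is still pointing at a leftover minikube context instead of this cluster (the expected context for this setup is "local", per the provisioning log above):

kubectl config get-contexts       # the active context is marked with '*'
kubectl config current-context    # should print 'local', not 'minikube'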


yh1224 commented Oct 1, 2017

This might be caused by make-certs.sh failing. I noticed my tls/make-certs-*.sh files have CRLF line endings, even though I configured core.autocrlf to false.

Line ending conversion is forced by .gitattributes below.

* text=auto

https://git-scm.com/docs/gitattributes

When text is set to "auto", the path is marked for automatic end-of-line conversion. If Git decides that the content is text, its line endings are converted to LF on checkin. When the file has been committed with CRLF, no conversion is done.
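A sketch of a possible fix for Windows checkouts (an assumption, not something merged in this repository): pin the shell scripts to LF in .gitattributes and let Git renormalize the working tree:

# .gitattributes
*.sh text eol=lf

# then force Git to rewrite the working tree with the new endings
git rm --cached -r .
git reset --hard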


lujianmei commented Dec 6, 2017

Hi, I got the same issue. As you said, I tried to run vagrant ssh master, and it shows me the following (on macOS High Sierra):

$ vagrant ssh master
Last login: Wed Dec 6 19:43:38 GMT-8 2017 from 10.0.2.2 on ssh
Container Linux by CoreOS stable (1520.9.0)
Failed Units: 1
user-cloudinit@var-lib-coreos\x2dvagrant-vagrantfile\x2duser\x2ddata.service
core@master ~ $

After using journalctl, I found the following errors:

Dec 06 20:00:19 master sshd[2014]: pam_unix(sshd:session): session opened for user core by (uid=0)
Dec 06 20:00:19 master systemd[1]: Created slice User Slice of core.
Dec 06 20:00:19 master systemd[1]: Starting User Manager for UID 500...
Dec 06 20:00:19 master systemd-logind[731]: New session 3 of user core.
Dec 06 20:00:19 master systemd[1]: Started Session 3 of user core.
Dec 06 20:00:19 master systemd[2016]: pam_unix(systemd-user:session): session opened for user core by (uid=0)
Dec 06 20:00:19 master systemd[2016]: Reached target Paths.
Dec 06 20:00:19 master systemd[2016]: Reached target Timers.
Dec 06 20:00:19 master systemd[2016]: Reached target Sockets.
Dec 06 20:00:19 master systemd[2016]: Reached target Basic System.
Dec 06 20:00:19 master systemd[2016]: Reached target Default.
Dec 06 20:00:19 master systemd[2016]: Startup finished in 27ms.
Dec 06 20:00:19 master systemd[1]: Started User Manager for UID 500.
Dec 06 20:02:44 master locksmithd[735]: [etcd.service etcd2.service] are inactive
Dec 06 20:02:44 master locksmithd[735]: Unlocking old locks failed: [etcd.service etcd2.service] are inactive. Retrying in 5m0s.
Dec 06 20:07:44 master locksmithd[735]: [etcd.service etcd2.service] are inactive
Dec 06 20:07:44 master locksmithd[735]: Unlocking old locks failed: [etcd.service etcd2.service] are inactive. Retrying in 5m0s.
lines 2198-2216/2216 (END)

How can I solve this? Thank you.
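The failed unit is named in the login banner above; a hedged next step is to pull its log directly (quote the unit name, since it contains literal backslash escapes):

systemctl status 'user-cloudinit@var-lib-coreos\x2dvagrant-vagrantfile\x2duser\x2ddata.service'
journalctl -u 'user-cloudinit@var-lib-coreos\x2dvagrant-vagrantfile\x2duser\x2ddata.service' --no-pager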

@lujianmei

As I described above, I was using the stable CoreOS channel, but after I changed to alpha it is OK now. I am not sure what was wrong.
