This repository has been archived by the owner on Apr 29, 2019. It is now read-only.

Support the Kubernetes Metrics Server #319

Open · wants to merge 3 commits into master from metrics-server

Conversation

@abcdabcd987 (Contributor)

First of all, thank you, Mr. Pires, for this very handy Vagrant project.

I added support for the Kubernetes Metrics Server. plugins/metrics-server/ is taken from metrics-server/deploy/1.8+/, and I had to make some changes to manifests/master-apiserver.yaml according to this comment.

I've tested it on Mac, but I'm not sure whether it works on Windows, and I'm also not sure about RBAC.
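
For anyone curious, the manifest changes boil down to enabling the API aggregation layer. A sketch of the kube-apiserver flags involved, per the upstream documentation (the certificate paths are placeholders, not necessarily the ones this project uses):

# Aggregation-layer flags for kube-apiserver; paths below are illustrative.
--requestheader-client-ca-file=/etc/kubernetes/ssl/front-proxy-ca.pem
--requestheader-allowed-names=front-proxy-client
--requestheader-extra-headers-prefix=X-Remote-Extra-
--requestheader-group-headers=X-Remote-Group
--requestheader-username-headers=X-Remote-User
--proxy-client-cert-file=/etc/kubernetes/ssl/front-proxy-client.pem
--proxy-client-key-file=/etc/kubernetes/ssl/front-proxy-client-key.pem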

@bmcustodio (Collaborator)

@abcdabcd987 thank you for sending this PR! Following the merge of #320, it now has conflicts. Could you please resolve them?

@bmcustodio self-requested a review Oct 27, 2018
@bmcustodio self-assigned this Oct 27, 2018
@abcdabcd987 (Contributor, Author)

Resolved.

@pires (Owner) commented Oct 28, 2018

Thanks a lot for your kind words and contributions, @abcdabcd987.

The commit log shows that there is a merge commit. Can you rebase instead, so it's just one clean commit?
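
For reference, one way to do that, assuming origin is your fork and upstream points at this repository:

$ git fetch upstream
$ git rebase -i upstream/master   # mark the extra commits as fixup/squash
$ git push --force-with-lease origin metrics-server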

@abcdabcd987 force-pushed the metrics-server branch 2 times, most recently from 45949ce to 34e9433 on October 28, 2018
@abcdabcd987 (Contributor, Author)

Rebased.

@pires (Owner) left a comment

What do you think, @abcdabcd987, @bmcstdio?

@@ -146,6 +146,10 @@ Most aspects of your cluster setup can be customized with environment variables.

Defaults to `false`.

- **USE_METRICS_SERVER** defines whether to deploy or not the [Kubernetes Metrics Server](https://github.com/kubernetes-incubator/metrics-server)

Defaults to `false`.
@pires (Owner)

Suggested change:
- Defaults to `false`.
+ Defaults to `true`.

This is required for the Horizontal Pod Autoscaler to work.
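
For context, a minimal HPA exercise that only works once the Metrics Server is up (the deployment name and thresholds below are made up):

$ kubectl autoscale deployment my-app --cpu-percent=50 --min=1 --max=3
$ kubectl get hpa   # TARGETS stays <unknown> while metrics are unavailable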

@abcdabcd987 (Contributor, Author)

I would agree, because HPA is my motivation for adding this feature. The reason I set it to false by default is that I haven't tested it on Windows or with RBAC (and I probably won't, because I don't have that kind of setup).

@pires (Owner)

I'm sure @bmcstdio can help here, right, Bruno?

@bmcustodio (Collaborator)

Definitely. However, it seems to me that this PR needs some love. I checked out this branch and ran the following command:

$ NODES=1 USE_METRICS_SERVER=true vagrant up

It turns out the Kubernetes API is never reachable:

$ kubectl get node
Unable to connect to the server: net/http: TLS handshake timeout

This does not happen when USE_METRICS_SERVER is set to false.

I also noticed high load on the system, far higher than usual. While I am not sure, part of the root cause may be that the following flag is not being defined:

--enable-aggregator-routing=true

https://kubernetes.io/docs/tasks/access-kubernetes-api/configure-aggregation-layer/#enable-apiserver-flags
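
A few checks that may help narrow this down (resource names are the upstream defaults and may differ here):

$ kubectl -n kube-system get deploy metrics-server
$ kubectl get apiservice v1beta1.metrics.k8s.io   # should become Available=True
$ kubectl top nodes   # fails until the metrics API is actually served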

@abcdabcd987 (Contributor, Author)

Yeah, it definitely needs more testing. I only tested it on macOS. Also, good catch on --enable-aggregator-routing=true.

@bmcustodio removed their assignment Jul 23, 2021