Extend internal GCP Shoot cluster capabilities #56
Comments
Thanks @dkistner. A few suggestions:
For 2. and 3., doesn't that clash with the gardener-resource-manager? There would be a diff in the resource definition, which would then be reverted, again mutated, and so on... I think on Gardener-managed resources like e.g. the
The gardener-resource-manager applies its desired state every minute; it doesn't check any actual state and compute a diff - it simply applies. Hence, any webhook can freely mutate the object as desired. This also allows extensions to implement shoot webhooks.
Hi @rfranzke, our setup is the basis of what @dkistner described. Essentially, I agree that this could certainly be put into the hands of the user. I think it is more a question of what benefit Gardener gains from (not) offering it, compared to how much strain it puts on the user. In our team the Kubernetes knowledge is quite high, yet we still had to spend multiple weeks figuring out what is necessary and how the final setup should look (especially with regard to the `.internal.localAccess` flag, which translates to whether or not global access, i.e. cross-zone traffic, is allowed on the LB). Additionally, from a security point of view, it would give me some peace of mind knowing that there is no way a user can create an external load balancer. Yes, I could implement it with OPA or pure webhooks, but I still feel it would be more straightforward to offer it from the infrastructure side.
From my point of view this sounds more like an add-on on top of ANY existing cluster - there is nothing Gardener-specific in it. What do you mean by "strain"? Deploying the webhook into the cluster?
Can you explain more, maybe I haven't understood it completely. From what I got from @dkistner's description above, the only thing that needs to be done is registering a mutating webhook for
We/Gardener wouldn't do it any differently. We would also just register a mutating webhook in the shoot. I am not entirely against it, no worries; I'm just afraid that similar special use-cases reach us/our backlog. If the pull is high enough that's totally fine, but I've never heard of such a requirement from anybody else in the past years, so I guess it's very specific to your setup.
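For illustration only: registering such a mutating webhook in the shoot could look roughly like the sketch below. All names, the namespace, and the backend service are hypothetical; the point is merely that a webhook intercepting Service create/update events can rewrite them to internal load balancers, and (as noted above) the gardener-resource-manager's apply-only reconciliation will not revert those mutations.

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: internal-lb-enforcer                 # hypothetical name
webhooks:
  - name: services.internal-lb.example.com   # hypothetical
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail                      # reject Services that cannot be mutated
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["services"]
    clientConfig:
      service:
        namespace: kube-system
        name: internal-lb-webhook            # hypothetical backend service
        path: /mutate
```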
@dkistner how will we solve the API-server-to-VPN connectivity issue? Does this mean we will have to provision a dedicated seed in that same network and have every shoot with internalOnly scheduled on that seed? I guess this issue maybe makes sense to tackle once we invert the VPN connection direction?
I think yes, at least as long as we haven't inverted the direction of the VPN. Are there any plans to work on this?
"this" as in the inversion of the VPN, or this feature? If you mean this feature, I would agree with what you said: if there is enough pull for it, we can start working on it. If you mean the VPN inversion: yes, but not in the very near future.
No no, I meant the VPN inversion. Alright, thanks.
Yes, that is needed as long as the VPN connection is not initiated by the Shoot. How about structuring the internal section this way?
In case the cluster should not be explicitly internal-only, we can simply omit the
@dkistner but we need some means to identify that this shoot should be an internal shoot and thus be scheduled to the internal seed, right?
Hi, wouldn't it make sense to name the
What could be an indicator for that? I think currently the user needs to know which Seed is reachable network-wise, and needs to pin it via
I'm ok with more explicit names. A user can either specify
What would you like to be added:
The Gardener GCP provider already allows passing configuration for services of type LoadBalancer which are backed by internal load balancers. The user can specify in the infra config, via `.internal`, a CIDR range within the VPC which should be used to pick IP addresses for internal load balancer services. The extension will create a subnet in the VPC with the `.internal` CIDR.

I propose to extend this approach to allow users to specify an existing subnet, which can likewise be used to pick IP addresses for internal load balancer services.
In addition, it could also make sense to deploy Shoot clusters with only internal load balancer services, as there could be scenarios which require that for security/isolation reasons (those scenarios would of course require that the control-plane-hosting seed can access these environments).
The `InfrastructureConfig` could look like this: Either `.internal.cidr` or `.internal.subnet` can be specified. The `.internal.internalOnly` flag specifies that all load balancer services in the cluster need to be internal ones (including `vpn-shoot`); that can be enforced and/or validated via webhooks. The `.internal.localAccess` flag could be used to limit access to the internal load balancers to within the VPC. The following annotations need to be set on the services:
The annotation `networking.gke.io/internal-load-balancer-subnet` is currently available as an alpha feature. To enable it, the cloud provider config passed to the GCP cloud-controller-manager needs to contain `alpha-features="ILBCustomSubnet"`.
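As an illustration, a Service exposed via an internal load balancer in a specific subnet might carry annotations along these lines; the service and subnet names are made up:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app                                            # illustrative name
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
    # Alpha: requires alpha-features="ILBCustomSubnet" in the cloud provider
    # config passed to the GCP cloud-controller-manager.
    networking.gke.io/internal-load-balancer-subnet: "my-internal-subnet"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 443
      targetPort: 8443
```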
Why is this needed:
There are scenarios where users need to create a VPC upfront, with a subnet inside it that is routable in other contexts, e.g. internal networks.
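Putting the pieces together, the proposed `internal` section of the `InfrastructureConfig` might be sketched as follows; the surrounding structure and all values are illustrative assumptions, not a final API:

```yaml
apiVersion: gcp.provider.extensions.gardener.cloud/v1alpha1
kind: InfrastructureConfig
networks:
  workers: 10.250.0.0/19
  internal:
    # Exactly one of cidr or subnet would be specified:
    cidr: 10.251.0.0/24            # extension creates a subnet with this CIDR
    # subnet: my-existing-subnet   # ...or reuse an existing subnet
    internalOnly: true             # all LoadBalancer Services must be internal (incl. vpn-shoot)
    localAccess: true              # limit LB access to within the VPC (no global access)
```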
cc @DockToFuture