
CAPI providers work behind an HTTP(S) proxy #1424

Closed
6 of 8 tasks
Tracked by #1125
glitchcrab opened this issue Aug 8, 2022 · 18 comments
Assignees
Labels
area/kaas Mission: Cloud Native Platform - Self-driving Kubernetes as a Service kind/cross-team Epics that span across teams provider/cloud-director provider/cluster-api-aws Cluster API based running on AWS provider/openstack Related to provider OpenStack team/rocket Team Rocket topic/capi

Comments

@glitchcrab
Member

glitchcrab commented Aug 8, 2022

User Story

  • As a Giant Swarm customer on CAP(VCD|V|O..), I want my environment to use a central HTTP(S) proxy so that I have a central point at which to block traffic if I need to.

Tasks

In progress

Open

Done

Background:

The new environment will have to sit behind a proxy; this is a hard requirement from their security team. It is currently a transparent proxy, but it will be changed to an authenticated one in the near future.

Team Rocket already did this for kvm/legacy. How it was done is documented here: https://github.com/giantswarm/giantswarm/issues/17226#issuecomment-1226904904.

Related stories

@JosephSalisbury JosephSalisbury added the team/rocket Team Rocket label Aug 10, 2022
@JosephSalisbury
Contributor

We need to work out what we need to do / what the current status is

@JosephSalisbury
Contributor

Blocked by https://github.com/giantswarm/giantswarm/issues/17226, which documents the status quo.

@cornelius-keller cornelius-keller changed the title Telekom CAPVCD env will use an HTTP(S) proxy CAPI providers work beheind an HTTP(S) proxy Sep 21, 2022
@cornelius-keller cornelius-keller changed the title CAPI providers work beheind an HTTP(S) proxy CAPI providers work behind an HTTP(S) proxy Sep 21, 2022
@cornelius-keller cornelius-keller transferred this issue from giantswarm/roadmap Sep 21, 2022
@cornelius-keller cornelius-keller moved this to Near Term (1-3 months) in Roadmap Sep 21, 2022
@alex-dabija alex-dabija added area/kaas Mission: Cloud Native Platform - Self-driving Kubernetes as a Service topic/capi provider/cluster-api-aws Cluster API based running on AWS provider/cloud-director labels Sep 21, 2022
@cornelius-keller cornelius-keller added the provider/openstack Related to provider OpenStack label Sep 22, 2022
@kopiczko

kopiczko commented Oct 4, 2022

There are two different approaches:

  1. Using kyverno to inject proxy env vars.
  2. Converting all our apps to support proxy settings (in values.yaml).

Solution 2 has the advantage of not depending on kyverno, and it allows doing something very app-specific in case it's needed, but it requires far more work.
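Approach 1 could look roughly like the following Kyverno ClusterPolicy. This is a hedged sketch, not the actual Giant Swarm policy: the policy name, the namespace exclusion, and the proxy endpoint are illustrative assumptions.

```yaml
# Hypothetical sketch of approach 1: inject proxy env vars into all containers
# via a Kyverno mutating policy. Names and values are illustrative.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: inject-proxy-env   # hypothetical name
spec:
  rules:
    - name: add-proxy-env-vars
      match:
        any:
          - resources:
              kinds:
                - Pod
      exclude:
        any:
          - resources:
              namespaces:
                - kube-system   # leave system pods untouched
      mutate:
        patchStrategicMerge:
          spec:
            containers:
              # the (name): "*" anchor applies the patch to every container
              - (name): "*"
                env:
                  - name: HTTP_PROXY
                    value: "http://proxy.example.com:3128"   # illustrative endpoint
                  - name: HTTPS_PROXY
                    value: "http://proxy.example.com:3128"
                  - name: NO_PROXY
                    value: "svc,cluster.local,10.0.0.0/8"    # illustrative noProxy list
```

The downside discussed below is that the proxy values are baked into the policy configuration rather than derived per cluster.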

@cornelius-keller
Contributor

@alex-dabija @puja108 from a discussion with @giantswarm/team-rocket, the question came up of what the user experience for the customer should be.
As team-rocket we would prefer to configure the webhooks to only touch Giant Swarm workloads and apps, and not touch customer workloads.
Do we have any expectations on the customer side? If we leave the customer workloads alone, the customer would need to either configure all their apps to be proxy-aware, or run their own mutating webhook.

@cornelius-keller
Contributor

@kopiczko @bavarianbidi how would we distinguish customer workloads from managed apps installed in customer namespaces?

@alex-dabija

Personally, I think it's fine to focus only on our apps for now. I don't yet know whether our customer would be fine with that, though.

@kopiczko

kopiczko commented Oct 5, 2022

@kopiczko @bavarianbidi how would we distinguish customer workloads from managed apps installed in customer namespaces?

Do we keep any workloads outside the giantswarm, kube-system and flux* namespaces?

@kopiczko

kopiczko commented Oct 5, 2022

BTW there is also a chicken-and-egg problem: flux and the app platform are installed well before kyverno, so they will need to support proxy mode without webhooks. The same possibly applies to kyverno itself (if it needs internet access for anything).

@bavarianbidi

As kyverno is currently deployed in the giantswarm namespace, it's not possible to create a mutating configuration which affects pods in namespace giantswarm.

The general future goal would be to move kyverno out of namespace/giantswarm, but this isn't implemented yet.

@bavarianbidi

Current status:

With the make create-mc target below in mc-bootstrap (based on mc-bootstrap PR #264), it's possible to create a workload cluster behind a corporate proxy.

INSTALLATION=gario CUSTOMER=GS PROVIDER=cloud-director NEW_TEST_INSTALLATION=1 EDITOR=vim MC_PROXY_ENABLED=true MC_HTTP_PROXY="http://giantswarm:[email protected]:3128" MC_HTTPS_PROXY="http://giantswarm:[email protected]:3128" MC_NO_PROXY="vmware.ikoula.com" make create-mc

There are currently two "limitations":

  • CAPVCD: due to an issue in capvcd, I have to edit the status of the vcdcluster during the clusterctl move phase and add the existing RDE_ID (from the initial cluster creation) there (issue #24517 was created to keep track of that).
  • kyverno-policies-connectivity: once the cluster has been created, installations/<INSTALLATION_NAME>/apps/kyverno-policies-connectivity/configmap-values.yaml.patch in config must be updated to set the proper noProxy list for this cluster. This can be improved in two different ways:
    • kyverno-policies-connectivity could take care of cluster-values --> open topic in the RFC discussion
    • add an additional step in mc-bootstrap where we update the already created config in the config repo and write the cluster-specific noProxy values there.

    NOTE: as we apply kyverno-policies-connectivity at a very early stage with the right noProxy configuration, it's fine during the initial cluster creation. Only before the app platform takes over the configuration of this app do we have to update the configuration in the config repo.
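The configmap-values.yaml.patch update mentioned above might look something like the following. This is a purely illustrative sketch; the actual key structure in the config repo may differ.

```yaml
# Hypothetical configmap-values.yaml.patch fragment setting the cluster-specific
# noProxy list; the key names are assumptions, not the real config schema.
proxy:
  noProxy:
    - "vmware.ikoula.com"   # matches the MC_NO_PROXY value used during bootstrap
    - "10.0.0.0/8"          # illustrative internal ranges
    - "svc"
    - "cluster.local"
```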

A more detailed view/description of the current implementation can be found in this RFC, where some open questions still need to be discussed.

@kopiczko

add an additional step in mc-bootstrap where we update the already created config in config-repo and write the cluster-specific noProxy values there.

Do we need to put the extra values into config? Can't kyverno read those extra values directly from the cluster object in the cluster?

@bavarianbidi

Do we need to configure the config with the extra values? Can't kyverno read those extra values directly from cluster object in the cluster?

Yes: kyverno-policies-connectivity results in a Kyverno cluster policy which injects the http_proxy, https_proxy and no_proxy variables into all pods (outside namespace/kube-system) with the proxy configuration that all pods in this MC need.

As the kyverno-policies-connectivity App gets deployed as Kind: Konfigure, we have to make use of the cluster secret there.
(Please see this comment in the proxy RFC about the issue.)

@kopiczko

It looks like Kyverno policies are able to dynamically read values from arbitrary Kubernetes objects: https://kyverno.io/docs/writing-policies/external-data-sources/#variables-from-kubernetes-api-server-calls

We are/were even using this ourselves: https://github.com/giantswarm/kyverno-policies/blob/f7e3ed0c0e9e917ed5b3495e74b7da604831dad2/policies/aws/AWSCAPI.yaml#L49-L57

It is also possible to use the context value in the mutate: part of the policy.
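The API-call approach could be sketched roughly as below. This is a hedged illustration only: the policy name, the Cluster CR path, and the annotation key are assumptions, not part of the actual kyverno-policies-connectivity implementation.

```yaml
# Hypothetical sketch: read a cluster-specific value via a Kyverno apiCall
# context entry, then reference it in the mutate: part of the same rule.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: inject-no-proxy-from-cluster   # hypothetical name
spec:
  rules:
    - name: add-no-proxy
      match:
        any:
          - resources:
              kinds:
                - Pod
      context:
        - name: cluster
          apiCall:
            # illustrative path; a real policy would target the right Cluster CR
            urlPath: "/apis/cluster.x-k8s.io/v1beta1/namespaces/org-example/clusters/example"
      mutate:
        patchStrategicMerge:
          spec:
            containers:
              - (name): "*"
                env:
                  - name: NO_PROXY
                    # illustrative JMESPath into the fetched Cluster object
                    value: '{{ cluster.metadata.annotations."giantswarm.io/no-proxy" }}'
```

As noted in the next comment, an apiCall like this only works where the referenced CR actually exists in the cluster the policy runs in.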

@bavarianbidi

Good catch! But this implementation is limited to management clusters (as workload clusters don't have the cluster-specific CR in place).

Theoretically it should be possible to extend the existing kyverno-policies-connectivity chart to use the values from cluster.proxy.noProxy, proxy.noProxy, or dynamically discovered values via API server calls.

@gawertm gawertm added the kind/cross-team Epics that span across teams label Nov 23, 2022
@gawertm gawertm moved this from Near Term (1-3 months) to Ready Soon (<4 weeks) in Roadmap Nov 23, 2022
@erkanerol

erkanerol commented Nov 29, 2022

While creating guppy with the proxy enabled and all internet access blocked, I hit the issues here (#1424 (comment)) and three more issues.

UPDATE: Fixed with giantswarm/cluster-apps-operator#316

UPDATE: Fixed with giantswarm/cluster-cloud-director#57

  • silence-operator-sync jobs don't work because the environment variables are missing in the init container.

UPDATE: Resolved when the version of kyverno-policies was updated.

@erkanerol

guppy is up and running behind a proxy without direct internet access. The only remaining issue is the dynamic kyverno policy discussed above. I am going to try to implement it.

@erkanerol

Dynamic kyverno policy will be supported with this PR
giantswarm/kyverno-policies-connectivity#38

@erkanerol

CAPA and CAPVCD work fine behind a proxy. The dynamic kyverno policy is implemented. We decided to handle CAPO in a separate issue: #1783.

Repository owner moved this from Ready Soon (<4 weeks) to Released in Roadmap Dec 13, 2022