In {product-title} version {product-version}, you can install a cluster into a shared Virtual Private Cloud (VPC) on Google Cloud Platform (GCP) that uses infrastructure that you provide. In this context, a cluster installed into a shared VPC is a cluster that is configured to use a VPC from a project different from where the cluster is being deployed.
A shared VPC enables an organization to connect resources from multiple projects to a common VPC network. You can communicate within the organization securely and efficiently by using internal IPs from that network. For more information about shared VPC, see Shared VPC overview in the GCP documentation.
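For orientation, a shared VPC is established by enabling it on the host project and then attaching the service project that consumes its network. A minimal sketch with the `gcloud` CLI, assuming hypothetical project IDs `example-host-project` and `example-service-project`:

```shell
# Enable shared VPC on the host project (the project that owns the VPC network).
gcloud compute shared-vpc enable example-host-project

# Attach the service project (where the cluster is deployed) so that it
# can use subnets from the host project's VPC network.
gcloud compute shared-vpc associated-projects add example-service-project \
    --host-project example-host-project

# Verify the association.
gcloud compute shared-vpc list-associated-resources example-host-project
```

You must have the Shared VPC Admin role on the host project's organization or folder to run these commands.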
The steps for performing a user-provided infrastructure installation into a shared VPC are outlined here. Several Deployment Manager templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods.
[IMPORTANT]
====
The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure that you provide requires knowledge of the cloud provider and the installation process of {product-title}. Several Deployment Manager templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods; the templates are just an example.
====
- You reviewed details about the {product-title} installation and update processes.
- You read the documentation on selecting a cluster installation method and preparing it for users.
- If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to.
+
[NOTE]
====
Be sure to also review this site list if you are configuring a proxy.
====
- If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the `kube-system` namespace, you can manually create and maintain long-term credentials.
Before you can install {product-title}, you must configure a Google Cloud Platform (GCP) project to host it.
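The project setup typically includes creating the project and enabling the service APIs that the installation depends on. A hedged sketch, assuming a hypothetical project ID `example-project`; the API list below is a common baseline, not an exhaustive requirement:

```shell
# Create the service project that hosts the cluster (ID is a placeholder).
gcloud projects create example-project --name="example-project"

# Set it as the active project for subsequent commands.
gcloud config set project example-project

# Enable APIs commonly required for installation; adjust for your environment.
gcloud services enable \
    compute.googleapis.com \
    dns.googleapis.com \
    iam.googleapis.com \
    iamcredentials.googleapis.com \
    cloudresourcemanager.googleapis.com \
    serviceusage.googleapis.com \
    storage-api.googleapis.com
```

The project must also be linked to a billing account before resources such as compute instances can be created.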
For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines.
This section describes the requirements for deploying {product-title} on user-provisioned infrastructure.
modules/installation-machine-requirements.adoc
modules/installation-minimum-resource-requirements.adoc
modules/installation-gcp-tested-machine-types.adoc
modules/installation-using-gcp-custom-machine-types.adoc
modules/installation-gcp-user-infra-config-host-project-vpc.adoc
modules/installation-gcp-dns.adoc
modules/installation-creating-gcp-vpc.adoc
modules/installation-deployment-manager-vpc.adoc
modules/installation-extracting-infraid.adoc
modules/installation-user-infra-exporting-common-variables.adoc
modules/installation-creating-gcp-lb.adoc
modules/installation-deployment-manager-ext-lb.adoc
modules/installation-deployment-manager-int-lb.adoc
modules/installation-creating-gcp-private-dns.adoc
modules/installation-deployment-manager-private-dns.adoc
modules/installation-creating-gcp-firewall-rules-vpc.adoc
modules/installation-deployment-manager-firewall-rules.adoc
modules/installation-creating-gcp-iam-shared-vpc.adoc
modules/installation-deployment-manager-iam-shared-vpc.adoc
modules/installation-creating-gcp-bootstrap.adoc
modules/installation-deployment-manager-bootstrap.adoc
The cluster requires several firewall rules. If you do not use a shared VPC, the Ingress Controller creates these rules through the GCP cloud provider. When you use a shared VPC, you can either create cluster-wide firewall rules for all services now, or create each rule on demand when the cluster requests access. By creating each rule when the cluster requests access, you know exactly which firewall rules are required. By creating cluster-wide firewall rules, you can apply the same rule set across multiple clusters.
If you choose to create each rule based on events, you must create firewall rules after you provision the cluster and during the life of the cluster when the console notifies you that rules are missing. Events that are similar to the following event are displayed, and you must add the firewall rules that are required:
[source,terminal]
----
$ oc get events -n openshift-ingress --field-selector="reason=LoadBalancerManualChange"
----

.Example output
[source,terminal]
----
Firewall change required by security admin: `gcloud compute firewall-rules create k8s-fw-a26e631036a3f46cba28f8df67266d55 --network example-network --description "{\"kubernetes.io/service-name\":\"openshift-ingress/router-default\", \"kubernetes.io/service-ip\":\"35.237.236.234\"}" --allow tcp:443,tcp:80 --source-ranges 0.0.0.0/0 --target-tags exampl-fqzq7-master,exampl-fqzq7-worker --project example-project`
----
If you encounter issues when creating firewall rules based on these events, you can configure cluster-wide firewall rules while your cluster is running.
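A cluster-wide rule follows the same shape as the command in the event above. A hedged sketch, assuming a hypothetical network `example-network`, host project `example-host-project`, and the placeholder network tags from the example output; in a shared VPC, the rule must be created in the host project that owns the network:

```shell
# Allow Ingress traffic (HTTP/HTTPS) from anywhere to the cluster nodes.
# Network name, project, and target tags are placeholders; substitute the
# values from your host project and your cluster's infrastructure ID.
gcloud compute firewall-rules create example-cluster-ingress \
    --network example-network \
    --allow tcp:80,tcp:443 \
    --source-ranges 0.0.0.0/0 \
    --target-tags exampl-fqzq7-master,exampl-fqzq7-worker \
    --project example-host-project
```

Because this rule targets the node network tags rather than a single service, it covers any service that the Ingress Controller exposes on these ports across the cluster.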
- See About remote health monitoring for more information about the Telemetry service.
- If necessary, you can opt out of remote health reporting.