- Introduction
- Architecture
- Prerequisites
- Deployment Steps
- Validation
- Verify the pod-to-service communication
- Tear Down
- Troubleshooting
- Deleting Resources Manually
- Relevant Materials
Google Cloud networking with Kubernetes Engine clusters can be complex. This project strives to simplify the best practices for exposing cluster services to other clusters and establishing network links between Kubernetes Engine clusters running in separate projects or between a Kubernetes Engine cluster and a cluster running in an on-premises datacenter.
The code contains a set of Deployment Manager templates that allows a user to create networks, subnets, and Kubernetes Engine clusters. This project demonstrates the following best practices.
- Network design: launching Kubernetes Engine clusters in custom networks.
- Assigning node, container, and service CIDR ranges for Kubernetes Engine clusters (a minimal gcloud sketch follows this list).
- IP range management.
- Exposing pods of Kubernetes Engine clusters over networks connected using VPN.
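For orientation, the following is a minimal, hand-rolled sketch of the same idea: creating a custom network and subnet and then launching a cluster with explicit container and service CIDR ranges. The values mirror subnet1-us-west1 and cluster1 from the tables below. The demo itself creates these resources through the Deployment Manager templates, which may configure the cluster differently; `--enable-ip-alias` is used here only as one way to pin both ranges explicitly.

```
# Sketch only: the demo creates these resources via Deployment Manager.
# Values mirror network1, subnet1-us-west1, and cluster1 from the overview below.
gcloud compute networks create network1 --subnet-mode=custom

gcloud compute networks subnets create subnet1-us-west1 \
    --network=network1 --region=us-west1 --range=10.1.0.0/28

gcloud container clusters create cluster1 \
    --zone=us-west1-b \
    --network=network1 \
    --subnetwork=subnet1-us-west1 \
    --enable-ip-alias \
    --cluster-ipv4-cidr=10.108.0.0/19 \
    --services-ipv4-cidr=10.208.0.0/20 \
    --num-nodes=3 \
    --image-type=COS
```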
This example also includes Kubernetes manifests for:
- Deploying Nginx pods in clusters.
- Exposing the Nginx pods of each cluster with different service types: ClusterIP, NodePort, internal load balancer, network load balancer, and Ingress (a minimal kubectl sketch follows this list).
- Validating pod-to-service communication over networks connected using VPN.
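As a rough illustration of what those manifests do (the files under /manifests/ are the source of truth; the deployment and service names here are illustrative), the nginx pods and two of the service types could be created as follows:

```
# Sketch only: the repository's manifests under /manifests/ are authoritative.
kubectl create deployment my-nginx --image=nginx

# Expose the pods inside the cluster (ClusterIP)...
kubectl expose deployment my-nginx --name=my-nginx-clusterip --port=80 --target-port=80

# ...and externally through a network load balancer.
kubectl expose deployment my-nginx --name=my-nginx-lb --type=LoadBalancer --port=80 --target-port=80

# An internal load balancer is a LoadBalancer Service annotated with
#   cloud.google.com/load-balancer-type: "Internal"
```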
Running this demo in the GCP environment creates two custom GCP networks. Each network has two subnets: one in the us-west1 region and the other in the us-central1 region. Each subnet hosts a Kubernetes Engine cluster that runs nginx pods, together with services that expose those pods to the other clusters. The two networks are connected using VPN. Kubernetes Engine internal load balancers are regional services, and a VPN gateway is needed in each region to reach the ILB services of that region; hence four VPN gateways are created, two in each network. Please refer to https://cloud.google.com/compute/docs/load-balancing/internal/#global_routing_issue for more details.
In this project, route-based VPN is used rather than policy-based VPN to establish pod-to-service communication. In the VPN tunnel configuration, the node, pod, and service CIDRs of the peer network must be added so that nodes, pods, and services can reach the services exposed by the other clusters.
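A hedged sketch of what that amounts to with Classic VPN (the actual gateways and tunnels are created by the Deployment Manager templates; the peer address and shared secret below are placeholders):

```
# Sketch only: the demo's gateways and tunnels come from Deployment Manager.
gcloud compute vpn-tunnels create vpn1-deployment-tunnel \
    --region=us-west1 \
    --target-vpn-gateway=vpn1-deployment-gateway \
    --peer-address=<vpn3-static-ip> \
    --shared-secret=<secret> \
    --ike-version=2 \
    --local-traffic-selector=0.0.0.0/0 \
    --remote-traffic-selector=0.0.0.0/0

# Route-based VPN: add one route per remote range, e.g. the pod CIDR of cluster3
# in network2; repeat for the peer node and service CIDRs.
gcloud compute routes create route-to-cluster3-pods \
    --network=network1 \
    --destination-range=10.128.0.0/19 \
    --next-hop-vpn-tunnel=vpn1-deployment-tunnel \
    --next-hop-vpn-tunnel-region=us-west1
```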
Below is a detailed overview of the GCP resources that will be created.
- Subnet: subnet1-us-west1 (10.1.0.0/28)
cluster-ipv4-cidr | service-ipv4-cidr | zone | Initial Node count | Node Image |
---|---|---|---|---|
10.108.0.0/19 | 10.208.0.0/20 | us-west1-b | 3 | COS |
- Subnet: subnet2-us-central1 (10.2.0.0/28)
cluster-ipv4-cidr | service-ipv4-cidr | zone | Initial Node count | Node Image |
---|---|---|---|---|
10.118.0.0/19 | 10.218.0.0/20 | us-central1-b | 3 | COS |
- ClusterIP, NodePort, ILB, LB, and Ingress services to expose pods in each of those clusters.
- VPN gateways
Gateway name | Google IP address | Network | Region | Tunnels |
---|---|---|---|---|
vpn1-deployment-gateway | x.x.x.x | network1 | us-west1 | vpn1-deployment-tunnel |
vpn2-deployment-gateway | x.x.x.x | network1 | us-central1 | vpn2-deployment-tunnel |
- VPN Tunnels
Tunnel name | Status | Google gateway | Google IP address | Google network | Region | Peer IP address | Routing type |
---|---|---|---|---|---|---|---|
vpn1-deployment-tunnel | Established | vpn1-deployment-gateway | x.x.x.x | network1 | us-west1 | vpn3-static-ip | Route-based |
vpn2-deployment-tunnel | Established | vpn2-deployment-gateway | x.x.x.x | network1 | us-central1 | vpn4-static-ip | Route-based |
- Subnet: subnet3-us-west1 (10.11.0.0/28)
cluster-ipv4-cidr | service-ipv4-cidr | zone | Initial Node count | Node Image |
---|---|---|---|---|
10.128.0.0/19 | 10.228.0.0/20 | us-west1-c | 3 | COS |
- Subnet: subnet4-us-central1 (10.12.0.0/28)
cluster-ipv4-cidr | service-ipv4-cidr | zone | Initial Node count | Node Image |
---|---|---|---|---|
10.138.0.0/19 | 10.238.0.0/20 | us-central1-c | 3 | COS |
- ClusterIP, NodePort, ILB, LB, and Ingress services to expose pods in each of those clusters.
- VPN gateways
Gateway name | Google IP address | Network | Region | Tunnels |
---|---|---|---|---|
vpn3-deployment-gateway | x.x.x.x | network2 | us-west1 | vpn3-deployment-tunnel |
vpn4-deployment-gateway | x.x.x.x | network2 | us-central1 | vpn4-deployment-tunnel |
- VPN Tunnels
Tunnel name | Status | Google gateway | Google IP address | Google network | Region | Peer IP address | Routing type |
---|---|---|---|---|---|---|---|
vpn3-deployment-tunnel | Established | vpn3-deployment-gateway | x.x.x.x | network2 | us-west1 | vpn1-static-ip | Route-based |
vpn4-deployment-tunnel | Established | vpn4-deployment-gateway | x.x.x.x | network2 | us-central1 | vpn2-static-ip | Route-based |
- The region for the subnets and the node CIDR ranges can be customized in /network/network.yaml.
- Cluster attributes such as zone, image, node count, cluster CIDR, and service CIDR can be customized in /clusters/cluster.yaml.
- To add further custom attributes to the networks or clusters, update the YAML files (.yaml) and Deployment Manager scripts (.py) under "/network/" or "/clusters/" accordingly (a Deployment Manager preview sketch follows this list).
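If you edit those files, one way to sanity-check the changes before running the install script is a Deployment Manager preview; the deployment name below is a throwaway placeholder, not something install.sh creates:

```
# Sketch only: install.sh drives the real deployments.
gcloud deployment-manager deployments create network-preview \
    --config network/network.yaml \
    --preview

# Inspect the planned resources, then discard the preview.
gcloud deployment-manager deployments describe network-preview
gcloud deployment-manager deployments delete network-preview
```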
A Google Cloud account and project are required for this demo. You need access to an existing Google Cloud project with the Kubernetes Engine service enabled. If you do not have a Google Cloud account, please sign up for a free trial.
Click the button below to run the demo in a Google Cloud Shell.
When using Cloud Shell, all the tools needed for the demo are already installed. Execute the following command to set up the gcloud CLI.
gcloud init
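Alternatively, if you already know the target project, a non-interactive setup also works (the project ID below is a placeholder):

```
# Non-interactive alternative; substitute your own project ID.
gcloud auth login
gcloud config set project my-gcp-project-id
```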
This project will run on macOS or in a Google Cloud Shell.
When not using Cloud Shell, the following tools are required.
- gcloud CLI (>= Google Cloud SDK 200.0.0)
- bash
- kubectl (>= v1.10.0-gke.0)
- jq
- Kubernetes Engine >= 1.10.0-gke.0
- Pull the code from the git repository (these steps are summarized in a sketch after this list).
- Optionally, customize the configuration in the .yaml files under /network/, /clusters/, or /manifests/.
- The root folder is the "Kubernetes Engine-networking-demos" folder.
- The "network" folder contains the manifest files and Deployment Manager templates to setup networks.
- The "clusters" folder contains the manifest files and Deployment Manager templates to create Kubernetes Engine clusters.
- The "manifests" folder contains the manifest files to create Kubernetes Engine services.
The following steps will allow a user to run this demo.
- Change directory to
gke-to-gke-vpn
- Run
./install.sh
- Make sure that there are no errors in the install script execution.
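Taken together, the steps above amount to something like the following; the repository URL is an assumption, so substitute wherever you actually pull the code from:

```
# The repository URL below is an assumption; use the remote that hosts this demo.
git clone https://github.com/GoogleCloudPlatform/gke-networking-demos.git
cd gke-networking-demos/gke-to-gke-vpn
./install.sh
```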
- Log in to the GCP console.
- Use the navigation menu, accessible at the top-left of the console, to select services in the following steps.
- Select "VPC networks" and confirm that CIDR ranges of subnet1-us-west1 is 10.1.0.0/28 and subnet2-us-central1 is 10.2.0.0/28 the specification.
- Select "Compute Engine"-> VM instances and see that the cluster VM instances are are drawn from the subnet's CIDR ranges.
- Select "Kubernetes Engine"->"cluster1" and see that "Container address range" matches the diagram (10.108.0.0/19). Repeat for the other three clusters:
- Repeat for the other three clusters:
- cluster2: 10.118.0.0/19
- cluster3: 10.128.0.0/19
- cluster4: 10.138.0.0/19
- Repeat for the other three clusters:
- Select "Kubernetes Engine"-> "Workloads" and verify that the status is OK for nginx pods.
- Select "Kubernetes Engine" -> "Services" and see that the cluster ip, nodeport, ILB and LB are created for cluster1 and that cluster IP address of all the services for a cluster are drawn the service ipv4 CIDR range
- Try to access the IP of the external load balancer to view the nginx pods. The external IP will be displayed in the "my-nginx-lb" row:
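If you prefer the command line to the console, the same checks can be approximated as follows; cluster names, zones, and the service name are taken from the tables and steps above, so adjust them if you customized anything:

```
# Subnet CIDR ranges (console step: "VPC networks").
gcloud compute networks subnets list \
    --filter="name~subnet" \
    --format="table(name,region.basename(),ipCidrRange)"

# Container (pod) and service address ranges for cluster1; repeat per cluster.
gcloud container clusters describe cluster1 --zone us-west1-b \
    --format="value(clusterIpv4Cidr,servicesIpv4Cidr)"

# Pods and services in cluster1; service cluster IPs should fall inside
# the cluster's service CIDR (10.208.0.0/20 for cluster1).
gcloud container clusters get-credentials cluster1 --zone us-west1-b
kubectl get pods -o wide
kubectl get svc -o wide

# nginx served through the external load balancer.
EXTERNAL_IP=$(kubectl get svc my-nginx-lb \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -s "http://${EXTERNAL_IP}" | head -n 5
```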
- Change directory to
gke-to-gke-vpn
- Run
./validate.sh
- Clusters in the same region communicate through the internal load balancer.
- Clusters across different regions communicate through the global load balancer, unless they are peered via VPN. When peered via VPN, clusters can still communicate via internal load balancers.
- All the services created to expose pods in a cluster are accessible to pods within that cluster.
- Refer to validate-pod-to-service-communication.sh script to view the commands to verify pod to service communication.
- Change directory back to project root. Run
./validate-pod-to-service-communication.sh
- The above script demonstrates how the pods in cluster1 can access the local Kubernetes Engine services as well as the internal/external load balancer services of the other Kubernetes Engine clusters in the same or different regions (a rough sketch of this kind of check follows).
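The kind of check that script performs can be sketched roughly as below; the pod label and service name are assumptions, and the script itself remains the authoritative reference:

```
# Sketch only: validate-pod-to-service-communication.sh contains the real
# commands. Pod label and service name are assumptions; curl must be
# available inside the pod image.
gcloud container clusters get-credentials cluster1 --zone us-west1-b

POD=$(kubectl get pods -l app=my-nginx -o jsonpath='{.items[0].metadata.name}')
SVC_IP=$(kubectl get svc my-nginx-clusterip -o jsonpath='{.spec.clusterIP}')

# curl the ClusterIP service from inside a pod; the same pattern applies to the
# ILB/LB addresses exposed by the clusters in the peer network over the VPN.
kubectl exec "$POD" -- curl -s -o /dev/null -w '%{http_code}\n' "http://${SVC_IP}"
```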
- Change directory to
gke-to-gke-vpn
- Run
./cleanup.sh
- Verify that the script executed with no errors.
- Verify that all the resources created by the demo have been deleted (a quick CLI check follows this list).
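A quick way to double-check the teardown from the command line:

```
# After cleanup.sh finishes, these listings should no longer show demo
# resources (assuming nothing else in the project uses similar names).
gcloud deployment-manager deployments list
gcloud container clusters list
gcloud compute networks list --filter="name~network"
```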
- Remember to enable the required APIs in the project where the resources are to be created; otherwise, an "API not enabled" error is thrown (a sketch for enabling them follows this list).
- Make sure the GCP account has the right permissions to create the GCP/Kubernetes Engine resources described above; otherwise, a permission-denied error is thrown.
- Make sure that the deployments created through the install script are deleted before you try to reinstall the resources; otherwise, the resources will not be installed properly.
- If there are any errors during the cleanup script execution, refer to the steps for deleting resources manually.
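For the API errors mentioned above, the relevant services can be enabled ahead of time; the list below covers the services this demo relies on (Deployment Manager, Compute Engine, and Kubernetes Engine):

```
# Enable the APIs this demo relies on.
gcloud services enable \
    deploymentmanager.googleapis.com \
    compute.googleapis.com \
    container.googleapis.com
```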
- Select the project in the GCP Cloud Console.
- Go to Kubernetes Engine -> Services and delete all the services created through the install script.
- Go to Network Services -> Load Balancing and delete the load balancers along with the associated health checks.
- Go to Compute Engine -> VM Instances and delete all the instances created through the install script.
- Go to Compute Engine -> Instance Groups and delete all the instance groups created through the install script.
- Go to VPC Networks -> Firewall Rules and delete the firewall rules created for network1.
- Go to Deployment Manager -> Deployments and delete the VPN, static IP, cluster, and network deployments, in that order (gcloud equivalents are sketched below).
- Delete the dependent resources manually if a network deployment does not get deleted.
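Where the console steps above have direct gcloud equivalents, a sketch looks like the following; the deployment names are illustrative, so list your actual deployments first and substitute them:

```
# List what install.sh created, then delete in dependency order:
# VPN -> static IP -> cluster -> network deployments.
gcloud deployment-manager deployments list

# Names below are illustrative; substitute the ones shown by the listing.
gcloud deployment-manager deployments delete vpn1-deployment --quiet
gcloud deployment-manager deployments delete static-ip-deployment --quiet
gcloud deployment-manager deployments delete cluster1-deployment --quiet
gcloud deployment-manager deployments delete network1-deployment --quiet
```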