diff --git a/docs/README.md b/docs/README.md
index e93c8f6b5..fc9b30bf1 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -7,10 +7,10 @@
 Every cloud has a concept of a "network". AWS and GCP call it the VPC. The VPC
 will hold everything that you will ever run or create in the cloud. Items such
 as instances, subnets, firewall rules, databases, queues, load balancers, etc.
 Since it is such a foundational piece that sits at pretty much the bottom of the stack it
-is very important to get this correct because trying to make changes to this laster
+is very important to get this correct because trying to make changes to this later
 with everything running on it could turn out to be very difficult or impossible
 without downtime and/or a lot of reconfiguration of items that are running in
-this VPC.
+this VPC. We also want to take exclusive control of creating and managing this VPC.
 A lot of tools that create Kubernetes clusters for you have the option of creating the
@@ -125,7 +125,7 @@
 not everyone has access to it. Depending on your requirements, you might limit
 access or even have to go through some approvals to get access to any parts of
 this infrastructure.
-why so many?
+### Why so many?
 By doing this you have environments like `dev` where developers are delivering
 new application code into it while they are working and testing it. This code
diff --git a/docs/accessing-private-vpc-from-ci-system.md b/docs/accessing-private-vpc-from-ci-system.md
index ce689913b..284e3c9b5 100644
--- a/docs/accessing-private-vpc-from-ci-system.md
+++ b/docs/accessing-private-vpc-from-ci-system.md
@@ -4,10 +4,10 @@
 A lot of the popular CI/CD systems that are hosted and are on the internet:
 * Github Actions
 * Gitlab
-* CiricleCI
+* CircleCI
 * CodeFresh
-The best practice for our VPCs and Kubernetes cluster is to have only internal addresses.
+The best practice for our VPCs and Kubernetes clusters is to have only internal addresses.
 The problem is how do these CI/CD systems get access to our private VPC and
 Kubernetes clusters which do not have any public IPs they can reach?
@@ -39,7 +39,7 @@
 Doc: [https://aws.amazon.com/blogs/aws/new-port-forwarding-using-aws-system-mana
 This can potentially span access from the CI/CD system to a private VPC network.
-This is however, a unquiely an AWS only solution since other cloud providers does not have something like this.
+This is, however, a uniquely AWS-only solution since other cloud providers do not have something like this.
 ## Slack overlay network
 This is an interesting idea on how to span networks: [https://slack.engineering/introducing-nebula-the-open-source-global-overlay-network-from-slack-884110a5579](https://slack.engineering/introducing-nebula-the-open-source-global-overlay-network-from-slack-884110a5579)
@@ -65,31 +65,31 @@
 back the output.
 1) The CI/CD system is instructed to run this Fargate container
 2) Launching the Fargate container
-* This "step" should have the approapriate AWS IAM access to launch this.
+* This "step" should have the appropriate AWS IAM access to launch this.
 * It will launch the predetermined container on Fargate in the targeted private VPC.
-* This step will call the AWS API with the appropriate information to launch the Fargate task
+* This step will call the AWS API with the appropriate information to launch the Fargate task.
 3) Fargate container launches
-* The Fargate container launches inside the VPC that was targeted
-* This container runs
+* The Fargate container launches inside the VPC that was targeted.
+* This container runs.
 4) The Kubernetes update process
-* The container runs through to update Kubernetes and whatever else this container is programed to do
+* The container runs through to update Kubernetes and whatever else this container is programmed to do.
 5) Fargate container logs
-* Logs from the Fargate container is extracted and outputted to the CI/CD systems output
-* This allows someone to inspect this pipeline run from the CI/CD system on what happend
+* Logs from the Fargate container are extracted and output to the CI/CD system's output.
+* This allows someone to inspect this pipeline run from the CI/CD system to see what happened.
 There are some pros and cons to this solution:
 Pros:
-* Does not require any VPN type connections between the CI/CD system and the remote private VPC
+* Does not require any VPN-type connections between the CI/CD system and the remote private VPC.
 * A developer can test the update logic (#4) locally. Generally these pipelines cannot be
 tested locally because the CI/CD system has to run the pipeline. Since it is disconnected,
 this means the developer can run this locally to test if it is working as expected.
-* This scheme would work on most major cloud provider that has a "container as a service" offering
+* This scheme would work on most major cloud providers that have a "container as a service" offering.
 Cons:
-* This disconnects the CI/CD system from the actual run
-* Changing the update logic (#4) will mean having to push a new container to the Fargate runner
+* This disconnects the CI/CD system from the actual run.
+* Changing the update logic (#4) will mean having to push a new container to the Fargate runner.
 Example Github Action:
diff --git a/docs/cidr-ranges.md b/docs/cidr-ranges.md
index 5b37c946d..cbfc3147b 100644
--- a/docs/cidr-ranges.md
+++ b/docs/cidr-ranges.md
@@ -23,7 +23,7 @@
 http://www.subnet-calculator.com/cidr.php
 | Kubernetes gcp - prod | 10.23.0.0/16 |
 ## Reserved ranges for each environment
-Each envrionment has a bunch of initial reserved ranges to bring up the entire
+Each environment has a bunch of initial reserved ranges to bring up the entire
 application. The following defines these ranges in a generic sense that can be
 applied to any of the above CIDRs.
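The reserved-range carving that the cidr-ranges doc describes can be sketched with Python's `ipaddress` module. The `10.23.0.0/16` block is taken from the CIDR table above; the `dev` block and the /20 reserved-range size are illustrative assumptions, not values from the document.

```python
import ipaddress

def reserved_ranges(env_cidr: str, new_prefix: int = 20) -> list[str]:
    """Carve an environment-level /16 into equally sized reserved ranges.

    The /20 size is an illustrative assumption; adjust new_prefix to
    match however many addresses each component actually needs.
    """
    network = ipaddress.ip_network(env_cidr)
    return [str(subnet) for subnet in network.subnets(new_prefix=new_prefix)]

# 10.23.0.0/16 is from the CIDR table; the dev block is a made-up example.
environments = {
    "kubernetes-gcp-prod": "10.23.0.0/16",
    "kubernetes-gcp-dev": "10.22.0.0/16",  # hypothetical
}

for name, cidr in environments.items():
    ranges = reserved_ranges(cidr)
    print(f"{name}: {len(ranges)} ranges, first={ranges[0]}")
```

Splitting a /16 into /20s yields 16 non-overlapping ranges of 4,094 usable hosts each, which is the kind of generic layout the doc says can be applied to any of the environment CIDRs.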
diff --git a/docs/github-actions.md b/docs/github-actions.md
index 5a117d3f2..2f1d46efe 100644
--- a/docs/github-actions.md
+++ b/docs/github-actions.md
@@ -44,7 +44,7 @@
 in the workflow.
 sonobuoy version
 ```
-This is a working verison of it but there were many itterations before I got the tar output correct
+This is a working version of it but there were many iterations before I got the tar output correct
 and what it outputted and where the `sonobuoy` (tool) binary was. To debug this you start doing stuff like:
 ```yaml
@@ -109,6 +109,6 @@
 are community maintained. The developer didn't have to know much about Airflow beyond
 the Airflow entry points and how to hook into it.
 This leads us back to my problem. I am having all of the same problems that they describe in the blog and all of the same solutions
-would work for my problem. In my case, the Github Action is equivelent to Airflow which I really do not want to debug. Github Action
+would work for my problem. In my case, the Github Action is equivalent to Airflow which I really do not want to debug. Github Action
 also can just run a Docker container for me. If I made Github Action run my container, then I can develop
 all I want locally until it works then try to have Github Actions run it for me.
diff --git a/docs/kubernetes-security/README.md b/docs/kubernetes-security/README.md
index 3ad5bb6ec..99bdf681b 100644
--- a/docs/kubernetes-security/README.md
+++ b/docs/kubernetes-security/README.md
@@ -1,8 +1,8 @@
 # Kubernetes Security
-This page is here to describe security challenages and possible solutions to various security concerns in a
+This page is here to describe security challenges and possible solutions to various security concerns in a
 Kubernetes deployment.
-## Traditional n-tier archtecture
+## Traditional n-tier architecture
 This diagram represents a non-containerized n-tier architecture:
 ![the stack](/docs/kubernetes-security/images/n-tier-application-architecture.png)
diff --git a/docs/the-easier-way.md b/docs/the-easier-way.md
index 165eccf98..3bffcd782 100644
--- a/docs/the-easier-way.md
+++ b/docs/the-easier-way.md
@@ -26,7 +26,7 @@
 From the output of the Terraform run, a VPC ID was outputted in the format of
 The following paths all start from the root of this repository.
 ## Terraform environment \_env_defaults file
-This file holds default values about this environment. We are adding in the
+This file holds default values about this environment. We are adding in the
 VPC ID here because there will be subsequent Terraforms that will use this ID
 and place itself into this VPC.
@@ -57,7 +57,7 @@
 cd clusters/aws/kops
 The Kubernetes cluster that is created is a fully private Kubernetes cluster
 with no public IP addresses. This means that you will have to get to the cluster
 somehow via a bastion host to be able to interact with it. During the setup, a
-bastion host was created for you and the following steps shows you how to
+bastion host was created for you. The following steps show you how to
 connect to it and create a tunnel.
 ```
diff --git a/docs/the-manual-way.md b/docs/the-manual-way.md
index 7aade27c0..1c7d97b3e 100644
--- a/docs/the-manual-way.md
+++ b/docs/the-manual-way.md
@@ -8,7 +8,7 @@
 holds the infrastructure level items. See [tools.md](tools.md)
 # Setup your IP CIDR
-This document contains how your IP CIDRs are going to be laided out for your
+This document describes how your IP CIDRs are going to be laid out for your
 entire infrastructure. Care should be taken to review this and to make sure
 this fits your needs.
@@ -33,7 +33,9 @@
 Directory: `/tf-environment`
 Change directory to: `/tf-environments/aws/dev/dev/vpc`
-A note about the Terraform statestore. We are using S3 for the statestore and S3 bucket names has to be globally unique. The file `/tf-environments/aws/dev/terragrunt.hcl` holds the state store configurations. It is set to `kubernetes-ops-tf-state-${get_aws_account_id()}-terraform-state`. It puts your AWS Account ID in there as the "unique" key.
+A note about the Terraform state store. We are using S3 for the state store and S3 bucket names have to be globally unique.
+The file `/tf-environments/aws/dev/terragrunt.hcl` holds the state store configurations.
+It is set to `kubernetes-ops-tf-state-${get_aws_account_id()}-terraform-state`. It puts your AWS Account ID in there as the "unique" key.
 Run:
 ```
diff --git a/docs/tools.md b/docs/tools.md
index fa5b405d8..fba7cbd69 100644
--- a/docs/tools.md
+++ b/docs/tools.md
@@ -35,7 +35,7 @@
 Install instructions: https://docs.aws.amazon.com/cli/latest/userguide/cli-chap
 ## sshuttle
 `sshuttle` is a tool that will create an SSH tunnel from your local laptop
-to a remote network and forward everything destine for that IP space over there
+to a remote network and forward everything destined for that IP space over there
 with DNS resolution. It uses ssh to create the tunnel.
 Why not just use SSH? SSH does not have the functionality to forward the entire