
Grammar pass (#67)
* Grammar pass

While reading all the documentation and going through the steps for learning purposes, I fixed up some of the grammar.

* Grammar x2

I screwed up a correction of a correction. This corrects that.
topagae authored Feb 18, 2020
1 parent fba7119 commit d58aca6
Showing 8 changed files with 29 additions and 27 deletions.
6 changes: 3 additions & 3 deletions docs/README.md
@@ -7,10 +7,10 @@ Every cloud has a concept of a "network". AWS and GCP calls it the VPC. The VPC
will hold everything that you will ever run or create in the cloud. Items such as instances,
subnets, firewall rules, databases, queues, load balancers, etc. Since it is
such a foundational piece that sits at pretty much the bottom of the stack it
-is very important to get this correct because trying to make changes to this laster
+is very important to get this correct because trying to make changes to this later
with everything running on it could turn out to be very difficult or impossible
without downtime and/or a lot of reconfiguration of items that are running in
-this VPC.
+this VPC.

We also want to take control of creation and managing this VPC exclusively. A lot
of tools that creates Kubernetes clusters for you has the option of creating the
@@ -125,7 +125,7 @@ not everyone has access to it. Depending on your requirements, you might limit
access or even have to go through some approvals to get access to any parts of this
infrastructure.

-why so many?
+### Why so many?

By doing this you have environments like `dev` where developers and delivering
new application code into it while they are working and testing it. This code
28 changes: 14 additions & 14 deletions docs/accessing-private-vpc-from-ci-system.md
@@ -4,10 +4,10 @@ A lot of the popular CI/CD systems that are hosted and are on the internet:

* Github Actions
* Gitlab
-* CiricleCI
+* CircleCI
* CodeFresh

-The best practice for our VPCs and Kubernetes cluster is to have only internal addresses.
+The best practice for our VPCs and Kubernetes cluster is to have only an internal addresses.

The problem is how do these CI/CD systems get access to our private VPC and Kubernetes clusters
which do not any any public IPs it can reach?
@@ -39,7 +39,7 @@ Doc: [https://aws.amazon.com/blogs/aws/new-port-forwarding-using-aws-system-mana

This can potentially span access from the CI/CD system to a private VPC network.

-This is however, a unquiely an AWS only solution since other cloud providers does not have something like this.
+This is however, a uniquely an AWS only solution since other cloud providers do not have something like this.
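As a rough sketch (the instance ID and port numbers below are hypothetical; `AWS-StartPortForwardingSession` is the session document AWS ships for this feature), the port forwarding looks like:

```shell
# Forward local port 8080 to port 80 on a private instance, tunneled through
# AWS Systems Manager -- the instance needs the SSM agent but no public IP.
# i-0123456789abcdef0 is a hypothetical instance ID.
aws ssm start-session \
  --target i-0123456789abcdef0 \
  --document-name AWS-StartPortForwardingSession \
  --parameters '{"portNumber":["80"],"localPortNumber":["8080"]}'
```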

## Slack overlay network
This is an interesting idea on how to span networks: [https://slack.engineering/introducing-nebula-the-open-source-global-overlay-network-from-slack-884110a5579](https://slack.engineering/introducing-nebula-the-open-source-global-overlay-network-from-slack-884110a5579)
@@ -65,31 +65,31 @@ back the output.
1) The CI/CD system is instructed to run this Fargate container

2) Launching the Fargate container
-* This "step" should have the approapriate AWS IAM access to launch this.
+* This "step" should have the appropriate AWS IAM access to launch this.
* It will launch the predetermined container on Fargate in the targeted private VPC.
-* This step will call the AWS API with the appropriate information to launch the Fargate task
+* This step will call the AWS API with the appropriate information to launch the Fargate task.

3) Fargate container launches
-* The Fargate container launches inside the VPC that was targeted
-* This container runs
+* The Fargate container launches inside the VPC that was targeted.
+* This container runs.

4) The Kubernetes update process
-* The container runs through to update Kubernetes and whatever else this container is programed to do
+* The container runs through to update Kubernetes and whatever else this container is programmed to do.

5) Fargate container logs
-* Logs from the Fargate container is extracted and outputted to the CI/CD systems output
-* This allows someone to inspect this pipeline run from the CI/CD system on what happend
+* Logs from the Fargate container is extracted and outputted to the CI/CD systems output.
+* This allows someone to inspect this pipeline run from the CI/CD system on what happened.

There are some pros and cons to this solution:

Pros:
-* Does not require any VPN type connections between the CI/CD system and the remote private VPC
+* Does not require any VPN type connections between the CI/CD system and the remote private VPC.
* A developer can test the update logic (#4) locally. Generally these pipelines cannot be tested locally because the CI/CD system has to run the pipeline. Since it is disconnected, this means the developer can run this locally to test if it is working as expected.
-* This scheme would work on most major cloud provider that has a "container as a service" offering
+* This scheme would work on most major cloud provider that has a "container as a service" offering.

Cons:
-* This disconnects the CI/CD system from the actual run
-* Changing the update logic (#4) will mean having to push a new container to the Fargate runner
+* This disconnects the CI/CD system from the actual run.
+* Changing the update logic (#4) will mean having to push a new container to the Fargate runner.
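Step 2 of the flow above could be sketched with the AWS CLI roughly as follows (the cluster name, task definition, subnet ID, and security group ID are all hypothetical placeholders):

```shell
# Launch the predetermined updater container as a Fargate task inside the
# private VPC; assignPublicIp=DISABLED keeps it on internal addresses only.
aws ecs run-task \
  --cluster ci-tasks \
  --task-definition k8s-updater:1 \
  --launch-type FARGATE \
  --network-configuration 'awsvpcConfiguration={subnets=[subnet-aaa111],securityGroups=[sg-bbb222],assignPublicIp=DISABLED}'
```

The CI/CD step that runs this needs IAM permission for `ecs:RunTask` plus the ability to pass the task's execution role.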

Example Github Action:

2 changes: 1 addition & 1 deletion docs/cidr-ranges.md
@@ -23,7 +23,7 @@ http://www.subnet-calculator.com/cidr.php
| Kubernetes gcp - prod | 10.23.0.0/16 |

## Reserved ranged for each environment
-Each envrionment has a bunch of initial reserved ranges to bring up the entire
+Each environment has a bunch of initial reserved ranges to bring up the entire
application. The following defines these ranges in a generic sense that can
be applied to any of the above CIDRs.
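For instance (a hypothetical split, not this repository's actual allocation), carving /20 reserved ranges out of the dev VPC's `10.20.0.0/16` steps the third octet by 16 each time:

```shell
# Print the first four /20 ranges inside 10.20.0.0/16 (16 /20s fit in a /16).
for third_octet in 0 16 32 48; do
  echo "10.20.${third_octet}.0/20"
done
```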

4 changes: 2 additions & 2 deletions docs/github-actions.md
@@ -44,7 +44,7 @@ in the workflow.
sonobuoy version
```
-This is a working verison of it but there were many itterations before I got the tar output correct
+This is a working version of it but there were many iterations before I got the tar output correct
and what it outputted and where the `sonobuoy` (tool) binary was. To debug this you start doing stuff like:

```yaml
@@ -109,6 +109,6 @@ are community maintained. The developer didn't have to know much about Airflow
know the Airflow entry points and how to hook into it.

This leads us back to my problem. I am having all of the same problems that they describe in the blog and all of the same solutions
-would work for my problem. In my case, the Github Action is equivelent to Airflow which I really do not want to debug. Github Action
+would work for my problem. In my case, the Github Action is equivalent to Airflow which I really do not want to debug. Github Action
also can just run a Docker container for me. If I made Github Action run my container, then I can develop all I want locally until it works
then try to have Github Actions to run it for me.
4 changes: 2 additions & 2 deletions docs/kubernetes-security/README.md
@@ -1,8 +1,8 @@
# Kubernetes Security
-This page is here to describe security challenages and possible solutions to various security concerns in a
+This page is here to describe security challenges and possible solutions to various security concerns in a
Kubernetes deployment.

-## Traditional n-tier archtecture
+## Traditional n-tier architecture
This diagram represents a non-containerized n-tier architecture:

![the stack](/docs/kubernetes-security/images/n-tier-application-architecture.png)
4 changes: 2 additions & 2 deletions docs/the-easier-way.md
@@ -26,7 +26,7 @@ From the output of the Terraform run, a VPC ID was outputted in the format of
The following paths all starts from the root of this repository.

## Terraform environment \_env_defaults file
-This file holds default values about this environment. We are adding in the
+This file hold default values about this environment. We are adding in the
VPC ID here because there will be subsequent Terraforms that will use this ID
and place itself into this VPC.

@@ -57,7 +57,7 @@ cd clusters/aws/kops
The Kubernetes cluster that is created is a fully private Kubernetes cluster with
no public IP addresses. This means that you will have to get to the cluster some
how via a bastion host to be able to interact with it. During the setup, a
-bastion host was created for you and the following steps shows you how to
+bastion host was created for you. The following steps shows you how to
connect to it and create a tunnel.

```
6 changes: 4 additions & 2 deletions docs/the-manual-way.md
@@ -8,7 +8,7 @@ holds the infrastructure level items.
See [tools.md](tools.md)

# Setup your IP CIDR
-This document contains how your IP CIDRs are going to be laided out for your
+This document contains how your IP CIDRs are going to be laid out for your
entire infrastructure. Care should be taken to review this and to make sure
this fits your needs.

@@ -33,7 +33,9 @@ Directory: `<repo root>/tf-environment`

Change directory to: `<repo root>/tf-environments/aws/dev/dev/vpc`

-A note about the Terraform statestore. We are using S3 for the statestore and S3 bucket names has to be globally unique. The file `<repo root>/tf-environments/aws/dev/terragrunt.hcl` holds the state store configurations. It is set to `kubernetes-ops-tf-state-${get_aws_account_id()}-terraform-state`. It puts your AWS Account ID in there as the "unique" key.
+A note about the Terraform state store. We are using S3 for the state store and S3 bucket names has to be globally unique.
+The file `<repo root>/tf-environments/aws/dev/terragrunt.hcl` holds the state store configurations.
+It is set to `kubernetes-ops-tf-state-${get_aws_account_id()}-terraform-state`. It puts your AWS Account ID in there as the "unique" key.
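The `remote_state` block in that `terragrunt.hcl` has roughly this shape (a paraphrased sketch, not the file's exact contents; the `key` and `region` values here are assumptions):

```hcl
remote_state {
  backend = "s3"
  config = {
    # The account ID makes the bucket name globally unique
    bucket = "kubernetes-ops-tf-state-${get_aws_account_id()}-terraform-state"
    key    = "${path_relative_to_include()}/terraform.tfstate"  # assumed key layout
    region = "us-east-1"                                        # assumed region
  }
}
```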

Run:
```
2 changes: 1 addition & 1 deletion docs/tools.md
@@ -35,7 +35,7 @@ Install instructions: https://docs.aws.amazon.com/cli/latest/userguide/cli-chap

## sshuttle
`sshuttle` is a tool that will create an SSH tunnel from your local laptop
-to a remote network and forward everything destine for that IP space over there
+to a remote network and forward everything destined for that IP space over there
with DNS resolution. It uses ssh to create the tunnel.
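For example (the bastion hostname is hypothetical; the CIDR is in the style of the ranges in `cidr-ranges.md`):

```shell
# Tunnel everything destined for the dev VPC's IP space through the bastion,
# resolving DNS through the tunnel as well (--dns).
sshuttle --dns -r ubuntu@bastion.dev.example.com 10.20.0.0/16
```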

Why not just use SSH? SSH does not have the functionality to forward the entire
