The automation scripts generated by the Cookiecutter leverage seven key technologies with which you should familiarize yourself.
- Terraform Terraform is an open-source infrastructure-as-code tool created by HashiCorp. Users define and provision data center infrastructure using a declarative configuration language known as HashiCorp Configuration Language (HCL). HCL itself is a simple language that you can learn in a few minutes (a brief example follows this list). On the other hand, the various Terraform providers and modules that we leverage, mostly for AWS and Kubernetes, are knowledge intensive and require time and lots of hands-on experience.
- Terragrunt Terragrunt is a Terraform templating tool for keeping your configurations DRY, working with multiple Terraform modules, and managing remote state. Terragrunt is our strategy for managing multiple environments and (if you so choose) multiple stacks.
- Kubernetes Manifests Manifests are YAML documents that describe each of the many different kinds of resources available in a modern Kubernetes cluster. Manifests, applied with the Kubernetes CLI kubectl, are the principal means of defining and managing resources in Kubernetes.
- kubectl The Kubernetes command-line tool, kubectl, allows you to run commands against Kubernetes clusters and is the principal command-line means of communicating with a cluster. Terraform manages cluster resources indirectly, via its Kubernetes provider.
- Helm Helm is an open-source, graduated Cloud Native Computing Foundation project, originally created by DeisLabs as a third-party utility and now known as the package manager for Kubernetes. Helm focuses on automating the Kubernetes application lifecycle in a simple, consistent way: its objective as a package manager is to make it easy to install, upgrade, and uninstall packages for Kubernetes applications, and to deploy them with just a few commands. The Cookiecutter can optionally install a variety of high-quality administrative software onto your Kubernetes cluster, and most of these packages are installed from Helm charts.
- GitHub Actions GitHub Actions makes it easy to automate all of your software workflows with world-class CI/CD. Build, test, and deploy your code right from GitHub, and make code reviews, branch management, and issue triaging work the way you want.
- AWS Command Line Interface The AWS Command Line Interface (AWS CLI) is a unified tool for managing your AWS services. The Terraform modules in this repository rely on the AWS CLI being installed and configured; Terraform itself manages AWS resources through the Terraform AWS provider.
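For reference, HCL looks like the following. This is a minimal, hypothetical snippet for illustration only; the provider, resource and names below are not part of the Cookiecutter's generated code.

# hypothetical HCL: declare a provider and a single tagged resource
provider "aws" {
  region = "us-east-1"
}

resource "aws_s3_bucket" "example" {
  bucket = "yourschool-example-bucket"

  tags = {
    Environment = "prod"
    ManagedBy   = "terraform"
  }
}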
The Cookiecutter is a good place to get acquainted with these technologies, as all of the source code is templated and follows industry best practices. You'll likely find existing, working code patterns in the Cookiecutter for most things that you would want to accomplish on an ad hoc basis.
The following should work for macOS, Linux and Windows. Most of the code in this repository is Terraform or Terragrunt. However, running the Terraform modules will in turn invoke several other software packages; namely, the AWS Command Line Interface awscli, the Kubernetes Command Line Interface kubectl, and Helm. For best results, you should regularly update all of these packages.
$ brew install awscli [email protected] black helm jq k9s kubernetes-cli pre-commit pyyaml terraform terragrunt tflint yq
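As noted above, you should keep these packages current. With Homebrew that amounts to an occasional upgrade, for example:

$ brew update && brew upgrade awscli helm kubernetes-cli terraform terragrunt tflint yq k9s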
# configure awscli
# first, follow these instructions to create an IAM keypair: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html
# then afterwards, follow these instructions to configure awscli: https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html
$ aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-west-2
Default output format [None]: json
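You can verify that your credentials are working by asking AWS to identify the caller:

$ aws sts get-caller-identity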
# add all Helm charts
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm repo add metrics-server https://kubernetes-sigs.github.io/metrics-server/
$ helm repo add karpenter https://charts.karpenter.sh/
$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
$ helm repo add cowboysysop https://cowboysysop.github.io/charts/
$ helm repo add jetstack https://charts.jetstack.io
$ helm repo update
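Optionally, confirm that the repositories were registered and that charts can be found:

$ helm repo list
$ helm search repo metrics-server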
The GitHub Actions workflows in your new repository will depend on several workflow secrets, including two AWS IAM keypairs (one for CI workflows and another for the AWS Simple Email Service) and a GitHub Personal Access Token (PAT) for a GitHub user account with all requisite privileges in your new repository as well as in any other repositories that are cloned during any of the build / installation pipelines.
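As a sketch, you could register these secrets with the GitHub CLI. The secret names below are placeholders, not necessarily the names your generated workflows reference, so check the files under .github/workflows before using them:

# hypothetical secret names -- verify against your generated workflow files
$ gh secret set AWS_ACCESS_KEY_ID --body "AKIAIOSFODNN7EXAMPLE"
$ gh secret set AWS_SECRET_ACCESS_KEY --body "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
$ gh secret set PAT --body "<your GitHub personal access token>"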
Review your global parameters. These will be pre-populated from your responses to the Cookiecutter command-line questionnaire.
locals {
  platform_name   = "yourschool"
  platform_region = "virginia"
  root_domain     = "yourschool.edu"
  aws_region      = "us-east-1"
  account_id      = "123456789012"
}
Review your production environment parameters.
locals {
  environment = "courses"

  # defaults to this value
  environment_domain = "courses.yourschool.edu"

  # defaults to this value
  environment_namespace = "courses-yourschool-virginia"

  # AWS infrastructure default sizing
  mysql_instance_class              = "db.t2.small"    # 1 vCPU, 2gb
  redis_node_type                   = "cache.t2.small" # 1 vCPU, 1.55gb
  eks_worker_group_instance_type    = "t3.large"       # 2 vCPU, 8gb
  eks_karpenter_group_instance_type = "t3.large"       # 2 vCPU, 8gb
}
The backend build procedure is automated using Terragrunt for Terraform. Installation instructions are available at both of these web sites.
Terraform scripts rely on the AWS CLI (Command Line Interface) tools. Installation instructions for Windows, macOS and Linux are available on this site. We also recommend that you install k9s, a popular tool for administering a Kubernetes cluster.
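With everything installed, you can sanity-check your local toolchain versions before running any of the modules:

terraform -version
terragrunt --version
aws --version
kubectl version --client
helm version
k9s version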
# -------------------------------------
# do this once
# -------------------------------------
cd ./terraform/common/cookiecutter_meta
terraform init
terraform apply
# -------------------------------------
# manage an individual resource
# -------------------------------------
cd ./terraform/environments/prod/mysql
terragrunt init
terragrunt plan
terragrunt apply -target module.cookiecutter_meta
terragrunt apply
terragrunt destroy
# -------------------------------------
# or, build an entire backend all at once
# -------------------------------------
cd ./terraform/environments/prod
terragrunt run-all init
terragrunt run-all apply
The bastion server comes with a complete set of preinstalled and preconfigured software for administering your Open edX platform and the AWS cloud resources on which it runs. Cookiecutter ensures that all software versions installed on your bastion server are consistent with those used by Tutor, Terraform and GitHub Actions workflows.
Connect via ssh:
# 1.) retrieve the ssh private key from Kubernetes secrets.
kubectl get secret bastion-ssh-key -n {{ cookiecutter.global_platform_name }}-{{ cookiecutter.global_platform_region }}-{{ cookiecutter.environment_name }} -o json | jq '.data | map_values(@base64d)' | jq -r 'keys[] as $k | "export \($k|ascii_upcase)=\(.[$k])"'
# 2.) save the private key to a file on your local dev machine and set permissions as required by AWS
vim ~/.ssh/bastion.{{ cookiecutter.global_services_subdomain }}.{{ cookiecutter.global_root_domain }}.pem
sudo chmod 400 ~/.ssh/bastion.{{ cookiecutter.global_services_subdomain }}.{{ cookiecutter.global_root_domain }}.pem
# 3.) connect to the bastion server via ssh
ssh bastion.{{ cookiecutter.global_services_subdomain }}.{{ cookiecutter.global_root_domain }} -i ~/.ssh/bastion.{{ cookiecutter.global_services_subdomain }}.{{ cookiecutter.global_root_domain }}.pem
Terraform creates friendly subdomain names for each of the backend services to which you are likely to connect: CloudFront, MySQL, MongoDB and Redis. Passwords for the root/admin accounts are accessible from Kubernetes Secrets. Note that MySQL, MongoDB and Redis each reside in private subnets; these services can only be accessed from the command line on the bastion server.
mysql -h mysql.{{ cookiecutter.global_services_subdomain }}.{{ cookiecutter.global_root_domain }} -u root -p
mongo --port 27017 --host mongo.{{ cookiecutter.global_services_subdomain }}.{{ cookiecutter.global_root_domain }}
redis-cli -h redis.{{ cookiecutter.global_services_subdomain }}.{{ cookiecutter.global_root_domain }} -p 6379
Specifically with regard to MySQL, several third-party analytics tools provide out-of-the-box connectivity to MySQL via a bastion server. Following is an example of how to connect to your MySQL environment using MySQL Workbench.
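MySQL Workbench can connect through the bastion directly using its "Standard TCP/IP over SSH" connection method; equivalently, you can open the tunnel yourself from a terminal and point any MySQL client at it. The following is only a sketch, reusing the hostnames and key file from the steps above:

# forward local port 3307 to the private MySQL endpoint through the bastion
ssh -N -L 3307:mysql.{{ cookiecutter.global_services_subdomain }}.{{ cookiecutter.global_root_domain }}:3306 \
    bastion.{{ cookiecutter.global_services_subdomain }}.{{ cookiecutter.global_root_domain }} \
    -i ~/.ssh/bastion.{{ cookiecutter.global_services_subdomain }}.{{ cookiecutter.global_root_domain }}.pem

# then point MySQL Workbench (or any MySQL client) at 127.0.0.1:3307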
The Cookiecutter installs four of the most popular tools for Kubernetes administration:
- k9s, preinstalled in the optional EC2 bastion server. K9s is an amazing retro-styled, ASCII-based UI for viewing and monitoring all aspects of your Kubernetes cluster. It looks and runs great from any ssh-connected terminal window.
- Kubernetes Dashboard. Written by the same team that maintains Kubernetes, Kubernetes Dashboard provides an elegant web UI for monitoring and administering your Kubernetes cluster.
- Kubeapps. Maintained by VMware's Bitnami, Kubeapps is the easiest way to install popular open-source software packages, from MySQL and MongoDB to WordPress and Drupal.
- Grafana. Provides an elegant web UI for viewing time-series data gathered by Prometheus and metrics-server. Default credentials: user admin, password prom-operator.
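If a given UI is not already exposed through an ingress, one way to reach it from your workstation is a kubectl port-forward. This is only a sketch: the namespace and service names below are assumptions, so verify them against your own cluster first.

# hypothetical namespace and service names -- verify with: kubectl get svc -A
kubectl port-forward -n prometheus svc/grafana 3000:80
# then browse to http://localhost:3000 and sign in with admin / prom-operator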
If you're new to Kubernetes then you can read more about cluster access in the AWS EKS documentation, Enabling IAM user and role access to your cluster. By default, access to the Kubernetes cluster is limited to the cluster creator (presumably, you) and the IAM user for the bastion server. Also note that by default, AWS EKS release 1.24 and newer encrypts all secrets data using AWS Key Management Service (KMS). The Cookiecutter automatically adds the IAM user for the bastion server to both the cluster admin list and the AWS KMS key owner list, but you'll need to add other IAM users to these lists yourself. The encrypted secrets feature is optional and can be disabled by setting the Cookiecutter parameter eks_create_kms_key=N.
You can add more IAM users to the cluster admin and AWS KMS key owner lists by modifying terraform/stacks/{{cookiecutter.global_platform_shared_resource_identifier}}/kubernetes/terragrunt.hcl, as follows:
kms_key_owners = [
  "${local.bastion_iam_arn}",
  # -------------------------------------------------------------------------
  # ADD MORE CLUSTER ADMIN USER IAM ACCOUNTS TO THE AWS KMS KEY OWNER LIST:
  # -------------------------------------------------------------------------
  "arn:aws:iam::${local.account_id}:user/mcdaniel",
  "arn:aws:iam::${local.account_id}:user/bob_marley",
]
map_users = [
  {
    userarn  = local.bastion_iam_arn
    username = local.bastion_iam_username
    groups   = ["system:masters"]
  },
  # -------------------------------------------------------------------------
  # ADD MORE CLUSTER ADMIN USER IAM ACCOUNTS HERE:
  # -------------------------------------------------------------------------
  {
    userarn  = "arn:aws:iam::${local.account_id}:user/mcdaniel"
    username = "mcdaniel"
    groups   = ["system:masters"]
  },
  {
    userarn  = "arn:aws:iam::${local.account_id}:user/bob_marley"
    username = "bob_marley"
    groups   = ["system:masters"]
  },
]
You can use kubectl or k9s from the bastion server to verify the configuration of the aws-auth configMap.
kubectl edit -n kube-system configmap/aws-auth
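If you only want to inspect the configMap without opening it in an editor, a read-only view works as well:

kubectl get configmap aws-auth -n kube-system -o yaml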
Following is an example aws-auth configMap with additional IAM user accounts added to the admin "masters" group.
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  mapRoles: |
    - groups:
      - system:bootstrappers
      - system:nodes
      rolearn: arn:aws:iam::012345678942:role/service-eks-node-group-20220518182244174100000002
      username: system:node:{% raw %}{{EC2PrivateDNSName}}{% endraw %}
    - groups:
      - system:bootstrappers
      - system:nodes
      rolearn: arn:aws:iam::012345678942:role/hosting-eks-node-group-20220518182244174100000001
      username: system:node:{% raw %}{{EC2PrivateDNSName}}{% endraw %}
  mapUsers: |
    - groups:
      - system:masters
      userarn: arn:aws:iam::012345678942:user/mcdaniel
      username: mcdaniel
    - groups:
      - system:masters
      userarn: arn:aws:iam::012345678942:user/bob_marley
      username: bob_marley
kind: ConfigMap
metadata:
  creationTimestamp: "2022-05-18T18:38:29Z"
  name: aws-auth
  namespace: kube-system
  resourceVersion: "499488"
  uid: 52d6e7fd-01b7-4c80-b831-b971507e5228
You can verify the AWS KMS key owner list either by using the AWS console (https://console.aws.amazon.com/kms) or using the awscli, as follows:
aws kms list-keys
aws kms get-key-policy --key-id 7d646c73-some-key-id-760gda80bfe2 --region us-east-2 --policy-name default --output text
Both the Build and the Deploy workflows will be pre-configured based on your responses to the Cookiecutter questionnaire.
The automated GitHub Actions workflow "Build openedx Image" in your new repository will build a customized Open edX Docker container based on the latest stable version of Open edX, your Open edX custom theme repository, and your Open edX plugin repository. Your new Docker image will be automatically uploaded to Amazon Elastic Container Registry (ECR).
The automated GitHub Actions workflow "prod Deploy to Kubernetes" in your new repository will deploy your customized Docker container to a Kubernetes cluster. You can optionally run the GitHub Actions workflow "prod Deploy optional Open edX modules to Kubernetes" to install all optional modules and plugins as well as the base Open edX platform software.
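Both workflows can also be started by hand. As a sketch, assuming the workflows support manual dispatch (workflow_dispatch) and that the display names above match your generated workflow files, the GitHub CLI can trigger them from your terminal:

# list the workflows generated in your repository
gh workflow list
# trigger a build, then a deployment, by display name
gh workflow run "Build openedx Image"
gh workflow run "prod Deploy to Kubernetes"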