
# EKS cluster management

This document provides instructions for Kubernetes cluster administration (user management) in EKS. It also covers managing cluster add-ons such as kube2iam.

See the README for related documents.

## Table of Contents

- [Introduction](#introduction)
- [User Management](#user-management)
- [kube2iam setup](#kube2iam-setup)
- [Role naming](#role-naming)

## Introduction

Generally speaking, new users should refer to the Kubernetes documentation for questions about cluster administration. In this document, I want to provide a quick reference for actions that I expect to be repeated across all of our clusters. As of today, I know that we will be setting up new users, roles, kube2iam, and Calico after every new cluster is created. Until these documented steps are automated, this should be a useful resource.

## User Management

EKS user management requires two separate changes. First, a user or role must be created in IAM. That ARN can then be added to the aws-auth ConfigMap in Kubernetes, which maps it to a Kubernetes user and groups and authorizes it to perform the associated actions.

### Allow CodeBuild to deploy

In order to allow CodeBuild to run commands in an EKS Kubernetes cluster, you need to add the role that runs the CodeBuild job to the aws-auth ConfigMap. This process is likely to improve in future versions of EKS; for the moment, just edit the ConfigMap by running `kubectl edit configmap -n kube-system aws-auth`. As an example, the next snippet gives the role rights to deploy into the cluster:

```yaml
mapRoles: |
  ...
  - rolearn: arn:aws:iam::320464205386:role/template-codebuild
    groups:
      - system:masters
```
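If you find yourself adding entries like this repeatedly, the fragment can be generated rather than hand-edited. The sketch below is a minimal illustration (the helper name and output format are my own, not an official API; in practice you would still merge the result into the live ConfigMap with kubectl):

```python
# Sketch: render an aws-auth mapRoles entry for a given IAM role ARN.
# The function is illustrative only; it does not talk to the cluster.

def render_map_roles_entry(role_arn: str, groups: list[str]) -> str:
    """Return a YAML fragment mapping an IAM role to Kubernetes groups."""
    lines = [f"- rolearn: {role_arn}", "  groups:"]
    lines += [f"    - {g}" for g in groups]
    return "\n".join(lines)

entry = render_map_roles_entry(
    "arn:aws:iam::320464205386:role/template-codebuild",
    ["system:masters"],
)
print(entry)
```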

### Add a new user

#### Configure IAM

Create a new user or role in IAM. Take care to ensure that new users set up MFA. Once this work is complete, make a note of the ARN for the user or role.

#### Configure ConfigMap

Please read the linked documentation for a comprehensive overview of this process. Here is an example of what the updated ConfigMap might contain:

```yaml
  mapUsers: |
    - userarn: arn:aws:iam::555555555555:user/admin
      username: admin
      groups:
        - system:masters
```

Source.

## kube2iam setup

kube2iam will allow us to impose strict control over the AWS API calls that can be made by individual pods. This is useful for many reasons. For example, imagine two services, webservice-01 and webservice-02, each with its own set of secrets needed to perform its function. We can create an IAM role for each web service that only provides access to certain S3 buckets or namespaced Parameter Store values.

This limits the damage an attacker can do if the security of a single pod is compromised.
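As a sketch of what one of these per-service roles might attach, the policy below grants read access to a single bucket and a namespaced Parameter Store path. The bucket name, parameter path, and account ID are placeholders for illustration, not real resources:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::webservice-01-secrets",
        "arn:aws:s3:::webservice-01-secrets/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": ["ssm:GetParameter", "ssm:GetParametersByPath"],
      "Resource": "arn:aws:ssm:*:555555555555:parameter/webservice-01/*"
    }
  ]
}
```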

### Testing the configuration

Now, there should be a kube2iam pod running on each worker node in the cluster:

```console
$ kubectl get pods -n kube-system | grep kube2iam
kube2iam-m5vmr             1/1       Running   0          2d
kube2iam-pjzpn             1/1       Running   0          2d
```

You'll want to refer to the kube2iam documentation to see how annotations are used to specify the role ARN to be assumed by pods.

As a general note, I found the following useful for testing kube2iam: I created a new pod with the following YAML and tested different annotations and resource requests.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: aws-cli
  labels:
    name: aws-cli
  annotations:
    iam.amazonaws.com/role: arn:aws:iam::SOMETHING
spec:
  containers:
  - image: fstab/aws-cli
    command:
      - "/home/aws/aws/env/bin/aws"
      - "s3"
      - "ls"
      - "s3://bucket"
    name: aws-cli
```

Please review the next section for standards on role naming in AWS.

## Role naming

Please adhere to these naming conventions when creating new IAM roles that will be referenced in pod annotations.

```text
arn:aws:iam::{{ account_id }}:role/eks-{{ service_name }}-{{ env }}
```

We have not settled on a templating solution yet. This document will be updated once that is available.
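In the meantime, the convention can be sketched as a small helper. The function name and the validation rules are illustrative assumptions, not settled policy:

```python
# Sketch: compose an IAM role ARN per the eks-{service}-{env} naming
# convention above. The account-ID check is an assumption for illustration.

def eks_role_arn(account_id: str, service_name: str, env: str) -> str:
    """Build a role ARN following the naming convention for pod annotations."""
    if not (account_id.isdigit() and len(account_id) == 12):
        raise ValueError("account_id must be a 12-digit AWS account ID")
    return f"arn:aws:iam::{account_id}:role/eks-{service_name}-{env}"

print(eks_role_arn("555555555555", "webservice-01", "prod"))
# → arn:aws:iam::555555555555:role/eks-webservice-01-prod
```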