feat: Adding AWS IAM Identity Center with CAM pattern #1897

Merged · 9 commits · Apr 9, 2024
7 changes: 7 additions & 0 deletions docs/patterns/sso-iam-identity-center-cam.md
@@ -0,0 +1,7 @@
---
title: SSO - IAM Identity Center
---

{%
include-markdown "../../patterns/sso-iam-identity-center/README.md"
%}
patterns/sso-iam-identity-center/README.md
@@ -1,6 +1,6 @@
# IAM Identity Center Single Sign-On for Amazon EKS Cluster
# IAM Identity Center Single Sign-On for Amazon EKS Cluster with `aws-auth` ConfigMap

This example demonstrates how to deploy an Amazon EKS cluster on the AWS Cloud, integrated with IAM Identity Center (formerly AWS SSO) as the Identity Provider (IdP) for Single Sign-On (SSO) authentication. Authorization is configured using Kubernetes Role-based Access Control (RBAC).
This example demonstrates how to deploy an Amazon EKS cluster on the AWS Cloud, integrated with IAM Identity Center (formerly AWS SSO) as the Identity Provider (IdP) for Single Sign-On (SSO) authentication, with the [`aws-auth` ConfigMap](https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html) as the entry point. Authorization is configured using Kubernetes Role-based Access Control (RBAC).
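As a rough sketch of what that entry point looks like in this repository's Terraform style (the role ARN, username, and group below are placeholders for illustration, not values from this pattern):

```hcl
# Illustrative only: an aws-auth role mapping. Each entry maps an IAM role
# (such as an SSO permission-set role) to a username and Kubernetes groups;
# RBAC bindings on those groups then grant the actual in-cluster permissions.
aws_auth_roles = [
  {
    rolearn  = "arn:aws:iam::123456789012:role/AWSReservedSSO_EKSClusterUser_abcd1234"
    username = "cluster-user"
    groups   = ["eks-developers"]
  }
]
```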

## Deploy

@@ -14,10 +14,11 @@ To do that use the link provided in the email invite - *if you added a valid ema

With the active users, use one of the `terraform output` examples to configure your AWS credentials for SSO, as shown in the examples below. After you choose the *SSO registration scopes*, a browser window will open and prompt you to log in with your IAM Identity Center username and password.

**Admin user example**
```
### Admin user example

```bash
configure_sso_admin = <<EOT
# aws configure sso
# aws configure sso --profile EKSClusterAdmin
SSO session name (Recommended): <SESSION_NAME>
SSO start URL [None]: https://d-1234567890.awsapps.com/start
SSO region [None]: us-west-2
@@ -39,15 +40,16 @@ configure_sso_admin = <<EOT

To use this profile, specify the profile name using --profile, as shown:

aws eks --region us-west-2 update-kubeconfig --name iam-identity-center --profile EKSClusterAdmin-123456789012
aws eks --region us-west-2 update-kubeconfig --name iam-identity-center --profile EKSClusterAdmin

EOT
```

**Read-only user example**
```
### Read-only user example

```bash
configure_sso_user = <<EOT
# aws configure sso
# aws configure sso --profile EKSClusterUser
SSO session name (Recommended): <SESSION_NAME>
SSO start URL [None]: https://d-1234567890.awsapps.com/start
SSO region [None]: us-west-2
@@ -69,14 +71,14 @@ configure_sso_user = <<EOT

To use this profile, specify the profile name using --profile, as shown:

aws eks --region us-west-2 update-kubeconfig --name iam-identity-center --profile EKSClusterUser-123456789012
aws eks --region us-west-2 update-kubeconfig --name iam-identity-center --profile EKSClusterUser

EOT
```

With the `kubeconfig` configured, you'll be able to run `kubectl` commands in your Amazon EKS Cluster as the impersonated user. The read-only user has a `cluster-viewer` Kubernetes role bound to its group, whereas the admin user has the `admin` Kubernetes role bound to its group.
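The role-to-group wiring described above can be pictured as a Kubernetes RBAC binding. This is an illustrative assumption — the pattern's actual manifests are not shown in this diff, and the binding name is hypothetical:

```yaml
# Illustrative only: bind the cluster-viewer ClusterRole to the Kubernetes
# group that the read-only IAM role was mapped to in the aws-auth ConfigMap.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-viewer-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-viewer
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: eks-developers
```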

```
```bash
kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
amazon-guardduty aws-guardduty-agent-bl2v2 1/1 Running 0 3h54m
@@ -92,14 +94,16 @@ kube-system kube-proxy-p4f5g 1/1 Running 0 3h54
kube-system kube-proxy-q1fmc 1/1 Running 0 3h54m
```

You can also use the `configure_kubectl` output to assume the *Cluster creator* role with `cluster-admin` access.
If it was not revoked after cluster creation, you can use the `configure_kubectl` output to assume the *Cluster creator* role with `cluster-admin` access.

```
```bash
configure_kubectl = "aws eks --region us-west-2 update-kubeconfig --name iam-identity-center"
```

## Destroy

{%
include-markdown "../../docs/_partials/destroy.md"
%}
```bash
terraform destroy -target module.developers_team -target module.operators_team -auto-approve
terraform destroy -target module.eks -auto-approve
terraform destroy -auto-approve
```
patterns/sso-iam-identity-center/main.tf
@@ -5,7 +5,7 @@ provider "aws" {
data "aws_availability_zones" "available" {}

locals {
name = basename(path.cwd)
name = "sso-${basename(path.cwd)}"
region = "us-west-2"

vpc_cidr = "10.0.0.0/16"
@@ -23,7 +23,7 @@

module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "~> 19.21"
version = "~> 20.0"

cluster_name = local.name
cluster_version = "1.29"
@@ -49,6 +49,14 @@ module "eks" {
}
}

tags = local.tags

}

module "aws_auth" {
source = "terraform-aws-modules/eks/aws//modules/aws-auth"
version = "~> 20.0"

manage_aws_auth_configmap = true
aws_auth_roles = flatten(
[
@@ -57,7 +65,6 @@
]
)

tags = local.tags
}

################################################################################
112 changes: 112 additions & 0 deletions patterns/sso-iam-identity-center/cam/README.md
@@ -0,0 +1,112 @@
# IAM Identity Center Single Sign-On for Amazon EKS Cluster with Cluster Access Manager

This example demonstrates how to deploy an Amazon EKS cluster on the AWS Cloud, integrated with IAM Identity Center (formerly AWS SSO) as the Identity Provider (IdP) for Single Sign-On (SSO) authentication, with [Cluster Access Manager](https://aws.amazon.com/about-aws/whats-new/2023/12/amazon-eks-controls-iam-cluster-access-management/) as the entry point API. User authorization is simplified using a combination of [Access Entries](https://docs.aws.amazon.com/eks/latest/userguide/access-entries.html) and Kubernetes Role-based Access Control (RBAC).
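For reference, the same access-entry model can also be driven outside Terraform with the AWS CLI. The sketch below is illustrative only — the cluster name, account ID, and role name are placeholders, and the commands assume a recent AWS CLI and a live cluster:

```bash
# Illustrative only: create a namespace-scoped, read-only access entry,
# then attach the managed view policy to it.
aws eks create-access-entry \
  --cluster-name iam-identity-center \
  --principal-arn arn:aws:iam::123456789012:role/EKSClusterUser \
  --kubernetes-groups eks-developers

aws eks associate-access-policy \
  --cluster-name iam-identity-center \
  --principal-arn arn:aws:iam::123456789012:role/EKSClusterUser \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy \
  --access-scope type=namespace,namespaces=default
```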

## Deploy

See [here](https://aws-ia.github.io/terraform-aws-eks-blueprints/getting-started/#prerequisites) for the prerequisites and steps to deploy this pattern.

## Validate

After the `terraform` commands are executed successfully, check if the newly created users are active.

To do that use the link provided in the email invite - *if you added a valid email address for your users either in your Terraform code or IAM Identity Center Console* - or go to the [IAM Identity Center Console](https://console.aws.amazon.com/singlesignon/home/), in the *Users* dashboard on the left hand side menu, then select the user, and click on *Reset password* button on the upper right corner. Choose the option to *Generate a one-time password and share the password with the user*.

With the active users, use one of the `terraform output` examples to configure your AWS credentials for SSO, as shown in the examples below. After you choose the *SSO registration scopes*, a browser window will open and prompt you to log in with your IAM Identity Center username and password.

### Admin user example

```bash
configure_sso_admin = <<EOT
# aws configure sso --profile EKSClusterAdmin
SSO session name (Recommended): <SESSION_NAME>
SSO start URL [None]: https://d-1234567890.awsapps.com/start
SSO region [None]: us-west-2
SSO registration scopes [sso:account:access]:
Attempting to automatically open the SSO authorization page in your default browser.
If the browser does not open or you wish to use a different device to authorize this request, open the following URL:

https://device.sso.us-west-2.amazonaws.com/

Then enter the code:

The only AWS account available to you is: 123456789012
Using the account ID 123456789012
The only role available to you is: EKSClusterAdmin
Using the role name EKSClusterAdmin
CLI default client Region [us-west-2]: us-west-2
CLI default output format [json]: json
CLI profile name [EKSClusterAdmin-123456789012]:

To use this profile, specify the profile name using --profile, as shown:

aws eks --region us-west-2 update-kubeconfig --name iam-identity-center --profile EKSClusterAdmin


EOT
```

### Read-only user example

```bash
configure_sso_user = <<EOT
# aws configure sso --profile EKSClusterUser
SSO session name (Recommended): <SESSION_NAME>
SSO start URL [None]: https://d-1234567890.awsapps.com/start
SSO region [None]: us-west-2
SSO registration scopes [sso:account:access]:
Attempting to automatically open the SSO authorization page in your default browser.
If the browser does not open or you wish to use a different device to authorize this request, open the following URL:

https://device.sso.us-west-2.amazonaws.com/

Then enter the code:

The only AWS account available to you is: 123456789012
Using the account ID 123456789012
The only role available to you is: EKSClusterUser
Using the role name EKSClusterUser
CLI default client Region [us-west-2]: us-west-2
CLI default output format [json]: json
CLI profile name [EKSClusterUser-123456789012]:

To use this profile, specify the profile name using --profile, as shown:

aws eks --region us-west-2 update-kubeconfig --name iam-identity-center --profile EKSClusterUser

EOT
```

With the `kubeconfig` configured, you'll be able to run `kubectl` commands in your Amazon EKS Cluster as the impersonated user. The read-only user has a `cluster-viewer` Kubernetes role bound to its group, whereas the admin user has the `admin` Kubernetes role bound to its group.

```bash
kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
amazon-guardduty aws-guardduty-agent-bl2v2 1/1 Running 0 3h54m
amazon-guardduty aws-guardduty-agent-s2vcx 1/1 Running 0 3h54m
amazon-guardduty aws-guardduty-agent-w8gfc 1/1 Running 0 3h54m
kube-system aws-node-m9hmd 1/1 Running 0 3h53m
kube-system aws-node-w42b8 1/1 Running 0 3h53m
kube-system aws-node-wm6rm 1/1 Running 0 3h53m
kube-system coredns-6ff9c46cd8-94jlr 1/1 Running 0 3h59m
kube-system coredns-6ff9c46cd8-n2mrb 1/1 Running 0 3h59m
kube-system kube-proxy-7fb86 1/1 Running 0 3h54m
kube-system kube-proxy-p4f5g 1/1 Running 0 3h54m
kube-system kube-proxy-q1fmc 1/1 Running 0 3h54m
```

If it was not revoked after cluster creation, you can use the `configure_kubectl` output to assume the *Cluster creator* role with `cluster-admin` access.

```bash
configure_kubectl = "aws eks --region us-west-2 update-kubeconfig --name iam-identity-center"
```

## Destroy

If you revoked the *Cluster creator* `cluster-admin` permission, you may need to re-associate the `AmazonEKSClusterAdminPolicy` access entry to run `terraform destroy`.
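As an illustrative sketch of that re-association (the account ID and role name are placeholders for the identity that runs Terraform, not values from this pattern):

```bash
# Illustrative only: re-grant cluster-admin to the Terraform identity so
# that terraform destroy can reach the cluster again.
aws eks create-access-entry \
  --cluster-name iam-identity-center \
  --principal-arn arn:aws:iam::123456789012:role/terraform-runner

aws eks associate-access-policy \
  --cluster-name iam-identity-center \
  --principal-arn arn:aws:iam::123456789012:role/terraform-runner \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy \
  --access-scope type=cluster
```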

```bash
terraform destroy -target module.developers_team -auto-approve
terraform destroy -target module.eks -auto-approve
terraform destroy -auto-approve
```
119 changes: 119 additions & 0 deletions patterns/sso-iam-identity-center/cam/main.tf
@@ -0,0 +1,119 @@
provider "aws" {
region = local.region
}

data "aws_availability_zones" "available" {}

locals {
name = "sso-${basename(path.cwd)}"
region = "us-west-2"

vpc_cidr = "10.0.0.0/16"
azs = slice(data.aws_availability_zones.available.names, 0, 3)

tags = {
Blueprint = local.name
GithubRepo = "github.com/aws-ia/terraform-aws-eks-blueprints"
}
}

################################################################################
# Cluster
################################################################################

module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "~> 20.0"

cluster_name = local.name
cluster_version = "1.29"
cluster_endpoint_public_access = true

# EKS Addons
cluster_addons = {
coredns = {}
kube-proxy = {}
vpc-cni = {}
}

vpc_id = module.vpc.vpc_id
subnet_ids = module.vpc.private_subnets

eks_managed_node_groups = {
initial = {
instance_types = ["m5.large"]

min_size = 1
max_size = 5
desired_size = 3
}
}
# Give the Terraform identity admin access to the cluster just for the deployment phase.
# You can revoke these permissions after the cluster is created, since the "operators" referenced below have the same access level
enable_cluster_creator_admin_permissions = true

# This sets the cluster authentication mode to API only, meaning the `aws-auth` ConfigMap will not work in this example.
authentication_mode = "API"

access_entries = {
# One access entry with a policy associated
operators = {
principal_arn = tolist(data.aws_iam_roles.admin.arns)[0]

policy_associations = {
operators = {
policy_arn = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy"
access_scope = {
type = "cluster"
}
}
}
},
developers = {
kubernetes_groups = ["eks-developers"]
principal_arn = tolist(data.aws_iam_roles.user.arns)[0]

policy_associations = {
developers = {
policy_arn = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy"
access_scope = {
namespaces = ["default"]
type = "namespace"
}
}
}
}
}

tags = local.tags
}
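The `data.aws_iam_roles.admin` and `data.aws_iam_roles.user` references in the access entries above are defined outside the lines shown in this diff. As an assumption, they likely look up the SSO-provisioned permission-set roles roughly like this (the name regexes are guesses, not this pattern's actual code):

```hcl
# Assumed sketch, not this pattern's actual definitions: find the IAM roles
# that IAM Identity Center provisioned for each permission set.
data "aws_iam_roles" "admin" {
  name_regex  = "AWSReservedSSO_EKSClusterAdmin_.*"
  path_prefix = "/aws-reserved/sso.amazonaws.com/"
}

data "aws_iam_roles" "user" {
  name_regex  = "AWSReservedSSO_EKSClusterUser_.*"
  path_prefix = "/aws-reserved/sso.amazonaws.com/"
}
```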

################################################################################
# Supporting Resources
################################################################################

module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "~> 5.0"

manage_default_vpc = true

name = local.name
cidr = local.vpc_cidr

azs = local.azs
private_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 4, k)]
public_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 8, k + 48)]

enable_nat_gateway = true
single_nat_gateway = true

private_subnet_tags = {
"kubernetes.io/role/internal-elb" = 1
}
public_subnet_tags = {
"kubernetes.io/role/elb" = 1
}

tags = local.tags
}