chore: Update targeted ODCR to show p5 to reflect current usage #1946

Merged · 1 commit · May 13, 2024
8 changes: 4 additions & 4 deletions patterns/ml-capacity-block/README.md
@@ -2,11 +2,11 @@

This pattern demonstrates how to consume/utilize ML capacity block reservations (CBR) with Amazon EKS. The solution comprises two primary components:

!!! warning
    Self-managed node group(s) are required at this time to support capacity block reservations within EKS. This pattern will be updated to demonstrate EKS managed node groups once support has been implemented by the EKS service.

1. The self-managed node group that will utilize the CBR should have its subnets restricted to the availability zone where the CBR has been allocated. For example, if the CBR is allocated to `us-west-2b`, the node group should only be given subnet IDs that reside in `us-west-2b`. If subnets residing in other AZs are provided, it's possible to encounter an error such as `InvalidParameterException: The following supplied instance types do not exist ...`. This error is not guaranteed to appear and can seem random, since the underlying autoscaling group(s) provision nodes into the provided AZs at random; it occurs only when the autoscaling group tries to provision instances into an AZ where capacity is not allocated and there is insufficient on-demand capacity for the desired instance type. A hedged sketch of filtering subnets by AZ is shown after this list.

!!! warning
    Self-managed node group(s) are required at this time to support capacity block reservations within EKS. This pattern will be updated to demonstrate EKS managed node groups once support has been implemented by the EKS service.

2. The launch template utilized should specify the `instance_market_options` and `capacity_reservation_specification` arguments. This is how the node group consumes the CBR (i.e., it tells the autoscaling group to launch instances using the provided capacity reservation). A sketch of these arguments is also shown after this list.
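
To illustrate the first point, the subnets passed to the node group can be looked up by availability zone. The following is a minimal, hypothetical sketch; the `module.vpc` reference and the AZ value are assumptions for illustration, not part of this pattern:

```terraform
# Hypothetical sketch: look up only the subnets in the CBR's AZ.
# `module.vpc` and the AZ name are assumed for illustration.
data "aws_subnets" "cbr_az" {
  filter {
    name   = "vpc-id"
    values = [module.vpc.vpc_id]
  }

  filter {
    name   = "availability-zone"
    values = ["us-west-2b"] # the AZ where the CBR is allocated
  }
}
```

The resulting `data.aws_subnets.cbr_az.ids` would then be supplied as the node group's subnet IDs.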
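
For the second point, here is a minimal sketch of the two launch template arguments, assuming the input schema of the `terraform-aws-eks` module's self-managed node groups; the group name and instance type are placeholders:

```terraform
self_managed_node_groups = {
  cbr = {
    instance_type = "p5.48xlarge" # placeholder

    # ML capacity blocks launch with the `capacity-block` market type
    instance_market_options = {
      market_type = "capacity-block"
    }

    # Target the specific ML capacity block reservation
    capacity_reservation_specification = {
      capacity_reservation_target = {
        capacity_reservation_id = var.capacity_reservation_id
      }
    }
  }
}
```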

<b>Links:</b>
@@ -16,7 +16,7 @@ This pattern demonstrates how to consume/utilize ML capacity block reservations

## Code

```terraform hl_lines="53-93"
```terraform hl_lines="5-11 54-56 84-92"
{% include "../../patterns/ml-capacity-block/eks.tf" %}
```

59 changes: 6 additions & 53 deletions patterns/ml-capacity-block/eks.tf
@@ -6,7 +6,7 @@
# on how to obtain a ML capacity block reservation. Once acquired, you can provide
# the reservation ID through this input to deploy the pattern
variable "capacity_reservation_id" {
description = "The ID of the ML capacity block reservation to use for the node group"
description = "The ID of the ML capacity block reservation for the node group"
type = string
}

@@ -27,9 +27,10 @@ module "eks" {
cluster_endpoint_public_access = true

cluster_addons = {
coredns = {}
kube-proxy = {}
vpc-cni = {}
coredns = {}
eks-pod-identity-agent = {}
kube-proxy = {}
vpc-cni = {}
}

# Add security group rules on the node group security group to
@@ -53,7 +54,7 @@ module "eks" {
# Note: ML capacity block reservations are only supported
# on self-managed node groups at this time
self_managed_node_groups = {
odcr = {
cbr = {
# The EKS AL2 GPU AMI provides all of the necessary components
# for accelerated workloads w/ EFA
ami_type = "AL2_x86_64_GPU"
@@ -94,51 +95,3 @@

tags = local.tags
}

################################################################################
# Helm charts
################################################################################

resource "helm_release" "nvidia_device_plugin" {
name = "nvidia-device-plugin"
repository = "https://nvidia.github.io/k8s-device-plugin"
chart = "nvidia-device-plugin"
version = "0.14.5"
namespace = "nvidia-device-plugin"
create_namespace = true
wait = false

values = [
<<-EOT
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: 'nvidia.com/gpu.present'
operator: In
values:
- 'true'
EOT
]
}

resource "helm_release" "aws_efa_device_plugin" {
name = "aws-efa-k8s-device-plugin"
repository = "https://aws.github.io/eks-charts"
chart = "aws-efa-k8s-device-plugin"
version = "v0.4.4"
namespace = "kube-system"
wait = false

values = [
<<-EOT
nodeSelector:
vpc.amazonaws.com/efa.present: 'true'
tolerations:
- key: nvidia.com/gpu
operator: Exists
effect: NoSchedule
EOT
]
}
47 changes: 47 additions & 0 deletions patterns/ml-capacity-block/helm.tf
@@ -0,0 +1,47 @@
################################################################################
# Helm charts
################################################################################

resource "helm_release" "nvidia_device_plugin" {
name = "nvidia-device-plugin"
repository = "https://nvidia.github.io/k8s-device-plugin"
chart = "nvidia-device-plugin"
version = "0.14.5"
namespace = "nvidia-device-plugin"
create_namespace = true
wait = false

values = [
<<-EOT
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: 'nvidia.com/gpu.present'
operator: In
values:
- 'true'
EOT
]
}

resource "helm_release" "aws_efa_device_plugin" {
name = "aws-efa-k8s-device-plugin"
repository = "https://aws.github.io/eks-charts"
chart = "aws-efa-k8s-device-plugin"
version = "v0.4.4"
namespace = "kube-system"
wait = false

values = [
<<-EOT
nodeSelector:
vpc.amazonaws.com/efa.present: 'true'
tolerations:
- key: nvidia.com/gpu
operator: Exists
effect: NoSchedule
EOT
]
}
9 changes: 9 additions & 0 deletions patterns/ml-capacity-block/main.tf
@@ -57,6 +57,15 @@ locals {
}
}

################################################################################
# Output
################################################################################

output "configure_kubectl" {
description = "Configure kubectl: make sure you're logged in with the correct AWS profile and run the following command to update your kubeconfig"
value = "aws eks --region ${local.region} update-kubeconfig --name ${module.eks.cluster_name}"
}

################################################################################
# Supporting Resources
################################################################################
6 changes: 5 additions & 1 deletion patterns/nvidia-gpu-efa/README.md
@@ -17,10 +17,14 @@ The following components are demonstrated in this pattern:

## Code

```terraform hl_lines="23-25 31-68"
```terraform hl_lines="24-26 32-67"
{% include "../../patterns/nvidia-gpu-efa/eks.tf" %}
```

```terraform hl_lines="5-47"
{% include "../../patterns/nvidia-gpu-efa/helm.tf" %}
```

## Deploy

See [here](https://aws-ia.github.io/terraform-aws-eks-blueprints/getting-started/#prerequisites) for the prerequisites and steps to deploy this pattern.
69 changes: 4 additions & 65 deletions patterns/nvidia-gpu-efa/eks.tf
@@ -15,9 +15,10 @@ module "eks" {
cluster_endpoint_public_access = true

cluster_addons = {
coredns = {}
kube-proxy = {}
vpc-cni = {}
coredns = {}
eks-pod-identity-agent = {}
kube-proxy = {}
vpc-cni = {}
}

# Add security group rules on the node group security group to
@@ -35,8 +36,6 @@ module "eks" {
instance_types = ["p5.48xlarge"]

pre_bootstrap_user_data = <<-EOT
#!/usr/bin/env bash

# Mount instance store volumes in RAID-0 for kubelet and containerd
# https://github.com/awslabs/amazon-eks-ami/blob/master/doc/USER_GUIDE.md#raid-0-for-kubelet-and-containerd-raid0
/bin/setup-local-disks raid0
@@ -71,18 +70,6 @@ module "eks" {
default = {
instance_types = ["m5.large"]

# Default AMI has only 8GB of storage
block_device_mappings = {
[Review comment from Contributor Author: not strictly required for this architecture, so removing to reduce "noise"]
xvda = {
device_name = "/dev/xvda"
ebs = {
volume_size = 128
volume_type = "gp3"
delete_on_termination = true
}
}
}

min_size = 1
max_size = 2
desired_size = 2
@@ -91,51 +78,3 @@

tags = local.tags
}

################################################################################
# Helm charts
################################################################################

resource "helm_release" "nvidia_device_plugin" {
name = "nvidia-device-plugin"
repository = "https://nvidia.github.io/k8s-device-plugin"
chart = "nvidia-device-plugin"
version = "0.14.5"
namespace = "nvidia-device-plugin"
create_namespace = true
wait = false

values = [
<<-EOT
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: 'nvidia.com/gpu.present'
operator: In
values:
- 'true'
EOT
]
}

resource "helm_release" "aws_efa_device_plugin" {
name = "aws-efa-k8s-device-plugin"
repository = "https://aws.github.io/eks-charts"
chart = "aws-efa-k8s-device-plugin"
version = "v0.4.4"
namespace = "kube-system"
wait = false

values = [
<<-EOT
nodeSelector:
vpc.amazonaws.com/efa.present: 'true'
tolerations:
- key: nvidia.com/gpu
operator: Exists
effect: NoSchedule
EOT
]
}
47 changes: 47 additions & 0 deletions patterns/nvidia-gpu-efa/helm.tf
@@ -0,0 +1,47 @@
################################################################################
# Helm charts
################################################################################

resource "helm_release" "nvidia_device_plugin" {
name = "nvidia-device-plugin"
repository = "https://nvidia.github.io/k8s-device-plugin"
chart = "nvidia-device-plugin"
version = "0.14.5"
namespace = "nvidia-device-plugin"
create_namespace = true
wait = false

values = [
<<-EOT
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: 'nvidia.com/gpu.present'
operator: In
values:
- 'true'
EOT
]
}

resource "helm_release" "aws_efa_device_plugin" {
name = "aws-efa-k8s-device-plugin"
repository = "https://aws.github.io/eks-charts"
chart = "aws-efa-k8s-device-plugin"
version = "v0.4.4"
namespace = "kube-system"
wait = false

values = [
<<-EOT
nodeSelector:
vpc.amazonaws.com/efa.present: 'true'
tolerations:
- key: nvidia.com/gpu
operator: Exists
effect: NoSchedule
EOT
]
}
9 changes: 9 additions & 0 deletions patterns/nvidia-gpu-efa/main.tf
@@ -57,6 +57,15 @@ locals {
}
}

################################################################################
# Output
################################################################################

output "configure_kubectl" {
description = "Configure kubectl: make sure you're logged in with the correct AWS profile and run the following command to update your kubeconfig"
value = "aws eks --region ${local.region} update-kubeconfig --name ${module.eks.cluster_name}"
}

################################################################################
# Supporting Resources
################################################################################
2 changes: 1 addition & 1 deletion patterns/targeted-odcr/README.md
@@ -18,7 +18,7 @@ This pattern demonstrates how to consume/utilize on-demand capacity reservations

## Code

```terraform hl_lines="34-51"
```terraform hl_lines="5-8 81-88 108-131"
{% include "../../patterns/targeted-odcr/eks.tf" %}
```
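
As a hedged companion to the highlighted lines, targeting an ODCR from the launch template follows the same `capacity_reservation_specification` shape. In the sketch below, the `aws_resourcegroups_group.odcr` reference is an assumption for illustration only:

```terraform
# Hypothetical sketch: target ODCRs through a resource group of reservations.
# `aws_resourcegroups_group.odcr` is assumed for illustration.
capacity_reservation_specification = {
  capacity_reservation_target = {
    capacity_reservation_resource_group_arn = aws_resourcegroups_group.odcr.arn
  }
}
```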
