
feat: Add pattern to demonstrate how to cache large/ML container images to reduce time to start pods (#2010)
bryantbiggs authored Oct 2, 2024
1 parent 37e2ad9 commit db5c59c
Showing 13 changed files with 431 additions and 6 deletions.
21 changes: 17 additions & 4 deletions .github/scripts/mkdocs-hooks.py
```diff
@@ -10,23 +10,36 @@ def on_page_markdown(markdown, **kwargs):

 def on_files(files, config, **kwargs):
     # Add targeted-odcr screenshots to the generated build
+    path = 'patterns/targeted-odcr/assets/'
     for odcr_file in [1, 2]:
         files.append(
             File(
-                src_dir='./patterns/targeted-odcr/assets/',
-                dest_dir=os.path.join(config.site_dir, 'patterns/targeted-odcr/assets/'),
+                src_dir=f'./{path}',
+                dest_dir=os.path.join(config.site_dir, path),
                 path=f'odcr-screenshot{odcr_file}.png',
                 use_directory_urls=True
             )
         )

+    path = 'patterns/kubecost/assets/'
     files.append(
         File(
-            src_dir='./patterns/kubecost/assets/',
-            dest_dir=os.path.join(config.site_dir, 'patterns/kubecost/assets/'),
+            src_dir=f'./{path}',
+            dest_dir=os.path.join(config.site_dir, path),
             path='screenshot.png',
             use_directory_urls=True
         )
     )

+    for svg in ['cached.svg', 'uncached.svg', 'state-machine.png']:
+        files.append(
+            File(
+                src_dir=f'./patterns/ml-container-cache/assets/',
+                dest_dir=os.path.join(config.site_dir, 'patterns/machine-learning/ml-container-cache/assets/'),
+                path=svg,
+                use_directory_urls=True
+            )
+        )
+
     return files
```
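The three near-identical `files.append(File(...))` blocks in the hook above could be factored into a small helper. A sketch of that refactor — illustrative only, not part of the commit; `add_assets` is a hypothetical name and the `File` dataclass below is a stand-in for the real `mkdocs.structure.files.File` class:

```python
import os
from dataclasses import dataclass

@dataclass
class File:
    """Stand-in for mkdocs.structure.files.File (illustrative only)."""
    src_dir: str
    dest_dir: str
    path: str
    use_directory_urls: bool = True

def add_assets(files, site_dir, src_path, names, dest_path=None):
    """Register extra asset files with the generated build.

    src_path:  directory under the repo root containing the assets
    dest_path: directory under the site output (defaults to src_path)
    """
    for name in names:
        files.append(File(
            src_dir=f'./{src_path}',
            dest_dir=os.path.join(site_dir, dest_path or src_path),
            path=name,
        ))
    return files

files = []
add_assets(files, '/site', 'patterns/targeted-odcr/assets/',
           [f'odcr-screenshot{i}.png' for i in [1, 2]])
add_assets(files, '/site', 'patterns/kubecost/assets/', ['screenshot.png'])
add_assets(files, '/site', 'patterns/ml-container-cache/assets/',
           ['cached.svg', 'uncached.svg', 'state-machine.png'],
           dest_path='patterns/machine-learning/ml-container-cache/assets/')
print(len(files))  # → 6
```

The `dest_path` override mirrors what the new ml-container-cache block does: the source assets live under `patterns/ml-container-cache/`, but the docs page is published under `patterns/machine-learning/`.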
4 changes: 2 additions & 2 deletions .github/workflows/pre-commit.yml
```diff
@@ -11,9 +11,9 @@ on:

 env:
   TERRAFORM_VERSION: 1.3.10
-  TERRAFORM_DOCS_VERSION: v0.16.0
+  TERRAFORM_DOCS_VERSION: v0.19.0
+  TFLINT_VERSION: v0.53.0
   TF_PLUGIN_CACHE_DIR: ${{ github.workspace }}/.terraform.d/plugin-cache
-  TFLINT_VERSION: v0.50.2

 concurrency:
   group: '${{ github.workflow }} @ ${{ github.event.pull_request.head.label || github.head_ref || github.ref }}'
```
7 changes: 7 additions & 0 deletions docs/patterns/machine-learning/ml-container-cache.md
```diff
@@ -0,0 +1,7 @@
+---
+title: ML Container Cache
+---
+
+{%
+  include-markdown "../../../patterns/ml-container-cache/README.md"
+%}
```
105 changes: 105 additions & 0 deletions patterns/ml-container-cache/README.md
# EKS Cluster w/ Cached ML Images

This pattern demonstrates how to cache container images on an EBS volume snapshot that is then used by the nodes in an EKS cluster. The solution comprises, primarily, the following components:

1. An AWS Step Functions state machine that demonstrates an example process for creating EBS volume snapshots pre-populated with the selected container images. As part of this process, EBS fast snapshot restore is enabled by default for the created snapshots to avoid the [EBS volume initialization time penalty](https://aws.amazon.com/blogs/storage/addressing-i-o-latency-when-restoring-amazon-ebs-volumes-from-ebs-snapshots/). The state machine diagram is captured below for reference.
2. A node group that demonstrates how to mount the generated EBS volume snapshot at `/var/lib/containerd` so that containerd can use the pre-populated images. The snapshot ID is referenced via an SSM parameter data source that is populated by the Step Functions cache builder; any new snapshot created by the cache builder automatically updates the SSM parameter used by the node group.

The main benefit of caching, or pre-pulling, container images onto an EBS volume snapshot is a faster time to start pods/containers on new nodes, especially for the larger (multi-gigabyte) images common in machine-learning workloads. This avoids the time and resources required to pull and unpack container images from remote registries; instead, the images are already present where containerd expects them, allowing for faster pod startup.
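As a rough back-of-the-envelope check on why the uncached path is so slow — the bandwidth and unpack-rate figures below are assumptions for illustration, not measurements from this pattern:

```python
# Rough estimate of the cost of pulling + unpacking a large image from a
# remote registry, which the snapshot cache avoids entirely.
# All rates below are assumed, not measured.
compressed_gb = 9.5     # PyTorch image, compressed (per this pattern's README)
decompressed_gb = 20.4  # on-disk size after unpacking

registry_bw_gbps = 0.125  # ~1 Gbit/s effective registry bandwidth (assumption)
unpack_gbps = 0.150       # ~150 MB/s decompress/unpack to disk (assumption)

pull_s = compressed_gb / registry_bw_gbps
unpack_s = decompressed_gb / unpack_gbps
total_s = pull_s + unpack_s
print(f"estimated uncached start cost: ~{total_s:.0f}s")
```

Even with generous assumptions, the estimate lands in the minutes range, consistent with the roughly 6-minute uncached startup reported in the Results section below.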

### Cache Builder State Machine

<p align="center">
<img src="assets/state-machine.png" alt="cache builder state machine">
</p>

## Results

The following results use the PyTorch [nvcr.io/nvidia/pytorch:24.08-py3](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch/tags) image which is 9.5 GB compressed and 20.4 GB decompressed on disk.

Pod startup duration is captured from pod events using [ktime](https://github.com/clowdhaus/ktime).
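The measurement itself is simple: take the difference between the timestamps on the pod's status conditions. A hypothetical simplification of that computation (not ktime's actual code):

```python
from datetime import datetime, timezone

def startup_duration(conditions):
    """Seconds from scheduling to readiness, given (type, timestamp)
    pairs like those found on a Pod's status.conditions."""
    times = {cond_type: ts for cond_type, ts in conditions}
    return (times["Ready"] - times["PodScheduled"]).total_seconds()

# Example: a pod that became Ready 4 seconds after being scheduled
conds = [
    ("PodScheduled", datetime(2024, 10, 2, 12, 0, 0, tzinfo=timezone.utc)),
    ("Ready",        datetime(2024, 10, 2, 12, 0, 4, tzinfo=timezone.utc)),
]
print(startup_duration(conds))  # → 4.0
```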

### Cached

With the PyTorch image already present on the EBS volume, the pod starts up in less than 5 seconds:

<p align="center">
<img src="assets/cached.svg" alt="cached image startup time" width="80%">
</p>

### Uncached

When the PyTorch image is not present on the EBS volume, it takes roughly 6 minutes (334 seconds in the capture below) for the image to be pulled and unpacked and for the pod to start.

<p align="center">
<img src="assets/uncached.svg" alt="uncached image startup time" width="80%">
</p>

## Code

### Cache Builder

```terraform hl_lines="7-11 13-14"
{% include "../../patterns/ml-container-cache/cache_builder.tf" %}
```

### Cluster

```terraform hl_lines="5-9 52-64 66-78"
{% include "../../patterns/ml-container-cache/eks.tf" %}
```

## Deploy

See [here](https://aws-ia.github.io/terraform-aws-eks-blueprints/getting-started/#prerequisites) for the prerequisites and steps to deploy this pattern.

1. First, deploy the Step Function state machine that will create the EBS volume snapshots with the cached images.

```sh
terraform init
terraform apply -target=module.ebs_snapshot_builder -target=module.vpc --auto-approve
```

2. Once the cache builder resources have been provisioned, execute the state machine either by navigating to it in the AWS console and clicking `Start execution` (with the defaults, or with values that override them), or by running the command captured in the Terraform output value `start_execution_command` with the AWS CLI. The output looks similar to the following:

```hcl
start_execution_command = <<EOT
aws stepfunctions start-execution \
--region us-west-2 \
--state-machine-arn arn:aws:states:us-west-2:111111111111:stateMachine:cache-builder \
--input "{\"SnapshotDescription\":\"ML container image cache\",\"SnapshotName\":\"ml-container-cache\"}"
EOT
```
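The execution runs asynchronously; you can poll its status (for example via boto3's `describe_execution`) until it reaches a terminal state before moving to the next step. A minimal polling sketch — the ARN is a placeholder, and the boto3 call is shown only in a comment so the logic below is self-contained:

```python
import time

# Statuses at which a Step Functions execution stops, per the
# DescribeExecution API; anything else means "still running".
TERMINAL = {"SUCCEEDED", "FAILED", "TIMED_OUT", "ABORTED"}

def wait_for_execution(describe, execution_arn, poll_s=10):
    """Poll `describe(execution_arn)` until a terminal status is returned.

    In practice `describe` would wrap boto3:
        sfn = boto3.client("stepfunctions")
        describe = lambda arn: sfn.describe_execution(executionArn=arn)
    """
    while True:
        status = describe(execution_arn)["status"]
        if status in TERMINAL:
            return status
        time.sleep(poll_s)

# Fake describe() responses for illustration: running once, then done
responses = iter([{"status": "RUNNING"}, {"status": "SUCCEEDED"}])
status = wait_for_execution(lambda arn: next(responses),
                            "arn:aws:states:...", poll_s=0)
print(status)  # → SUCCEEDED
```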
3. Once the state machine execution has completed successfully and created an EBS volume snapshot, provision the cluster and node group that will use the cached images.

```sh
terraform apply --auto-approve
```
4. Once the EKS cluster and node group have been provisioned, deploy the provided example pod that uses a cached image to verify the time it takes the pod to reach a ready state.

```sh
kubectl apply -f pod-cached.yaml
```

You can contrast this with the time it takes to start a pod whose image is not cached on the node by using the provided `pod-uncached.yaml` file. That pod simply lacks a toleration for the NVIDIA GPU nodes, which is where the cached images are provided in this example, so it is scheduled onto a node without the cache.

```sh
kubectl apply -f pod-uncached.yaml
```

You can also perform the same steps with the small utility CLI [ktime](https://github.com/clowdhaus/ktime), which can either collect pod events to measure the duration to reach a ready state, or deploy a pod manifest and report the same:

```sh
ktime apply -f pod-cached.yaml
# or
ktime apply -f pod-uncached.yaml
```
## Destroy

```sh
terraform destroy --auto-approve
```
1 change: 1 addition & 0 deletions patterns/ml-container-cache/assets/cached.svg
1 change: 1 addition & 0 deletions patterns/ml-container-cache/assets/uncached.svg
20 changes: 20 additions & 0 deletions patterns/ml-container-cache/cache_builder.tf
```terraform
module "ebs_snapshot_builder" {
  source  = "clowdhaus/ebs-snapshot-builder/aws"
  version = "~> 1.1"

  name = local.name

  # Images to cache
  public_images = [
    "nvcr.io/nvidia/k8s-device-plugin:v0.16.2", # 120 MB compressed / 351 MB decompressed
    "nvcr.io/nvidia/pytorch:24.08-py3",         # 9.5 GB compressed / 20.4 GB decompressed
  ]

  # AZs where EBS fast snapshot restore will be enabled
  fsr_availability_zone_names = local.azs

  vpc_id    = module.vpc.vpc_id
  subnet_id = element(module.vpc.private_subnets, 0)

  tags = local.tags
}
```
118 changes: 118 additions & 0 deletions patterns/ml-container-cache/eks.tf
```terraform
locals {
  dev_name = "xvdb"
}

# SSM parameter where the `cache-builder` stores the generated snapshot ID
# This will be used to reference the snapshot when creating the EKS node group
data "aws_ssm_parameter" "snapshot_id" {
  name = module.ebs_snapshot_builder.ssm_parameter_name
}

################################################################################
# Cluster
################################################################################

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.24"

  cluster_name    = local.name
  cluster_version = "1.31"

  # Give the Terraform identity admin access to the cluster
  # which will allow it to deploy resources into the cluster
  enable_cluster_creator_admin_permissions = true
  cluster_endpoint_public_access           = true

  cluster_addons = {
    coredns                = {}
    eks-pod-identity-agent = {}
    kube-proxy             = {}
    vpc-cni                = {}
  }

  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnets

  eks_managed_node_group_defaults = {
    ebs_optimized = true
  }

  eks_managed_node_groups = {
    gpu = {
      # The EKS AL2 GPU AMI provides all of the necessary components
      # for accelerated workloads w/ EFA
      ami_type       = "AL2_x86_64_GPU"
      instance_types = ["g6e.xlarge"]

      min_size     = 1
      max_size     = 1
      desired_size = 1

      pre_bootstrap_user_data = <<-EOT
        # Mount the second volume for containerd persistent data
        # This volume contains the cached images and layers
        systemctl stop containerd kubelet
        rm -rf /var/lib/containerd/*
        echo '/dev/${local.dev_name} /var/lib/containerd xfs defaults 0 0' >> /etc/fstab
        mount -a
        systemctl restart containerd kubelet
      EOT

      # Mount a second volume for containerd persistent data
      # using the snapshot that contains the cached images and layers
      block_device_mappings = {
        (local.dev_name) = {
          device_name = "/dev/${local.dev_name}"
          ebs = {
            # Snapshot ID from the cache builder
            snapshot_id = nonsensitive(data.aws_ssm_parameter.snapshot_id.value)
            volume_size = 64
            volume_type = "gp3"
          }
        }
      }

      labels = {
        "nvidia.com/gpu.present" = "true"
        "ml-container-cache"     = "true"
      }

      taints = {
        # Ensure only GPU workloads are scheduled on this node group
        gpu = {
          key    = "nvidia.com/gpu"
          value  = "true"
          effect = "NO_SCHEDULE"
        }
      }
    }

    # This node group is for core addons such as CoreDNS
    default = {
      instance_types = ["m5.large"]

      min_size     = 1
      max_size     = 2
      desired_size = 2

      # Not required - increased to demonstrate pulling the un-cached
      # image since the default volume size is too small for the image used
      block_device_mappings = {
        xvda = {
          device_name = "/dev/xvda"
          ebs = {
            volume_size = 64
            volume_type = "gp3"
          }
        }
      }
    }
  }

  tags = local.tags
}
```
27 changes: 27 additions & 0 deletions patterns/ml-container-cache/helm.tf
```terraform
################################################################################
# Helm charts
################################################################################

resource "helm_release" "nvidia_device_plugin" {
  name             = "nvidia-device-plugin"
  repository       = "https://nvidia.github.io/k8s-device-plugin"
  chart            = "nvidia-device-plugin"
  version          = "0.16.2" # Matches image that is cached
  namespace        = "nvidia-device-plugin"
  create_namespace = true
  wait             = false

  values = [
    <<-EOT
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: 'nvidia.com/gpu.present'
                    operator: In
                    values:
                      - 'true'
    EOT
  ]
}
```
