diff --git a/manifests/modules/fundamentals/mng/.workshop/cleanup.sh b/manifests/modules/fundamentals/mng/.workshop/cleanup.sh
index edac0d30c..faf1ac390 100644
--- a/manifests/modules/fundamentals/mng/.workshop/cleanup.sh
+++ b/manifests/modules/fundamentals/mng/.workshop/cleanup.sh
@@ -8,4 +8,13 @@ if [ ! -z "$taint_result" ]; then
echo "Deleting taint node group..."
eksctl delete nodegroup taint-mng --cluster $EKS_CLUSTER_NAME --wait > /dev/null
+fi
+
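+# Delete the managed-spot node group created in this lab, if it exists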
+spot_nodegroup=$(aws eks list-nodegroups --cluster-name $EKS_CLUSTER_NAME --query "nodegroups[? @ == 'managed-spot']" --output text)
+
+if [ ! -z "$spot_nodegroup" ]; then
+ echo "Deleting managed-spot node group..."
+
+ aws eks delete-nodegroup --region $AWS_REGION --cluster-name $EKS_CLUSTER_NAME --nodegroup-name managed-spot > /dev/null
+ aws eks wait nodegroup-deleted --cluster-name $EKS_CLUSTER_NAME --nodegroup-name managed-spot > /dev/null
fi
\ No newline at end of file
diff --git a/website/docs/fundamentals/managed-node-groups/spot/index.md b/website/docs/fundamentals/managed-node-groups/spot/index.md
index cea39e1d6..a3d2aa3a4 100644
--- a/website/docs/fundamentals/managed-node-groups/spot/index.md
+++ b/website/docs/fundamentals/managed-node-groups/spot/index.md
@@ -13,25 +13,18 @@ In this lab exercise, we'll look at how we can provision Spot capacity for our E
# EKS managed node groups with Spot capacity
-Amazon EKS managed node groups with Spot capacity enhances the managed node group experience with ease to provision and manage EC2 Spot Instances. EKS managed node groups launch an EC2 Auto Scaling group with Spot best practices and handle Spot Instance interruptions automatically. This enables you to take advantage of the steep savings that Spot Instances provide for your interruption tolerant containerized applications. In addition, EKS managed node groups with Spot capacity have the following advantages:
+In this module, we will first deploy a managed node group that provisions Spot instances, and then modify the existing `catalog` component of our application to run on the newly created Spot instances.
-* The allocation strategy to provision Spot capacity is set to "capacity-optimized" to ensure that your Spot nodes are provisioned in the optimal Spot capacity pools. To increase the number of Spot capacity pools available for allocating capacity from, configure a managed node group to use multiple instance types.
-* Specify multiple instance types during managed node groups creation, to increase the number of Spot capacity pools available for allocating capacity.
-* Nodes provisioned under managed node groups with Spot capacity are automatically tagged with capacity type: `eks.amazonaws.com/capacityType: SPOT`. You can use this label to schedule fault tolerant applications on Spot nodes.
-* Amazon EC2 Spot Capacity Rebalancing enabled to ensure Amazon EKS can gracefully drain and rebalance your Spot nodes to minimize application disruption when a Spot node is at elevated risk of interruption.
+Let's get started by listing all of the nodes in our existing EKS cluster. The `kubectl get nodes` command lists the nodes in your Kubernetes cluster, and the `-L eks.amazonaws.com/capacityType` parameter adds a column showing the capacity type of each node.
-
-Let’s get started by listing all of the nodes in our existing EKS Cluster. The `kubectl get nodes` command can be used to list the nodes in your Kubernetes cluster. To include additional labels such as `eks.amazonaws.com/capacityType` and `eks.amazonaws.com/nodegroup`, You can use “-L, —label-columns”.
-
-The following output provides a list of available nodes that are exclusively on-demand instances.
+The following command shows that our nodes are currently **On-Demand** instances.
```bash
-$ kubectl get nodes -L eks.amazonaws.com/capacityType,eks.amazonaws.com/nodegroup
-
-NAME STATUS ROLES AGE VERSION CAPACITYTYPE NODEGROUP
-ip-10-42-10-232.us-west-2.compute.internal Ready 113m v1.23.15-eks-49d8fe8 ON_DEMAND managed-system-20230605211737831800000026
-ip-10-42-10-96.us-west-2.compute.internal Ready 113m v1.23.15-eks-49d8fe8 ON_DEMAND managed-ondemand-20230605211738568600000028
-ip-10-42-12-45.us-west-2.compute.internal Ready 113m v1.23.15-eks-49d8fe8 ON_DEMAND managed-ondemand-20230605211738568600000028
+$ kubectl get nodes -L eks.amazonaws.com/capacityType
+NAME STATUS ROLES AGE VERSION CAPACITYTYPE
+ip-10-42-103-103.us-east-2.compute.internal Ready 133m v1.25.6-eks-48e63af ON_DEMAND
+ip-10-42-142-197.us-east-2.compute.internal Ready 133m v1.25.6-eks-48e63af ON_DEMAND
+ip-10-42-161-44.us-east-2.compute.internal Ready 133m v1.25.6-eks-48e63af ON_DEMAND
```
:::tip
@@ -53,51 +46,35 @@ In the below diagram, there are two separate "node groups" representing the mana
![spot arch](../assets/managed-spot-arch.png)
-As our existing cluster already has a nodegroup with `On-Demand` instances, the next step would be to setup a node group which has EC2 instances with capacity type as `SPOT`.
-
-To achieve that, we will perform the following steps: first, Export the environment variable EKS_DEFAULT_MNG_NAME_SPOT with the value set as 'managed-spot', and then use the AWS CLI to create an EKS managed node group specifically designed for `SPOT` instances.
+Let's create a node group with Spot instances. The following commands perform two steps:
+1. Set an environment variable with the same node role we used for the `default` node group.
+1. Create a new node group `managed-spot` with our existing node role and subnets, specifying the instance types, capacity type, and scaling configuration for our new Spot node group.
```bash
-
-$ export EKS_DEFAULT_MNG_NAME_SPOT=managed-spot
$ export MANAGED_NODE_GROUP_IAM_ROLE_ARN=`aws eks describe-nodegroup --cluster-name eks-workshop --nodegroup-name default | jq -r .nodegroup.nodeRole`
$ aws eks create-nodegroup \
--cluster-name $EKS_CLUSTER_NAME \
---nodegroup-name $EKS_DEFAULT_MNG_NAME_SPOT \
+--nodegroup-name managed-spot \
--node-role $MANAGED_NODE_GROUP_IAM_ROLE_ARN \
--subnets $PRIMARY_SUBNET_1 $PRIMARY_SUBNET_2 $PRIMARY_SUBNET_3 \
--instance-types m5.large m5d.large m5a.large m5ad.large m5n.large m5dn.large \
--capacity-type SPOT \
---scaling-config minSize=2,maxSize=6,desiredSize=2 \
+--scaling-config minSize=2,maxSize=3,desiredSize=2 \
--disk-size 20 \
--labels capacity_type=managed_spot
-
```
-To track the status of Node Group creation, Run below command in a separate terminal.
-
-```bash
-$ eksctl get nodegroup --cluster=$EKS_CLUSTER_NAME
-
-CLUSTER NODEGROUP STATUS CREATED MIN SIZE MAX SIZE DESIRED CAPACITY INSTANCE TYPE IMAGE ID ASG NAME TYPE
-eks-workshop managed-ondemand-20230605211832165500000026 ACTIVE 2023-06-05T21:18:33Z 2 6 2 m5.large AL2_x86_64 eks-managed-ondemand-20230605211832165500000026-b2c446b6-828d-f79f-9338-456374559c7b managed
-eks-workshop managed-ondemand-tainted-20230605211832655900000028 ACTIVE 2023-06-05T21:18:34Z 0 1 0 m5.large AL2_x86_64 eks-managed-ondemand-tainted-20230605211832655900000028-84c446b6-837c-bf91-2e90-93ee1ec37cf8 managed
-eks-workshop managed-spot CREATING 2023-06-06T05:24:55Z 2 6 2 m5.large,m5d.large,m5a.large,m5ad.large,m5n.large,m5dn.large AL2_x86_64 managed
-eks-workshop managed-system-20230605211832120700000024 ACTIVE 2023-06-05T21:18:34Z 1 2 1 m5.large AL2_x86_64 eks-managed-system-20230605211832120700000024-26c446b6-8271-ac8a-4b54-569cf51913f9 managed
-```
-
-:::info
+:::tip
The aws `eks wait nodegroup-active` command can be used to wait until a specific EKS node group is active and ready for use. This command is part of the AWS CLI and can be used to ensure that the specified node group has been successfully created and all the associated instances are running and ready.
```bash
$ aws eks wait nodegroup-active \
--cluster-name $EKS_CLUSTER_NAME \
---nodegroup-name $EKS_DEFAULT_MNG_NAME_SPOT
+--nodegroup-name managed-spot
```
:::
-Once the Managed node group `managed-spot` status shows as “Active”, Run the below command.
-The output shows that two additional nodes got provisioned under the node group `managed-spot` with capacity type as `SPOT`.
+Once our new managed node group is **Active**, run the following command.
```bash
$ kubectl get nodes -L eks.amazonaws.com/capacityType,eks.amazonaws.com/nodegroup
@@ -110,43 +87,54 @@ ip-10-42-12-234.us-west-2.compute.internal Ready 77s v1.23.17-e
ip-10-42-12-45.us-west-2.compute.internal Ready 113m v1.23.15-eks-49d8fe8 ON_DEMAND managed-ondemand-20230605211738568600000028
```
-The above output indicates the availability of two managed node groups. To deploy the “Sample Retail Store” app and utilize `nodeSelector` for deploying it on spot instances instead of `On-Demand`, you can employ the `nodeSelector` field to define constraints based on node labels. As the existing deployment.yaml manifest in `/workspace/manifests/catalog` lacks the `nodeSelector` attribute, you can use kustomize to modify the resource configuration without directly altering the original manifests.
+
+The output shows that two additional nodes have been provisioned under the `managed-spot` node group with the capacity type `SPOT`.
+
+Next, let's modify our sample retail store application to run the catalog component on the newly created Spot instances. To do so, we'll use Kustomize to apply a patch to the `catalog` Deployment, adding a `nodeSelector` field with `capacity_type: managed_spot`.
```kustomization
modules/fundamentals/mng/spot/deployment.yaml
Deployment/catalog
```
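+
+For reference, here is a minimal sketch of what such a patch could contain (the actual `deployment.yaml` in the module may differ slightly); it adds a `nodeSelector` matching the `capacity_type: managed_spot` label we applied when creating the node group.
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: catalog
+  namespace: catalog
+spec:
+  template:
+    spec:
+      nodeSelector:
+        capacity_type: managed_spot
+```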
-Next, Deploy the app.
+Apply the Kustomize patch with the following command.
```bash
$ kubectl apply -k ~/environment/eks-workshop/modules/fundamentals/mng/spot
-namespace/catalog created
-serviceaccount/catalog created
-configmap/catalog created
-secret/catalog-db created
-service/catalog created
-service/catalog-mysql created
-deployment.apps/catalog created
-statefulset.apps/catalog-mysql created
+namespace/catalog unchanged
+serviceaccount/catalog unchanged
+configmap/catalog unchanged
+secret/catalog-db unchanged
+service/catalog unchanged
+service/catalog-mysql unchanged
+deployment.apps/catalog configured
+statefulset.apps/catalog-mysql unchanged
```
-To ensure the successful deployment of your app, you can utilize the kubectl rollout status command, which allows you to check the status of a deployment in Kubernetes. This command provides detailed information regarding the progress of the rollout and the current state of the associated resources, enabling you to verify the app's deployment status.
+Verify that the deployment rolled out successfully with the following command.
```bash
$ kubectl rollout status deployment/catalog -n catalog --timeout=5m
```
-After successfully deploying the application, the final step is to verify if it has been deployed on the `SPOT` instances. To accomplish this, run the below command.
+
+Finally, let's verify that the catalog pod is running on a Spot instance. Run the following two commands.
```bash
-$ kubectl get pod -n $CATALOG_RDS_DATABASE_NAME -o=custom-columns=NAME:.metadata.name,STATUS:.status.phase,NODE:.spec.nodeName
+$ kubectl get pods -l app.kubernetes.io/component=service -n catalog -o wide
+
+NAME READY STATUS RESTARTS AGE IP NODE
+catalog-6bf46b9654-9klmd 1/1 Running 0 7m13s 10.42.118.208 ip-10-42-99-254.us-east-2.compute.internal
```
-The output of the command will display the details of the pods, including the node that is identified as a SPOT instance.
+```bash
+$ kubectl get nodes -l eks.amazonaws.com/capacityType=SPOT
+
+NAME STATUS ROLES AGE VERSION
+ip-10-42-139-140.us-east-2.compute.internal Ready 16m v1.25.13-eks-43840fb
+ip-10-42-99-254.us-east-2.compute.internal Ready 16m v1.25.13-eks-43840fb
+
```
-Output:
-NAME STATUS NODE
-catalog-5c48f886c-rrbck Running ip-10-42-11-17.us-west-2.compute.internal
-catalog-mysql-0 Running ip-10-42-12-234.us-west-2.compute.internal
-```
\ No newline at end of file
+The first command shows that the catalog pod is running on node `ip-10-42-99-254.us-east-2.compute.internal`, which the second command confirms is a Spot instance.
+
+In this lab, you deployed a managed node group that provisions Spot instances, and then modified the `catalog` Deployment to run on the newly created Spot instances. Following the same process, you can move any of the running deployments in the cluster onto Spot capacity by adding a `nodeSelector`, as shown in the Kustomize patch above.
\ No newline at end of file