This repository provides Terraform scripts and configuration files for a demo environment of the Juniper Cloud-Native Router (JCNR) on AWS. It provisions the required AWS resources and configures JCNR in both the east and west VPCs.
## Table of Contents

- AWS Resources Created
- Directory Structure
- Before You Start
- Prerequisites
- Demo Topology
- Setup Guide
  - 1. Clone the Repository
  - 2. Install Necessary Tools
  - 3. AWS Configuration
  - 4. Terraform Initialization and Apply
  - 5. Labeling the EKS Worker Node
  - 6. Setting up JCNR Secrets
  - Optional: Simplified Configuration of Node Labels and Secrets using `setup.sh`
  - 7. AWS Marketplace Subscription for JCNR
  - 8. Helm Setup for JCNR
  - 9. Install JCNR with Helm
  - 10. Configure JCNR and Add Workloads
- Important Configuration Consistency Note
- Resource Cleanup
## AWS Resources Created

- VPC and associated subnets.
- VPC peering between the VPCs.
- EKS cluster with a single worker node.
- Additional ENI interfaces on the EKS node for the JCNR data plane.
- EC2 instances acting as L3VPN CE devices.
- SSH key-pair for accessing EC2 instances.
- Multus CNI driver for Kubernetes.
- EBS CSI driver for Kubernetes.
- DPDK environment setup DaemonSet on the worker node.
- Kubeconfig updated to incorporate the newly created EKS cluster.
- Local `~/.ssh/config` updated for direct SSH access to EC2 instances running a CE workload.
## Directory Structure

```
.
├── config-east/
│   ├── charts/           # JCNR Helm chart variables for the east VPC
│   ├── config/           # JCNR and workloads configuration for east VPC
│   └── tf/               # Terraform variables for east VPC
├── config-west/
│   ├── charts/           # JCNR Helm chart variables for the west VPC
│   ├── config/           # JCNR and workloads configuration for west VPC
│   └── tf/               # Terraform variables for west VPC
├── tf-aws/               # Terraform scripts for AWS resources
├── secrets/              # K8s secrets manifest for JCNR and setup script
└── install-tools.sh      # Script to install required tools
```
## Before You Start

For a smooth deployment experience, we recommend utilizing two separate machines or virtual machines (VMs) as your setup environment. This ensures that there's no overlap or confusion between the two EKS clusters and their respective Terraform operations. While the guide is crafted for Ubuntu 22.04 as the primary setup machine, other Linux distributions such as CentOS or Rocky Linux should also be compatible. macOS users can adapt this guide, though there might be minor differences in some steps.
## Prerequisites

- An active AWS account to obtain the necessary AWS access token.
- Git installed on your setup machine.
- Basic knowledge of AWS, Kubernetes, and Terraform.
- Familiarity with Junos, JCNR, and L3VPN concepts.
## Setup Guide

### 1. Clone the Repository

```bash
git clone https://github.com/simonrho/jcnr-in-aws-demo.git
cd jcnr-in-aws-demo
```
### 2. Install Necessary Tools

Run the provided script to install the required tools:

```bash
./install-tools.sh
```
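The exact tool list installed by the script isn't spelled out here, but the rest of this guide relies on the AWS CLI, Terraform, kubectl, Helm, and jq, so a quick sanity check such as the following (a generic check, not part of the repository) can confirm they are available:

```bash
# Verify the tools used later in this guide are on the PATH.
aws --version
terraform version
kubectl version --client
helm version --short
jq --version
```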
### 3. AWS Configuration

To configure the AWS Command Line Interface (CLI), you'll first need to obtain your Access Key ID and Secret Access Key from the AWS Management Console. Follow the steps below:
1. Sign in to the AWS Management Console.
2. Click on your username at the top right corner of the console.
3. From the drop-down menu, choose "My Security Credentials".
4. Under the "Access keys (access key ID and secret access key)" section, click on "Create New Access Key". This will generate a new set of credentials.
5. You'll see a pop-up window showing your newly created Access Key ID and Secret Access Key. Click "Download .csv" to save these credentials or note them down securely. Important: This is the only time you'll be able to view the Secret Access Key via the AWS Console. Ensure you store it securely.
Now that you have your AWS Access Key ID and Secret Access Key, you can configure AWS CLI:
```bash
aws configure
```
You'll be prompted to provide the following details:
- AWS Access Key ID: Enter the Access Key ID from the previously downloaded .csv or the one you noted down.
- AWS Secret Access Key: Enter the Secret Access Key.
- Default region name: Enter your preferred AWS region (e.g., `us-east-2`).
- Default output format: You can select `json`, `yaml`, `text`, or leave it blank for the default.
Note: Always ensure you store your AWS credentials securely and avoid exposing them in any public or insecure locations.
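Once configured, you can confirm that the CLI can authenticate; `aws sts get-caller-identity` should return your account ID and IAM identity:

```bash
# Confirm the configured credentials work and show the active profile settings.
aws sts get-caller-identity
aws configure list
```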
### 4. Terraform Initialization and Apply

Before running Terraform, copy the appropriate `variables.tf` file from the east/west config directory to the `tf-aws` directory, and then navigate to the `tf-aws` directory:
For the East VPC:

```bash
cp config-east/tf/variables.tf tf-aws/
```

For the West VPC:

```bash
cp config-west/tf/variables.tf tf-aws/
```
Now, switch to the `tf-aws` directory and initialize Terraform:

```bash
cd tf-aws/
terraform init
```
Apply the Terraform configurations:

```bash
terraform apply
```
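As noted in the resource list above, Terraform updates your kubeconfig for the new EKS cluster, so once the apply completes you can run a quick, optional sanity check (not part of the repository's scripts):

```bash
# The single EKS worker node should appear and reach the Ready state.
kubectl get nodes -o wide

# List any outputs exported by the Terraform configuration, if defined.
terraform output
```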
### 5. Labeling the EKS Worker Node

The JCNR deployment targets EKS worker nodes with a specific label. You can manually add this label using the following command:

```bash
kubectl label nodes $(kubectl get nodes -o json | jq -r .items[0].metadata.name) key1=jcnr --overwrite
```
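To confirm the label took effect, filter the node list by it; the worker node should be returned:

```bash
# The worker node should show up when filtering on the JCNR label.
kubectl get nodes -l key1=jcnr --show-labels
```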
**Note on DPDK Environment Setup:**

During the infrastructure provisioning process, Terraform is employed to automate the creation and configuration of the required AWS resources. One such resource is a DaemonSet named `dpdk-env-setup`. This service is designed to set up the DPDK running environment tailored for JCNR on your AWS Elastic Kubernetes Service (EKS) nodes.

The `dpdk-env-setup` specifically targets worker nodes that are identified by a unique tag/label. If you wish to modify which nodes are targeted, you can adjust this tag/label specification directly in the Terraform configuration code (`variables.tf`). Furthermore, this tag/label value has significance beyond just the DPDK setup; it's also referenced during the JCNR Helm chart installation, as specified in the `values.yaml` file within the JCNR Helm charts.
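If you want to confirm that the `dpdk-env-setup` DaemonSet has scheduled a pod on the labeled node, a search across all namespaces (the namespace it runs in isn't stated in this guide) should locate it:

```bash
# Locate the DPDK environment setup DaemonSet and its pod.
kubectl get daemonsets --all-namespaces | grep dpdk-env-setup
kubectl get pods --all-namespaces -o wide | grep dpdk-env-setup
```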
### 6. Setting up JCNR Secrets

Before you proceed with the installation of JCNR, it's crucial to configure `jcnr-secrets.yaml` with the required credentials. There are two approaches to achieve this: manually, or using the provided assistant tool.
- Enter the JCNR root password and your Juniper Cloud-Native Router license file into the `secrets/jcnr-secrets.yaml` file.

Sample contents of the `jcnr-secrets.yaml` file:
```yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: jcnr
---
apiVersion: v1
kind: Secret
metadata:
  name: jcnr-secrets
  namespace: jcnr
data:
  root-password: <add your password in base64 format>
  crpd-license: |
    <add your license in base64 format>
```
- Encode the password and license in base64:

  For the password:

  ```bash
  echo "YourPlainTextPassword" > rootPasswordFile
  base64 -w 0 rootPasswordFile
  ```

  For the license:

  ```bash
  base64 -w 0 licenseFile
  ```

- Copy the base64 outputs and paste them into the `secrets/jcnr-secrets.yaml` file at the respective places.

- Apply the secrets to Kubernetes:

  ```bash
  kubectl apply -f secrets/jcnr-secrets.yaml
  ```
NOTE: Without the proper base64-encoded license file and JCNR root password in the `jcnr-secrets.yaml` file, the cRPD Pod will remain in a `CrashLoopBackOff` state.
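Before moving on, you can optionally verify that the namespace and secret exist and that the stored password decodes back to the plain-text value you encoded:

```bash
# Check that the namespace and secret were created.
kubectl get namespace jcnr
kubectl get secret jcnr-secrets -n jcnr

# Decode the stored root password to confirm the base64 value round-trips.
kubectl get secret jcnr-secrets -n jcnr -o jsonpath='{.data.root-password}' | base64 -d
```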
For a more streamlined approach, use the `build-secrets.sh` script. Before you start, create two files: `jcnr-root-password.txt` (JCNR root password) and `jcnr-license.txt` (JCNR license). These files are user-provided and are not part of the git-cloned files.
- Run the script:

  ```bash
  ./build-secrets.sh <path-to-root-password-file> <path-to-jcnr-license-file>
  ```

  Example:

  ```bash
  ./build-secrets.sh jcnr-root-password.txt jcnr-license.txt
  ```
- After execution, the generated `jcnr-secrets.yaml` will be in the current directory. Verify with:

  ```bash
  ls
  cat jcnr-secrets.yaml
  ```

- Apply the secrets to Kubernetes:

  ```bash
  kubectl apply -f jcnr-secrets.yaml
  ```
NOTE: Ensure your license file is obtained from your account team and integrated correctly. Otherwise, the cRPD Pod might face issues.
### Optional: Simplified Configuration of Node Labels and Secrets using setup.sh

For those looking to simplify and automate the processes described in Sections 5 and 6, the provided `setup.sh` script under the `secrets` directory offers an all-in-one solution. This script serves two main purposes:

- JCNR Secrets Configuration: It automates the creation of the `jcnr-secrets.yaml` file, ensuring the JCNR secrets (license and root password) are appropriately set.
- Labeling the EKS Worker Node: It ensures that the necessary label (used for targeting by the DPDK environment setup) is added to the EKS worker node.
To utilize this streamlined approach, follow the steps below:

- Change to the `secrets` directory:

  ```bash
  cd ~/demo/secrets
  ```

- Execute the `setup.sh` script:

  ```bash
  ./setup.sh
  ```
Sample output:

```
Reading root password from jcnr-root-password.txt
Reading license key from jcnr-license.txt
Creating jcnr-secrets.yaml file
Applying JCNR secrets and namespace
namespace/jcnr unchanged
secret/jcnr-secrets configured
Enter label in format key=value (default is key1=jcnr):
Adding label to eks worker nodes
```
Upon execution, the script will:

- Create and apply the `jcnr-secrets.yaml` file with the JCNR secrets.
- Add the `key1=jcnr` label to your EKS worker nodes, making them identifiable for the JCNR deployment.
NOTE: While the `setup.sh` script offers convenience, it's essential to understand the underlying manual steps (as detailed in Sections 5 & 6) to troubleshoot potential issues or customize configurations further.
### 7. AWS Marketplace Subscription for JCNR

Before you can proceed with the Helm setup and pull the JCNR Helm charts, you need to visit the AWS Marketplace and subscribe to the JCNR container product.
- Navigate to the AWS Marketplace.
- In the search bar, type "JCNR" and search.
- Click on the relevant product from the search results.
- Go through the product details and click on the "Subscribe" or "Continue to Subscribe" button.
- Complete the subscription process as prompted.
Note: Without this subscription, you won't have access to the JCNR helm charts and package images from the ECR (Elastic Container Registry). It's essential to ensure that the subscription is successful before proceeding further.
### 8. Helm Setup for JCNR

First, ensure that you are authenticated with AWS. Helm will use your AWS credentials to pull the JCNR Helm charts from the AWS Marketplace.
Login to your AWS account via Helm:
```bash
export HELM_EXPERIMENTAL_OCI=1
aws ecr get-login-password \
  --region us-east-1 | helm registry login \
  --username AWS \
  --password-stdin 709825985650.dkr.ecr.us-east-1.amazonaws.com
```
Now, pull the JCNR Helm charts:

```bash
helm pull oci://709825985650.dkr.ecr.us-east-1.amazonaws.com/juniper-networks/jcnr --version 23.2.0
```

Untar the JCNR Helm charts tarball:

```bash
tar zxvf jcnr-23.2.0.tgz
```
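The tarball should extract into a local `jcnr/` chart directory; you can confirm and inspect the chart metadata with:

```bash
# The extracted chart directory should contain Chart.yaml, values.yaml, and templates/.
ls jcnr/

# Display the chart metadata to confirm the pulled version.
helm show chart ./jcnr
```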
### 9. Install JCNR with Helm

After successfully creating all AWS resources, install JCNR with the Helm charts downloaded from the AWS Marketplace.
Use the `values.yaml` from the appropriate charts directory, `config-east/charts` or `config-west/charts`.
Now, switch to the `jcnr` directory and copy in the appropriate `values.yaml`:

For the East VPC:

```bash
cd jcnr
cp ../config-east/charts/values.yaml ./values.yaml
```

For the West VPC:

```bash
cd jcnr
cp ../config-west/charts/values.yaml ./values.yaml
```
After setting the correct values, you can proceed with the JCNR installation using Helm:

```bash
helm install jcnr .
```
Wait for a few minutes for the JCNR pods and services to be deployed. Once done, you can check the status using:

```bash
helm ls
kubectl get pods -n jcnr
kubectl get pods -n contrail
```
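If you prefer not to poll manually, `kubectl wait` can block until the pods report Ready (the timeout below is an arbitrary choice):

```bash
# Wait for all JCNR and Contrail pods to become Ready.
kubectl wait --for=condition=Ready pods --all -n jcnr --timeout=600s
kubectl wait --for=condition=Ready pods --all -n contrail --timeout=600s
```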
### 10. Configure JCNR and Add Workloads

Setting up the JCNR (Juniper Cloud-Native Router) involves two primary tasks: configuring the JCNR router itself and adding the corresponding workloads. Workloads come in two flavors: Kubernetes pods that simulate CE (Customer Edge) devices, and EC2 instances.
- **Setting up JCNR Configurations**

```bash
kubectl exec -it -n jcnr kube-crpd-worker-sts-0 -c kube-crpd-worker -- bash
```
After the Juniper cRPD banner appears:

```
Containerized Routing Protocols Daemon (CRPD)
Copyright (C) 2020-2023, Juniper Networks, Inc. All rights reserved.
```
Access the Junos CLI with `cli`, then enter the configuration mode using `edit`:

```
root@ip-172-16-1-77:/# cli
root@ip-172-16-1-77> edit
```
Within this mode, you can copy and paste the desired JCNR configurations directly from the respective `.conf` files found within the `config-east` or `config-west` directories.
Informational: The specific hostname (`ip-172-16-1-77` in the example) of your EKS node may differ depending on how AWS has provisioned your EKS cluster. Always verify the correct hostname of your EKS node when accessing the CLI.
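A simple way to see which node the cRPD pod landed on (and therefore the hostname you'll see in the prompt) is to list the pod with its node column:

```bash
# The NODE column shows the EKS worker node hosting the cRPD pod.
kubectl get pods -n jcnr -o wide
```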
- **For Kubernetes Pods:** Kubernetes configurations use `.yaml` files located in the `config-east` and `config-west` directories. When you deploy these configurations using `kubectl apply`, the system triggers the JCNR CNI driver. This driver dynamically builds the VRF configuration, adds it, and commits it to the cRPD of JCNR.
Deploy the Kubernetes workloads with:

```bash
kubectl apply -f config-east/config/red1.yaml
kubectl apply -f config-west/config/blue1.yaml
```
Note: The `kubectl apply` command is a native Kubernetes approach to create a workload. Once it's executed, it activates the JCNR CNI driver, setting up the VRF configurations dynamically.
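To confirm that the CNI driver committed the VRF configuration, you can query cRPD non-interactively; the routing-instance names depend on what is defined in `red1.yaml` and `blue1.yaml`:

```bash
# Show the routing instances the JCNR CNI driver added to cRPD.
kubectl exec -n jcnr kube-crpd-worker-sts-0 -c kube-crpd-worker -- cli show configuration routing-instances
```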
- **For EC2 Instances:** For workloads that utilize EC2 instances, the connection to JCNR happens through regular ENI interfaces and VPC subnets. In these scenarios, there's no JCNR CNI involvement. Thus, manual VRF configurations must be added to the JCNR, which are specified in the `red*.conf` and `blue*.conf` files.
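Terraform also adds host entries for the CE EC2 instances to your local `~/.ssh/config` (see the resource list at the top of this README). The alias names are defined by the Terraform code, so list them before connecting:

```bash
# List the SSH host aliases Terraform created, then connect with: ssh <alias>
grep -i '^Host ' ~/.ssh/config
```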
## Important Configuration Consistency Note

When deploying JCNR and setting up the DPDK environment on your EKS worker nodes, consistency across specific configurations is paramount. Ensure that:
- The `nodeAffinity` configuration in `values.yaml`, located in both the `config-east/charts` and `config-west/charts` directories, is set as:

```yaml
nodeAffinity:
- key: key1
  operator: In
  values:
  - jcnr
```
- The `node_selector` variable in `variables.tf` from the `config-east/tf` and `config-west/tf` folders aligns with:

```hcl
variable "node_selector" {
  description = "Node selector key-value for the Kubernetes DaemonSet adding DPDK env setup in target nodes"
  type        = map(string)
  default = {
    "key1" = "jcnr"
  }
}
```
- The label added to your EKS worker nodes via the following command matches the above configurations:

```bash
kubectl label nodes $(kubectl get nodes -o json | jq -r .items[0].metadata.name) "key1=jcnr" --overwrite
```
Ensuring consistency across these configurations guarantees that the DPDK environment setup and JCNR installation target the intended EKS worker nodes. Inconsistencies can lead to deployment errors or undesired behavior.
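A quick way to cross-check all three places at once (paths per the directory structure above; repeat the grep commands for the west configuration):

```bash
# Helm chart node affinity.
grep -A 4 'nodeAffinity' config-east/charts/values.yaml

# Terraform DaemonSet node selector.
grep -A 6 'node_selector' config-east/tf/variables.tf

# Label currently applied to the EKS worker node.
kubectl get nodes --show-labels | grep key1=jcnr
```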
## Resource Cleanup

To securely dismantle all AWS components and the JCNR deployment, follow these steps:

```bash
cd tf-aws/
terraform destroy
```
Should you encounter the `Error: context deadline exceeded` message while removing AWS resources, simply execute `terraform destroy` once more to ensure complete resource removal.
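Optionally, you can uninstall the JCNR Helm release before destroying the infrastructure, and confirm afterwards that the Terraform state is empty (neither step is required by the guide):

```bash
# Optional: remove the JCNR release before tearing down the cluster.
helm uninstall jcnr

# After terraform destroy completes, no resources should remain in the state.
terraform state list
```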