
HMC installation for development

Below is an example of how to install HMC for development purposes and create a managed cluster on AWS with k0s for testing. In this example, a kind cluster acts as the management cluster.

Prerequisites

Clone HMC repository

git clone https://github.com/Mirantis/hmc.git && cd hmc

Install required CLIs

Run:

make cli-install

AWS Provider Setup

Follow the instructions to configure the AWS provider: AWS Provider Setup

The following environment variables must be set in order to deploy a dev cluster on AWS:

  • AWS_ACCESS_KEY_ID
  • AWS_SECRET_ACCESS_KEY
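
For example, both variables can be exported in your shell before deploying; the values below are placeholders, not real credentials:

```shell
# Placeholder values -- substitute your own IAM credentials
export AWS_ACCESS_KEY_ID="AKIAEXAMPLEKEYID"
export AWS_SECRET_ACCESS_KEY="example-secret-access-key"
```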

Azure Provider Setup

Follow the instructions on how to configure the Azure provider.

Additionally, to deploy a dev cluster on Azure, the following environment variables should be set before running the deployment:

  • AZURE_SUBSCRIPTION_ID - Subscription ID
  • AZURE_TENANT_ID - Service principal tenant ID
  • AZURE_CLIENT_ID - Service principal App ID
  • AZURE_CLIENT_SECRET - Service principal password

A more detailed description of these parameters can be found here.
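
Taken together, the Azure variables can be exported as a block before deployment; the values below are placeholders, not real service principal credentials:

```shell
# Placeholder values -- substitute your own subscription and service principal details
export AZURE_SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"
export AZURE_TENANT_ID="00000000-0000-0000-0000-000000000000"
export AZURE_CLIENT_ID="00000000-0000-0000-0000-000000000000"
export AZURE_CLIENT_SECRET="example-client-secret"
```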

vSphere Provider Setup

Follow the instructions on how to configure the vSphere provider.

To properly deploy the dev cluster you need to have the following variables set:

  • VSPHERE_USER
  • VSPHERE_PASSWORD
  • VSPHERE_SERVER
  • VSPHERE_THUMBPRINT
  • VSPHERE_DATACENTER
  • VSPHERE_DATASTORE
  • VSPHERE_RESOURCEPOOL
  • VSPHERE_FOLDER
  • VSPHERE_CONTROL_PLANE_ENDPOINT
  • VSPHERE_VM_TEMPLATE
  • VSPHERE_NETWORK
  • VSPHERE_SSH_KEY

The variable names mirror the corresponding parameters in ManagementCluster. For a full explanation of each parameter, see vSphere cluster parameters and vSphere machine parameters.
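
As a sketch, the full set can be exported before deployment; every value below is a placeholder to be replaced with details from your own vSphere environment:

```shell
# Placeholder values -- substitute the details of your vSphere environment
export VSPHERE_USER="administrator@vsphere.example.com"
export VSPHERE_PASSWORD="example-password"
export VSPHERE_SERVER="vcenter.example.com"
export VSPHERE_THUMBPRINT="AA:BB:CC:DD:EE:FF:00:11:22:33:44:55:66:77:88:99:AA:BB:CC:DD"
export VSPHERE_DATACENTER="example-datacenter"
export VSPHERE_DATASTORE="example-datastore"
export VSPHERE_RESOURCEPOOL="example-resource-pool"
export VSPHERE_FOLDER="example-folder"
export VSPHERE_CONTROL_PLANE_ENDPOINT="192.0.2.10"
export VSPHERE_VM_TEMPLATE="example-vm-template"
export VSPHERE_NETWORK="example-network"
export VSPHERE_SSH_KEY="ssh-ed25519 AAAA... user@example"
```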

EKS Provider Setup

To properly deploy dev cluster you need to have the following variable set:

  • DEV_PROVIDER - should be "eks"

The rest of deployment procedure is the same as for other providers.
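
As stated above, only DEV_PROVIDER needs to be set:

```shell
# Select the EKS provider for the dev deployment
export DEV_PROVIDER=eks
```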

Deploy HMC

The default provider used to deploy the cluster is AWS. To use another provider, set the DEV_PROVIDER variable to its name before running make (e.g. export DEV_PROVIDER=azure).

  1. Configure your cluster parameters in the provider-specific file (for example, config/dev/aws-managedcluster.yaml in the case of AWS):

    • Configure the name of the ManagedCluster
    • Change instance type or size for control plane and worker machines
    • Specify the number of control plane and worker machines, etc
  2. Run make dev-apply to deploy and configure the management cluster.

  3. Wait a couple of minutes for management components to be up and running.

  4. Apply credentials for your provider by executing make dev-creds-apply.

  5. Run make dev-mcluster-apply to deploy managed cluster on provider of your choice with default configuration.

  6. Wait for infrastructure to be provisioned and the cluster to be deployed. You may watch the process with the ./bin/clusterctl describe command. Example:

export KUBECONFIG=~/.kube/config

./bin/clusterctl describe cluster <managedcluster-name> -n hmc-system --show-conditions all

Note

If you encounter any errors in the output of clusterctl describe cluster, inspect the logs of the capa-controller-manager with:

kubectl logs -n hmc-system deploy/capa-controller-manager

This may help identify any potential issues with deployment of the AWS infrastructure.

  7. Retrieve the kubeconfig of your managed cluster:
kubectl --kubeconfig ~/.kube/config get secret -n hmc-system <managedcluster-name>-kubeconfig -o=jsonpath={.data.value} | base64 -d > kubeconfig
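
The secret stores the kubeconfig base64-encoded, which is why the command above pipes through base64 -d. A minimal illustration of the decode step (the encoded string here is a stand-in, not real cluster data):

```shell
# Decoding a base64-encoded value, as done for the kubeconfig secret above
encoded="YXBpVmVyc2lvbjogdjE="
echo "$encoded" | base64 -d   # prints "apiVersion: v1"
```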

Running E2E tests locally

E2E tests can be run locally via the make test-e2e target. For CI to deploy properly, a non-local registry must be used, and the Helm charts and hmc-controller image must exist in that registry; for example, using GHCR:

IMG="ghcr.io/mirantis/hmc/controller-ci:v0.0.1-179-ga5bdf29" \
    REGISTRY_REPO="oci://ghcr.io/mirantis/hmc/charts-ci" \
    make test-e2e

Optionally, the NO_CLEANUP=1 env var can be used to prevent After nodes from running within some specs; this lets you debug tests by re-running them without waiting for a fresh infrastructure deployment. For subsequent runs, pass the MANAGED_CLUSTER_NAME=<cluster name> env var to tell the test which cluster name to use, so that it does not generate a new name and deploy a new cluster.

Tests that run locally use autogenerated names like 12345678-e2e-test, while tests that run in CI use names such as ci-1234567890-e2e-test. You can always pass MANAGED_CLUSTER_NAME= from the get-go to customize the name used by the test.

Nuke created resources

In CI, make dev-aws-nuke is run to clean up test resources; you can do so manually with:

CLUSTER_NAME=example-e2e-test make dev-aws-nuke