
# Highly available Kubernetes cluster on Hetzner Cloud with Autoscaling

## Motivation

AWS provides the eksctl tool for creating Kubernetes clusters, but Hetzner Cloud has no official tool for this. This tool creates new production-ready Kubernetes clusters on Hetzner Cloud with minimal user interaction. The new cluster runs in high-availability mode with automatic cluster autoscaling and automatic volume creation.

## Preparation

- Log in to https://console.hetzner.cloud and create a new project
- Select the project, open Security -> API Tokens in the menu, and create a new "Read & Write" token
- Save the token to a `.hcloudauth` file in the current directory
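For example, the token file can be created like this (the value below is a placeholder, not a real token):

```shell
# Placeholder value; replace with the "Read & Write" token from the Hetzner console.
printf '%s' 'hcloud-api-token-placeholder' > .hcloudauth
```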

## Install the binary

### macOS

```sh
brew install maksim-paskal/tap/hcloud-k8s-ctl
```

For other operating systems, download the binary from the releases page.

## Create a Kubernetes cluster

This will create a Kubernetes cluster in the Hetzner Cloud Europe region with 3 instances, 1 load balancer for the Kubernetes control plane, and 1 Kubernetes worker node. After successful installation the cluster will have:

For high availability, etcd needs an odd number of master nodes (minimum 3); see https://etcd.io/docs/v3.4/faq/#why-an-odd-number-of-cluster-members
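The reason is quorum arithmetic: a cluster of n etcd members needs floor(n/2) + 1 members to reach quorum, so an even member count adds cost without adding fault tolerance. A small sketch:

```shell
# etcd quorum = floor(n/2) + 1; tolerated failures = n - quorum
for n in 1 3 4 5; do
  quorum=$(( n / 2 + 1 ))
  echo "members=$n quorum=$quorum tolerated_failures=$(( n - quorum ))"
done
```

Note that 4 members tolerate the same single failure as 3, which is why an odd count is recommended.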

Create a simple configuration file `config.yaml` (a full configuration example is available here):

```yaml
# kubeconfig path
kubeConfigPath: ~/.kube/hcloud
# Hetzner cloud internal network CIDR
ipRange: "10.0.0.0/16"
# servers for the kubernetes master (3 recommended)
# for development purposes the cluster can have 1 master node;
# in this case the cluster is created without a load balancer and pods can schedule on the master
masterCount: 3
```

Customize the configuration file for your needs:

```yaml
# kubeconfig path
kubeConfigPath: ~/.kube/hcloud
# Hetzner cloud internal network CIDR
ipRange: "10.0.0.0/16"
# servers for the kubernetes master (3 recommended)
# for development purposes the cluster can have 1 master node;
# in this case the cluster is created without a load balancer and pods can schedule on the master
masterCount: 3
# server components for all nodes in the cluster
serverComponents:
  kubernetes:
    # customize the kubernetes version
    version: 1.25.14
  docker:
    # customize the apt package version for the docker install
    # apt-cache madison docker-ce
    version: 5:24.0.6-1~ubuntu.20.04~focal
  containerd:
    # customize the apt package version for the containerd install
    # apt-cache madison containerd.io
    version: 1.6.24-1
# add extra values to the autoscaler chart
cluster-autoscaler:
  replicaCount: 3
  resources:
    requests:
      cpu: 200m
      memory: 300Mi
# add a custom script for all nodes in the cluster
preStartScript: |
  # add a custom cron job on the node
  crontab <<EOF
  0 0 * * * /usr/bin/docker system prune -af
  EOF

  # add containerd config for some registries
  mkdir -p /etc/containerd/certs.d/some-registry.io
  cat > /etc/containerd/certs.d/some-registry.io/hosts.toml <<EOF
  server = "https://some-registry.io"

  [host."http://10.10.10.10:5000"]
  capabilities = ["pull", "resolve"]
  EOF
```
### Kubernetes v1.25 in Europe

```yaml
ipRange: "10.0.0.0/16"
masterCount: 3
serverComponents:
  kubernetes:
    version: 1.25.14
  docker:
    version: 5:24.0.6-1~ubuntu.20.04~focal
  containerd:
    version: 1.6.24-1
cluster-autoscaler:
  replicaCount: 3
  resources:
    requests:
      cpu: 100m
      memory: 300Mi
preStartScript: |
  # add some custom cron job on node
  crontab <<EOF
  0 0 * * * /usr/bin/docker system prune -af
  EOF

  # add containerd config for some registries
  mkdir -p /etc/containerd/certs.d/some-registry.io
  cat > /etc/containerd/certs.d/some-registry.io/hosts.toml <<EOF
  server = "https://some-registry.io"

  [host."http://10.10.10.10:5000"]
  capabilities = ["pull", "resolve"]
  EOF
```
### Kubernetes v1.26 in Europe

```yaml
ipRange: "10.0.0.0/16"
masterCount: 3
serverComponents:
  kubernetes:
    version: 1.26.9
  docker:
    version: 5:24.0.6-1~ubuntu.20.04~focal
  containerd:
    version: 1.6.24-1
```
### Kubernetes v1.27 in Europe

```yaml
ipRange: "10.0.0.0/16"
masterCount: 3
serverComponents:
  kubernetes:
    version: 1.27.6
  docker:
    version: 5:24.0.6-1~ubuntu.20.04~focal
  containerd:
    version: 1.6.24-1
```
### Kubernetes v1.28 in Europe

```yaml
ipRange: "10.0.0.0/16"
masterCount: 3
serverComponents:
  kubernetes:
    version: 1.28.2
  docker:
    version: 5:24.0.6-1~ubuntu.20.04~focal
  containerd:
    version: 1.6.24-1
```
### Kubernetes v1.28 in US East

```yaml
ipRange: "10.0.0.0/16"
masterCount: 3
networkZone: us-east
location: ash
datacenter: ash-dc1
masterServers:
  servertype: cpx21
serverComponents:
  kubernetes:
    version: 1.28.2
  docker:
    version: 5:24.0.6-1~ubuntu.20.04~focal
  containerd:
    version: 1.6.24-1
cluster-autoscaler:
  autoscalingGroups:
  - name: CPX51:ASH:cpx51-ash
    minSize: 1
    maxSize: 20
```
### Kubernetes v1.28 in Europe (ARM64 architecture)

```yaml
ipRange: "10.0.0.0/16"
masterCount: 3
serverComponents:
  ubuntu:
    architecture: arm
  kubernetes:
    version: 1.28.2
  docker:
    version: 5:24.0.6-1~ubuntu.20.04~focal
  containerd:
    version: 1.6.24-1
masterServers:
  servertype: cax11
cluster-autoscaler:
  autoscalingGroups:
  - name: CAX41:FSN1:cax-fsn1
    minSize: 1
    maxSize: 20
```
```sh
# create 3 instances with 1 load balancer
# the kubernetes autoscaler will create 1 worker node
hcloud-k8s-ctl -action=create
```

All nodes in the cluster are initialized with the official kubeadm. All nodes use this script, master initialization uses this script, and the initial applications in the cluster use this script.

## Access the cluster

```sh
export KUBECONFIG=$HOME/.kube/hcloud

kubectl get no
```

## Patch an already created cluster

```sh
hcloud-k8s-ctl -action=patch-cluster
```

## List available locations/datacenters/server types at Hetzner

```sh
hcloud-k8s-ctl -action=list-configurations
```

## Delete an already created cluster

```sh
hcloud-k8s-ctl -action=delete
```

## Install the NFS provisioner

You can easily install an NFS provisioner in your cluster by adding the following lines to your `config.yaml`:

```yaml
deployments:
  nfs:
    nfs-subdir-external-provisioner:
      enabled: true
    server:
      enabled: true
```

This installs the NFS Provisioner for Kubernetes (optional) together with an NFS server and a storage class.

You can then easily create new NFS volumes for your pods with a PersistentVolumeClaim like this:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: nfs
```
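As a sketch, a pod could mount this claim as follows (the pod name, container name, and image are illustrative, not part of this tool):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nfs-test        # illustrative name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: data
      mountPath: /data  # NFS volume appears here inside the container
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: pvc-nfs
```

Because the claim is `ReadWriteMany`, several pods can mount the same volume simultaneously.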