From 2937818b974bb4337fe656db39d3b1864cf4a09f Mon Sep 17 00:00:00 2001 From: github-actions Date: Sun, 12 Nov 2023 11:58:10 +0000 Subject: [PATCH] Deployed 4febcce to master with MkDocs 1.5.3 and mike 1.1.2 --- master/getting-started/installation/index.html | 2 +- master/search/search_index.json | 2 +- master/sitemap.xml.gz | Bin 127 -> 127 bytes 3 files changed, 2 insertions(+), 2 deletions(-) diff --git a/master/getting-started/installation/index.html b/master/getting-started/installation/index.html index 0a52dcfa..b5766ab9 100644 --- a/master/getting-started/installation/index.html +++ b/master/getting-started/installation/index.html @@ -34,7 +34,7 @@ -->

Verify the installation by checking the Kubitect version.

kubitect --version
 
-# kubitect version v3.2.2
+# kubitect version v3.3.0
 

Enable shell autocomplete

Tip

To list all supported shells, run: kubitect completion -h

For shell specific instructions run: kubitect completion shell -h

This script depends on the bash-completion package. If it is not installed already, you can install it via your OS's package manager.

To load completions in your current shell session:

source <(kubitect completion bash)
 

To load completions for every new session, execute once:

Linux:

kubitect completion bash > /etc/bash_completion.d/kubitect
 

macOS:

kubitect completion bash > $(brew --prefix)/etc/bash_completion.d/kubitect
diff --git a/master/search/search_index.json b/master/search/search_index.json
index 92dfd093..09bd7893 100644
--- a/master/search/search_index.json
+++ b/master/search/search_index.json
@@ -1 +1 @@
-{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"examples/accessing-cluster/","title":"Accessing the cluster","text":"

Cloud providers that support Kubernetes clusters typically provide load balancer provisioning on demand. By setting a Service type to LoadBalancer, an external load balancer is automatically provisioned with its own unique IP address. This load balancer redirects all incoming connections to the Service, as illustrated in the figure below.
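
For illustration, a minimal manifest for such a Service might look like the following sketch; the service name and the app selector are hypothetical placeholders:

apiVersion: v1
kind: Service
metadata:
  name: my-service        # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: my-app           # hypothetical app label
  ports:
    - port: 80
      targetPort: 8080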

In on-premise environments, there is no load balancer that can be provisioned on demand. Therefore, some alternative solutions are explained in this document.

"},{"location":"examples/accessing-cluster/#accessing-the-cluster","title":"Accessing the cluster","text":""},{"location":"examples/accessing-cluster/#node-ports","title":"Node ports","text":"

Setting Service type to NodePort makes Kubernetes reserve a port on all its nodes. As a result, the Service becomes available on <NodeIP>:<NodePort>, as shown in the figure below.

When using NodePort, it does not matter to which node a client sends the request, since it is routed internally to the appropriate Pod. However, if all traffic is directed to a single node, its failure will make the Service unavailable.
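
As a quick illustration (assuming a hypothetical Deployment named my-app already exists in the cluster), such a Service can be created with kubectl and the reserved port inspected afterwards:

kubectl expose deployment my-app --type=NodePort --port=80

kubectl get svc my-app
# The node port assigned by Kubernetes appears in the PORT(S) column, e.g. 80:31234/TCP.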

"},{"location":"examples/accessing-cluster/#self-provisioned-edge","title":"Self-provisioned edge","text":"

With Kubitect, it is possible to configure the port forwarding of the load balancer to distribute incoming requests to multiple nodes in the cluster, as shown in the figure below.

To set up load balancer port forwarding, at least one load balancer must be configured. The following example shows how to set up load balancer port forwarding for ports 80 (HTTP) and 443 (HTTPS).

cluster:\n  nodes:\n    loadBalancer:\n      forwardPorts:\n        - name: http\n          port: 80\n        - name: https\n          port: 443\n      instances:\n        - id: 1\n

Load balancer port forwarding is particularly handy when combined with a NodePort Service or a Service whose ports are exposed on the host. For example, for HTTP and HTTPS traffic an Ingress is most often used. To use Ingress resources in the Kubernetes cluster, an ingress controller is required. With Kubitect, a load balancer can be configured to accept connections on ports 80 and 443, and redirect them to all cluster nodes on ports 50080 and 50443 where an ingress controller is listening for incoming requests. The following code snippet shows the configuration for such a scenario.

cluster:\n  nodes:\n    loadBalancer:\n      forwardPorts:\n        - name: http\n          port: 80\n          targetPort: 50080\n          target: workers # (1)!\n        - name: https\n          port: 443\n          targetPort: 50443\n      instances:\n        - id: 1\n\naddons:\n  kubespray:\n    ingress_nginx_enabled: true\n    ingress_nginx_namespace: \"ingress-nginx\"\n    ingress_nginx_insecure_port: 50080 # (2)!\n    ingress_nginx_secure_port: 50443\n
  1. By default, each configured port instructs the load balancer to distribute traffic across all worker nodes. The default behavior can be changed using the target property.

    Possible target values are:

    • workers - Distributes traffic across worker nodes. (default)
    • masters - Distributes traffic across master nodes.
    • all - Distributes traffic across master and worker nodes.
  2. When the ingress-nginx controller is set up with Kubespray, a DaemonSet is created that exposes ports on the host (hostPort).

"},{"location":"examples/accessing-cluster/#metallb","title":"MetalLB","text":"

MetalLB is a network load balancer implementation for bare metal Kubernetes clusters. In short, it allows you to create Services of type LoadBalancer where actual on-demand load balancers are not an option.

For MetalLB to work, a pool of unused IP addresses needs to be provided. In the following example, MetalLB is configured to use an IP address pool with the IP range 10.10.13.225/27.

addons:\n  kubespray:\n    metallb_enabled: true\n    metallb_speaker_enabled: true\n    metallb_ip_range:\n      - \"10.10.13.225/27\"\n    metallb_pool_name: \"default\"\n    metallb_auto_assign: true\n    metallb_version: v0.12.1\n    metallb_protocol: \"layer2\"\n

When a Service of type LoadBalancer is created, it is assigned an IP address from the pool. For example, we could deploy an ingress-nginx controller and change its Service type to LoadBalancer.

# Deploy ingress-nginx controller\nkubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/baremetal/1.23/deploy.yaml\n\n# Patch ingress controller Service type to LoadBalancer\nkubectl patch svc ingress-nginx-controller -n ingress-nginx -p '{\"spec\": {\"type\":\"LoadBalancer\"}}'\n

As a result, MetalLB assigns the service ingress-nginx-controller an external IP address from the address pool.

kubectl get svc -n ingress-nginx ingress-nginx-controller\n\n# NAME                       TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)                      AGE\n# ingress-nginx-controller   LoadBalancer   10.233.55.194   10.10.13.225   80:31497/TCP,443:30967/TCP   63s\n

Sending a request to the assigned IP address shows that Nginx responds to it.

curl -k https://10.10.13.225\n\n# <html>\n# <head><title>404 Not Found</title></head>\n# <body>\n# <center><h1>404 Not Found</h1></center>\n# <hr><center>nginx</center>\n# </body>\n# </html>\n

This example has demonstrated the functionality of MetalLB in layer2 mode. For more MetalLB configuration options, see the official MetalLB documentation.

"},{"location":"examples/full-example/","title":"Full example","text":"

This document contains an example Kubitect configuration that covers all (or most) of the Kubitect properties. It is intended for users who learn fastest from a complete example configuration.

#\n# The 'hosts' section contains data about the physical servers on which the\n# Kubernetes cluster will be installed.\n#\n# For each host, a name and connection type must be specified. Only one host can\n# have the connection type set to 'local' or 'localhost'.\n#\n# If the host is a remote machine, the path to the SSH key file must be specified.\n# Note that connections to remote hosts support only passwordless certificates.\n#\n# The host can also be marked as default, i.e. if no specific host is specified\n# for an instance (in the cluster.nodes section), it will be installed on a\n# default host. If none of the hosts are marked as default, the first host in the\n# list is used as the default host.\n#\nhosts:\n  - name: localhost # (3)!\n    default: true # (4)!\n    connection:\n      type: local # (5)!\n  - name: remote-server-1\n    connection:\n      type: remote\n      user: myuser # (6)!\n      ip: 10.10.40.143 # (7)!\n      ssh:\n        port: 1234  # (8)!\n        verify: true # (9)!\n        keyfile: \"~/.ssh/id_rsa_server1\" # (10)!\n  - name: remote-server-2\n    connection:\n      type: remote\n      user: myuser\n      ip: 10.10.40.144\n      ssh:\n        keyfile: \"~/.ssh/id_rsa_server2\"\n    mainResourcePoolPath: \"/var/lib/libvirt/pools/\" # (11)!\n    dataResourcePools: # (12)!\n      - name: data-pool # (13)!\n        path: \"/mnt/data/pool\" # (14)!\n      - name: backup-pool\n        path: \"/mnt/backup/pool\"\n\n#\n# The 'cluster' section of the configuration contains general data about the\n# cluster, the nodes that are part of the cluster, and the cluster's network.\n#\ncluster:\n  name: my-k8s-cluster # (15)!\n  network:\n    mode: bridge # (16)!\n    cidr: 10.10.64.0/24 # (17)!\n    gateway: 10.10.64.1 # (18)!\n    bridge: br0 # (19)!\n  nodeTemplate:\n    user: k8s\n    ssh:\n      privateKeyPath: \"~/.ssh/id_rsa_test\"\n      addToKnownHosts: true\n    os:\n      distro: ubuntu\n      networkInterface: ens3 # (20)!\n    dns: # (21)!\n      - 1.1.1.1\n      - 1.0.0.1\n    updateOnBoot: true\n  nodes:\n    loadBalancer:\n      vip: 10.10.64.200 # (22)!\n      virtualRouterId: 13 # (23)!\n      forwardPorts:\n        - name: http\n          port: 80\n        - name: https\n          port: 443\n          target: all\n        - name: sample\n          port: 60000\n          targetPort: 35000\n      default: # (24)!\n        ram: 4 # GiB\n        cpu: 1 # vCPU\n        mainDiskSize: 16 # GiB\n      instances:\n        - id: 1\n          ip: 10.10.64.5 # (25)!\n          mac: \"52:54:00:00:00:40\" # (26)!\n          ram: 8 # (27)!\n          cpu: 8 # (28)!\n          host: remote-server-1 # (29)!\n        - id: 2\n          ip: 10.10.64.6\n          mac: \"52:54:00:00:00:41\"\n          host: remote-server-2\n        - id: 3\n          ip: 10.10.64.7\n          mac: \"52:54:00:00:00:42\"\n          # If host is not specifed, VM will be installed on the default host.\n          # If default host is not specified, VM will be installed on the first\n          # host in the list.\n    master:\n      default:\n        ram: 8\n        cpu: 2\n        mainDiskSize: 256\n      instances:\n          # IMPORTANT: There should be odd number of master nodes.\n        - id: 1\n          host: remote-server-1\n        - id: 2\n          host: remote-server-2\n        - id: 3\n          host: localhost\n    worker:\n      default:\n        ram: 16\n        cpu: 4\n        labels: # (30)!\n          custom-label: \"This is a custom default node label\"\n          
node-role.kubernetes.io/node: # (31)!\n      instances:\n        - id: 1\n          ip: 10.10.64.101\n          cpu: 8\n          ram: 64\n          host: remote-server-1\n        - id: 2\n          ip: 10.10.64.102\n          dataDisks: # (32)!\n            - name: rook-disk # (33)!\n              pool: data-pool # (34)!\n              size: 128 # GiB\n            - name: test-disk\n              pool: data-pool\n              size: 128\n        - id: 3\n          ip: 10.10.64.103\n          ram: 64\n          labels:\n            custom-label: \"Overwrite default node label\" # (35)!\n            instance-label: \"Node label, only for this instance\"\n        - id: 4\n          host: remote-server-2\n        - id: 5\n\n#\n# The 'kubernetes' section contains Kubernetes related properties,\n# such as version and network plugin.\n#\nkubernetes:\n  version: v1.27.5\n  networkPlugin: calico\n  dnsMode: coredns # (36)!\n  other:\n    copyKubeconfig: false\n\n#\n# The 'addons' section contains the configuration of the applications that\n# will be installed on the Kubernetes cluster as part of the cluster setup.\n#\naddons:\n  kubespray:\n    # Sample Nginx ingress controller deployment\n    ingress_nginx_enabled: true\n    ingress_nginx_namespace: \"ingress-nginx\"\n    ingress_nginx_insecure_port: 80\n    ingress_nginx_secure_port: 443\n    # Sample MetalLB deployment\n    metallb_enabled: true\n    metallb_speaker_enabled: true\n    metallb_ip_range:\n      - \"10.10.9.201-10.10.9.254\"\n    metallb_pool_name: \"default\"\n    metallb_auto_assign: true\n    metallb_version: v0.12.1\n    metallb_protocol: \"layer2\"\n
  1. This allows you to set a custom URL that targets a clone or fork of the Kubitect project.

  2. Kubitect version.

  3. Custom host name. It is used to link instances to a specific host.

  4. Makes the host a default host. This means that if no host is specified for the node instance, the instance will be linked to the default host.

  5. Connection type can be either local or remote.

    If it is set to remote, at least the following fields must be set:

    • user
    • ip
    • ssh.keyfile
  6. Remote host user that is used to connect to the remote hypervisor. This user must be added to the libvirt group.

  7. IP address of the remote host.

  8. Overrides the default SSH port (22).

  9. If true, the SSH host is verified, which means the host must be present in the SSH known hosts file.

  10. Path to the passwordless SSH key used to connect to the remote host.

  11. The path to the main resource pool defines where the virtual machine disk images are stored. These disks contain the virtual machine operating system, so it is recommended to place them on SSD storage.

  12. List of other data resource pools where virtual disks can be created.

  13. Custom data resource pool name. Must be unique among all data resource pools on a specific host.

  14. Path where the data resource pool is created. All data disks linked to that resource pool will be created under this path.

  15. Cluster name used as a prefix for the various components.

  16. Network mode. Possible values are:

    • bridge - Uses a predefined bridge interface. This mode is mandatory for deployments across multiple hosts.
    • nat - Creates a virtual network with the IP range defined in network.cidr.
    • route
  17. Network CIDR represents the network IP together with the network mask. In nat mode, CIDR is used for the new network. In bridge mode, CIDR represents the current local area network (LAN).

  18. The network gateway IP address. If omitted, the first client IP from the network CIDR is used as the gateway.

  19. Bridge represents the bridge interface on the hosts. This field is mandatory if the network mode is set to bridge. If the network mode is set to nat, this field can be omitted.

  20. Specifies the network interface used by the virtual machine. In general, this option can be omitted.

    If omitted, a network interface from the distro preset (/terraform/defaults.yaml) is used.

  21. Sets a custom DNS list for all nodes. If omitted, the network gateway is also used as the DNS server.

  22. Virtual (floating) IP shared between load balancers.

  23. Virtual router ID that is set in the Keepalived configuration when a virtual IP is used. By default, it is set to 51. If multiple clusters are created, make sure the ID is unique for each cluster.

  24. Default values apply to all virtual machines (VMs) of the same type.

  25. Static IP address of the virtual machine. If omitted, a DHCP lease is requested.

  26. Static MAC address. If omitted, a MAC address is generated.

  27. Overrides default RAM value for this node.

  28. Overrides default CPU value for this node.

  29. Name of the host where the instance should be created. If omitted, the default host is used.

  30. Default worker node labels.

  31. This label sets the worker node role to node.

  32. Overrides default data disks for this node.

  33. Custom data disk name. It must be unique among all data disks for a specific instance.

  34. Resource pool name that must be defined on the host on which the instance will be deployed.

  35. Node labels defined for specific instances take precedence over default labels with the same key, so this label overrides the default label.

  36. Currently, the only DNS mode supported is CoreDNS.

"},{"location":"examples/full-example/#full-detailed-example","title":"Full (detailed) example","text":""},{"location":"examples/ha-cluster/","title":"Highly available (HA) cluster","text":"

This example demonstrates how to use Kubitect to create a highly available Kubernetes cluster that spans five hosts. This topology offers redundancy in case of node or host failures.

The final topology of the deployed Kubernetes cluster is shown in the figure below.

"},{"location":"examples/ha-cluster/#highly-available-cluster","title":"Highly available cluster","text":""},{"location":"examples/ha-cluster/#step-1-hosts-configuration","title":"Step 1: Hosts configuration","text":"

This example involves the deployment of a Kubernetes cluster on five remote physical hosts. The local network subnet used in this setup is 10.10.0.0/20, with the gateway IP address set to 10.10.0.1. All hosts are connected to the same local network and feature a pre-configured bridge interface, named br0.

Tip

This example uses preconfigured bridges on each host to expose nodes on the local network.

Network bridge example shows how to configure a bridge interface using Netplan.

Furthermore, we have configured a user named kubitect on each host that can be accessed over SSH, without a password, using the same SSH key stored on our local machine. The key is located at ~/.ssh/id_rsa_ha.
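
If such passwordless access is not yet in place, it can usually be prepared by copying the public key to each host; for example, for the first host in this setup (assuming the key pair ~/.ssh/id_rsa_ha already exists):

ssh-copy-id -i ~/.ssh/id_rsa_ha.pub kubitect@10.10.0.5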

To deploy the Kubernetes cluster, each host's details must be specified in the Kubitect configuration file. In this case, the host configurations differ only in the host's name and IP address.

ha.yaml
hosts:\n  - name: host1\n    connection:\n      type: remote\n      user: kubitect\n      ip: 10.10.0.5\n      ssh:\n        keyfile: \"~/.ssh/id_rsa_ha\"\n  - name: host2\n    connection:\n      type: remote\n      user: kubitect\n      ip: 10.10.0.6\n      ssh:\n        keyfile: \"~/.ssh/id_rsa_ha\"\n  - name: host3\n    connection:\n      type: remote\n      user: kubitect\n      ip: 10.10.0.10\n      ssh:\n        keyfile: \"~/.ssh/id_rsa_ha\"\n  - name: host4\n    connection:\n      type: remote\n      user: kubitect\n      ip: 10.10.0.11\n      ssh:\n        keyfile: \"~/.ssh/id_rsa_ha\"\n  - name: host5\n    connection:\n      type: remote\n      user: kubitect\n      ip: 10.10.0.12\n      ssh:\n        keyfile: \"~/.ssh/id_rsa_ha\"\n
"},{"location":"examples/ha-cluster/#step-2-network-configuration","title":"Step 2: Network configuration","text":"

In the network configuration section, we specify the bridge interface that is preconfigured on each host and the CIDR of our local network.

The code snippet below illustrates the network configuration used in this example:

ha.yaml
cluster:\n  network:\n    mode: bridge\n    cidr: 10.10.0.0/20\n    bridge: br0\n
"},{"location":"examples/ha-cluster/#step-3-load-balancer-configuration","title":"Step 3: Load balancer configuration","text":"

Placing a load balancer in front of the control plane, as demonstrated in the Multi-master cluster example, enables traffic distribution across all healthy control plane nodes. However, having only one load balancer in the cluster would create a single point of failure, potentially rendering the control plane inaccessible if the load balancer fails.

To prevent this scenario, it is necessary to configure at least two load balancers. One of the load balancers serves as the primary, while the other functions as a failover (backup). The purpose of the failover load balancer is to serve incoming requests using the same virtual (shared) IP address if the primary load balancer fails, as depicted in the figure below.

To achieve failover, a virtual router redundancy protocol (VRRP) is used. In practice, each load balancer has its own IP address, but the primary load balancer also serves requests on the virtual IP address, which is not bound to any network interface.

The primary load balancer sends periodic heartbeats to the backup load balancers to indicate that it is still active. If the backup load balancer does not receive a heartbeat within a specified time period, it assumes that the primary load balancer has failed. The new primary load balancer is then elected based on the available load balancers' priorities. Once the new primary load balancer is selected, it starts serving requests on the same virtual IP address as the previous primary load balancer.

The following code snippet shows the configuration of two load balancers and virtual IP for their failover. The load balancers are also configured to be deployed on different hosts for additional redundancy.

ha.yaml
cluster:\n  nodes:\n    loadBalancer:\n      vip: 10.10.13.200\n      instances:\n        - id: 1\n          ip: 10.10.13.201\n          host: host1\n        - id: 2\n          ip: 10.10.13.202\n          host: host2\n
"},{"location":"examples/ha-cluster/#step-4-nodes-configuration","title":"Step 4: Nodes configuration","text":"

The configuration of the nodes is straightforward and similar to the load balancer instance configuration. Each node instance is configured with an ID, an IP address, and a host affinity.

ha.yaml
cluster:\n  nodes:\n    master:\n      instances:\n        - id: 1\n          ip: 10.10.13.10\n          host: host3\n        - id: 2\n          ip: 10.10.13.11\n          host: host4\n        - id: 3\n          ip: 10.10.13.12\n          host: host5\n    worker:\n      instances:\n        - id: 1\n          ip: 10.10.13.20\n          host: host3\n        - id: 2\n          ip: 10.10.13.21\n          host: host4\n        - id: 3\n          ip: 10.10.13.22\n          host: host5\n
"},{"location":"examples/ha-cluster/#step-41-optional-data-disks-configuration","title":"Step 4.1 (Optional): Data disks configuration","text":"

Kubitect automatically creates a main (system) disk for each configured node. The main disk contains the operating system and the installed Kubernetes components.

Additional disks, also known as data disks, can be created to expand the node's storage capacity. This feature is particularly useful when using storage solutions like Rook, which can utilize empty disks to provide reliable distributed storage.

Configuring data disks in Kubitect requires a separate configuration for each node instance, with each disk connected to a resource pool. The resource pool can be either a main resource pool or a custom data resource pool. In this example, we have defined a custom data resource pool named data-pool on each host that runs worker nodes.

ha.yaml
hosts:\n  - name: host3\n    ...\n    dataResourcePools:\n      - name: data-pool\n        path: /mnt/libvirt/pools/\n\ncluster:\n  nodes:\n    worker:\n      - id: 1\n        ...\n        host: host3\n        dataDisks:\n          - name: rook\n            pool: data-pool\n            size: 512 # GiB\n
Final cluster configuration ha.yaml
hosts:\n  - name: host1\n    connection:\n      type: remote\n      user: kubitect\n      ip: 10.10.0.5\n      ssh:\n        keyfile: \"~/.ssh/id_rsa_ha\"\n  - name: host2\n    connection:\n      type: remote\n      user: kubitect\n      ip: 10.10.0.6\n      ssh:\n        keyfile: \"~/.ssh/id_rsa_ha\"\n  - name: host3\n    connection:\n      type: remote\n      user: kubitect\n      ip: 10.10.0.10\n      ssh:\n        keyfile: \"~/.ssh/id_rsa_ha\"\n    dataResourcePools:\n      - name: data-pool\n        path: /mnt/libvirt/pools/\n  - name: host4\n    connection:\n      type: remote\n      user: kubitect\n      ip: 10.10.0.11\n      ssh:\n        keyfile: \"~/.ssh/id_rsa_ha\"\n    dataResourcePools:\n      - name: data-pool\n        path: /mnt/libvirt/pools/\n  - name: host5\n    connection:\n      type: remote\n      user: kubitect\n      ip: 10.10.0.12\n      ssh:\n        keyfile: \"~/.ssh/id_rsa_ha\"\n    dataResourcePools:\n      - name: data-pool\n        path: /mnt/libvirt/pools/\n\ncluster:\n  name: kubitect-ha\n  network:\n    mode: bridge\n    cidr: 10.10.0.0/20\n    bridge: br0\n  nodeTemplate:\n    user: k8s\n    updateOnBoot: true\n    ssh:\n      addToKnownHosts: true\n    os:\n      distro: ubuntu\n  nodes:\n    loadBalancer:\n      vip: 10.10.13.200\n      instances:\n        - id: 1\n          ip: 10.10.13.201\n          host: host1\n        - id: 2\n          ip: 10.10.13.202\n          host: host2\n    master:\n      instances:\n        - id: 1\n          ip: 10.10.13.10\n          host: host3\n        - id: 2\n          ip: 10.10.13.11\n          host: host4\n        - id: 3\n          ip: 10.10.13.12\n          host: host5\n    worker:\n      instances:\n        - id: 1\n          ip: 10.10.13.20\n          host: host3\n          dataDisks:\n            - name: rook\n              pool: data-pool\n              size: 512\n        - id: 2\n          ip: 10.10.13.21\n          host: host4\n          dataDisks:\n            - name: rook\n              pool: data-pool\n              size: 512\n        - id: 3\n          ip: 10.10.13.22\n          host: host5\n          dataDisks:\n            - name: rook\n              pool: data-pool\n              size: 512\n\nkubernetes:\n  version: v1.27.5\n
"},{"location":"examples/ha-cluster/#step-5-applying-the-configuration","title":"Step 5: Applying the configuration","text":"

To deploy a cluster, apply the configuration file:

kubitect apply --config ha.yaml\n
"},{"location":"examples/multi-master-cluster/","title":"Multi-master cluster","text":"

This example demonstrates how to use Kubitect to set up a Kubernetes cluster with 3 master and 3 worker nodes.

By configuring multiple master nodes, the control plane continues to operate normally even if some master nodes fail. Since Kubitect deploys clusters with a stacked control plane, redundancy is ensured as long as at least (n/2)+1 master nodes remain available. For example, a cluster with 3 master nodes tolerates the failure of a single master node, since at least 2 of them must stay available to maintain quorum.

The final topology of the deployed Kubernetes cluster is depicted in the figure below.

Note

This example skips the explanation of some common configurations such as hosts, network, and node template, as they are already covered in detail in the Getting started (step-by-step) guide.

Preset available

To export the preset configuration, run: kubitect export preset example-multi-master

"},{"location":"examples/multi-master-cluster/#multi-master-cluster","title":"Multi-master cluster","text":""},{"location":"examples/multi-master-cluster/#step-1-cluster-configuration","title":"Step 1: Cluster configuration","text":"

When deploying a multi-master Kubernetes cluster using Kubitect, it is necessary to configure at least one load balancer. The load balancer is responsible for distributing traffic evenly across the control plane nodes. In the event of a particular master node failure, the load balancer automatically detects the unhealthy node and routes traffic only to the remaining healthy nodes, ensuring the continuous availability of the Kubernetes cluster.

The figure below provides a visual representation of this approach.

To create such a cluster, all we need to do is specify the desired node instances and configure one load balancer. The control plane will be accessible through the load balancer's IP address.

multi-master.yaml
cluster:\n  ...\n  nodes:\n    loadBalancer:\n      instances:\n        - id: 1\n          ip: 192.168.113.100\n    master:\n      instances: # (1)!\n        - id: 1\n          ip: 192.168.113.10\n        - id: 2\n          ip: 192.168.113.11\n        - id: 3\n          ip: 192.168.113.12\n    worker:\n      instances:\n        - id: 1\n          ip: 192.168.113.20\n        - id: 2\n          ip: 192.168.113.21\n        - id: 3\n          ip: 192.168.113.22\n
  1. Size of the control plane (number of master nodes) must be odd.

Kubitect automatically detects the load balancer instance in the configuration file and installs the HAProxy load balancer on an additional virtual machine. The load balancer is then configured to distribute traffic received on port 6443, which is the Kubernetes API server port, to all control plane nodes.
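
As a hedged sanity check after the cluster is deployed, the Kubernetes API can be queried directly through the load balancer's IP address (192.168.113.100 in this example); depending on the cluster's anonymous-access settings, the version endpoint should respond regardless of which control plane node serves the request:

curl -k https://192.168.113.100:6443/version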

Final cluster configuration multi-master.yaml
hosts:\n  - name: localhost\n    connection:\n      type: local\n\ncluster:\n  name: k8s-cluster\n  network:\n    mode: nat\n    cidr: 192.168.113.0/24\n  nodeTemplate:\n    user: k8s\n    updateOnBoot: true\n    ssh:\n      addToKnownHosts: true\n    os:\n      distro: ubuntu\n  nodes:\n    loadBalancer:\n      instances:\n        - id: 1\n          ip: 192.168.113.100\n    master:\n      instances:\n        - id: 1\n          ip: 192.168.113.10\n        - id: 2\n          ip: 192.168.113.11\n        - id: 3\n          ip: 192.168.113.12\n    worker:\n      instances:\n        - id: 1\n          ip: 192.168.113.20\n        - id: 2\n          ip: 192.168.113.21\n        - id: 3\n          ip: 192.168.113.22\n\nkubernetes:\n  version: v1.27.5\n  networkPlugin: calico\n
"},{"location":"examples/multi-master-cluster/#step-2-applying-the-configuration","title":"Step 2: Applying the configuration","text":"

To deploy a cluster, apply the configuration file:

kubitect apply --config multi-master.yaml\n
"},{"location":"examples/multi-worker-cluster/","title":"Multi-worker cluster","text":"

This example demonstrates how to use Kubitect to set up a Kubernetes cluster consisting of one master and three worker nodes. The final topology of the deployed Kubernetes cluster is shown in the figure below.

Note

This example skips the explanation of some common configurations such as hosts, network, and node template, as they are already covered in detail in the Getting started (step-by-step) guide.

Preset available

To export the preset configuration, run: kubitect export preset example-multi-worker

"},{"location":"examples/multi-worker-cluster/#multi-worker-cluster","title":"Multi-worker cluster","text":""},{"location":"examples/multi-worker-cluster/#step-1-cluster-configuration","title":"Step 1: Cluster configuration","text":"

You can easily create a cluster with multiple worker nodes by specifying them in the configuration file. For this example, we have included three worker nodes, but you can add as many as you like to suit your needs.

multi-worker.yaml
cluster:\n  ...\n  nodes:\n    master:\n      instances:\n        - id: 1\n          ip: 192.168.113.10 # (1)!\n    worker:\n      instances:\n        - id: 1\n          ip: 192.168.113.21\n        - id: 7\n          ip: 192.168.113.27\n        - id: 99\n
  1. Static IP address of the node. If the ip property is omitted, the DHCP lease is requested when the cluster is created.
Final cluster configuration multi-worker.yaml
hosts:\n  - name: localhost\n    connection:\n      type: local\n\ncluster:\n  name: k8s-cluster\n  network:\n    mode: nat\n    cidr: 192.168.113.0/24\n  nodeTemplate:\n    user: k8s\n    updateOnBoot: true\n    ssh:\n      addToKnownHosts: true\n    os:\n      distro: ubuntu\n  nodes:\n    master:\n      instances:\n        - id: 1\n          ip: 192.168.113.10\n    worker:\n      instances:\n        - id: 1\n          ip: 192.168.113.21\n        - id: 7\n          ip: 192.168.113.27\n        - id: 99\n\nkubernetes:\n  version: v1.27.5\n  networkPlugin: calico\n
"},{"location":"examples/multi-worker-cluster/#step-2-applying-the-configuration","title":"Step 2: Applying the configuration","text":"

To deploy a cluster, apply the configuration file:

kubitect apply --config multi-worker.yaml\n
"},{"location":"examples/network-bridge/","title":"Network bridge","text":"

Bridged networks allow virtual machines to connect directly to the LAN. To use Kubitect with bridged network mode, a bridge interface must be preconfigured on the host machine. This example shows how to configure a simple bridge interface using Netplan.

"},{"location":"examples/network-bridge/#network-bridge","title":"Network bridge","text":""},{"location":"examples/network-bridge/#step-1-preconfigure-the-bridge-on-the-host","title":"Step 1 - (Pre)configure the bridge on the host","text":"

Before the network bridge can be created, the name of the host's network interface is required. This interface will be used by the bridge.

To print the available network interfaces of the host, use the following command.

nmcli device | grep ethernet\n

As with the previous command, network interfaces can be printed using the ifconfig or ip commands. Note that these commands output all interfaces, including virtual ones.

ifconfig -a\n# or\nip a\n

Once you have obtained the name of the host's network interface (in our case eth0), you can create a bridge interface (in our case br0) by creating the file /etc/netplan/bridge0.yaml with the following content:

network:\n  version: 2\n  renderer: networkd\n  ethernets:\n    eth0: {} # (1)!\n  bridges:\n    br0: # (2)!\n      interfaces:\n        - eth0\n      dhcp4: true\n      dhcp6: false\n      addresses: # (3)!\n        - 10.10.0.17\n

  1. The host's existing Ethernet interface to be enslaved by the bridge.

  2. Custom name of the bridge interface.

  3. Optionally, a static IP address can be set for the bridge interface.

Tip

See the official Netplan configuration examples for more advanced configurations.

Validate that the configuration is correctly parsed by Netplan.

sudo netplan generate\n

Apply the configuration.

sudo netplan apply\n

"},{"location":"examples/network-bridge/#step-2-disable-netfilter-on-the-host","title":"Step 2 - Disable netfilter on the host","text":"

The final step is to prevent packets traversing the bridge from being sent to iptables for processing.

 cat >> /etc/sysctl.conf <<EOF\n net.bridge.bridge-nf-call-ip6tables = 0\n net.bridge.bridge-nf-call-iptables = 0\n net.bridge.bridge-nf-call-arptables = 0\n EOF\n\n sysctl -p /etc/sysctl.conf\n

Tip

For more information, see the libvirt documentation.

"},{"location":"examples/network-bridge/#step-3-set-up-a-cluster-over-bridged-network","title":"Step 3 - Set up a cluster over bridged network","text":"

In the cluster configuration file, set the following variables:

  • cluster.network.mode to bridge,
  • cluster.network.cidr to the network CIDR of the LAN and
  • cluster.network.bridge to the name of the bridge you have created (br0 in our case)
cluster:\n  network:\n    mode: bridge\n    cidr: 10.10.13.0/24\n    bridge: br0\n...\n
"},{"location":"examples/rook-cluster/","title":"Rook cluster","text":"

This example demonstrates how to set up distributed storage with Rook. To achieve distributed storage, we add an additional data disk to each virtual machine, as depicted in the figure below. This additional data disk is utilized by Rook to provide reliable and scalable distributed storage solutions for the Kubernetes cluster.

"},{"location":"examples/rook-cluster/#rook-cluster","title":"Rook cluster","text":""},{"location":"examples/rook-cluster/#basic-setup","title":"Basic setup","text":""},{"location":"examples/rook-cluster/#step-1-define-data-resource-pool","title":"Step 1: Define data resource pool","text":"

To configure distributed storage with Rook, the data disks must be attached to the virtual machines. By default, each data disk is created in the main resource pool. However, it is also possible to configure additional resource pools and associate data disks with them later, depending on your requirements.

In this example, we define an additional resource pool named rook-pool.

rook-sample.yaml

hosts:\n  - name: localhost\n    connection:\n      type: local\n    dataResourcePools:\n      - name: rook-pool\n

"},{"location":"examples/rook-cluster/#step-2-attach-data-disks","title":"Step 2: Attach data disks","text":"

After the data resource pool is configured, we are ready to allocate some data disks to the virtual machines.

rook-sample.yaml
cluster:\n  nodes:\n    worker:\n      instances:\n        - id: 1\n          dataDisks:\n            - name: rook\n              pool: rook-pool # (1)!\n              size: 256\n        - id: 2\n          dataDisks:\n            - name: rook\n              pool: rook-pool\n              size: 256\n        - id: 3\n        - id: 4\n          dataDisks:\n            - name: rook\n              pool: rook-pool\n              size: 256\n            - name: test\n              pool: rook-pool\n              size: 32\n
  1. To create data disks in the main resource pool, either omit the pool property or set its value to main.
"},{"location":"examples/rook-cluster/#step-3-enable-rook-addon","title":"Step 3: Enable Rook addon","text":"

After configuring the disks and attaching them to the virtual machines, activating the Rook add-on is all that is required to utilize the distributed storage solution.

rook-sample.yaml
addons:\n  rook:\n    enabled: true\n

By default, Rook resources are provisioned on all worker nodes in the Kubernetes cluster, without any constraints. However, this behavior can be restricted using node selectors, which are explained later in the guide.

Final cluster configuration rook-sample.yaml
hosts:\n  - name: localhost\n    connection:\n      type: local\n    dataResourcePools:\n      - name: rook-pool\n\ncluster:\n  name: rook-cluster\n  network:\n    mode: nat\n    cidr: 192.168.113.0/24\n  nodeTemplate:\n    user: k8s\n    updateOnBoot: true\n    ssh:\n      addToKnownHosts: true\n    os:\n      distro: ubuntu\n  nodes:\n    master:\n      instances:\n        - id: 1\n    worker:\n      instances:\n        - id: 1\n          dataDisks:\n            - name: rook\n              pool: rook-pool\n              size: 256\n        - id: 2\n          dataDisks:\n            - name: rook\n              pool: rook-pool\n              size: 256\n        - id: 3\n        - id: 4\n          dataDisks:\n            - name: rook\n              pool: rook-pool\n              size: 256\n            - name: test\n              pool: rook-pool\n              size: 32\n\nkubernetes:\n  version: v1.27.5\n\naddons:\n  rook:\n    enabled: true\n
"},{"location":"examples/rook-cluster/#step-4-apply-the-configuration","title":"Step 4: Apply the configuration","text":"
kubitect apply --config rook-sample.yaml\n
"},{"location":"examples/rook-cluster/#node-selector","title":"Node selector","text":"

The node selector is a dictionary of labels and their potential values. The node selector restricts on which nodes Rook can be deployed, by selecting only those nodes that match all the specified labels.

"},{"location":"examples/rook-cluster/#step-1-set-node-labels","title":"Step 1: Set node labels","text":"

To use the node selector effectively, you should give your nodes custom labels.

In this example, we label all worker nodes with the label rook. To ensure that scaling the cluster does not subsequently affect Rook, we set the label's value to false by default. Only the nodes where Rook should be deployed are labeled rook: true, as shown in the figure below.

The following configuration snippet shows how to set a default label and override it for a particular instance.

rook-sample.yaml
cluster:\n  nodes:\n    worker:\n      default:\n        labels:\n          rook: false\n      instances:\n        - id: 1\n          labels:\n            rook: true # (1)!\n        - id: 2\n          labels:\n            rook: true\n        - id: 3\n          labels:\n            rook: true\n        - id: 4\n
  1. By default, the label rook: false is set for all worker nodes. Setting the label rook: true for this particular instance overrides the default label.
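
Once the configuration is applied, the node labels can be verified with kubectl; this is only a sketch, assuming the kubeconfig has been exported as shown in the other examples:

kubectl get nodes --show-labels --kubeconfig kubeconfig.yaml
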
"},{"location":"examples/rook-cluster/#step-2-configure-a-node-selector","title":"Step 2: Configure a node selector","text":"

So far, we have labeled all worker nodes, but labeling alone is not enough to prevent Rook from being deployed on all of them. To restrict the nodes on which Rook resources can be deployed, we need to configure a node selector.

We want to deploy Rook on the nodes labeled with the label rook: true, as shown in the figure below.

The following configuration snippet shows how to configure the node selector mentioned above.

rook-sample.yaml
addons:\n  rook:\n    enabled: true\n    nodeSelector:\n      rook: true\n
Final cluster configuration rook-sample.yaml
hosts:\n  - name: localhost\n    connection:\n      type: local\n    dataResourcePools:\n      - name: rook-pool\n\ncluster:\n  name: rook-cluster\n  network:\n    mode: nat\n    cidr: 192.168.113.0/24\n  nodeTemplate:\n    user: k8s\n    updateOnBoot: true\n    ssh:\n      addToKnownHosts: true\n    os:\n      distro: ubuntu\n  nodes:\n    master:\n      instances:\n        - id: 1\n    worker:\n      default:\n        labels:\n          rook: false\n      instances:\n        - id: 1\n          labels:\n            rook: true\n          dataDisks:\n            - name: rook\n              pool: rook-pool\n              size: 256\n        - id: 2\n          labels:\n            rook: true\n          dataDisks:\n            - name: rook\n              pool: rook-pool\n              size: 256\n        - id: 3\n          labels:\n            rook: true\n        - id: 4\n          dataDisks:\n            - name: rook\n              pool: rook-pool\n              size: 256\n            - name: test\n              pool: rook-pool\n              size: 32\n\nkubernetes:\n  version: v1.27.5\n\naddons:\n  rook:\n    enabled: true\n    nodeSelector:\n      rook: true\n
"},{"location":"examples/rook-cluster/#step-3-apply-the-configuration","title":"Step 3: Apply the configuration","text":"

To deploy a cluster, apply the configuration file:

kubitect apply --config rook-sample.yaml\n
"},{"location":"examples/single-node-cluster/","title":"Single node cluster","text":"

This example demonstrates how to set up a single-node Kubernetes cluster using Kubitect. In a single-node cluster, only one master node needs to be configured. The topology of the Kubernetes cluster deployed in this guide is shown below.

Note

This example skips the explanation of some common configurations such as hosts, network, and node template, as they are already covered in detail in the Getting started (step-by-step) guide.

Preset available

To export the preset configuration, run: kubitect export preset example-single-node

"},{"location":"examples/single-node-cluster/#single-node-cluster","title":"Single node cluster","text":""},{"location":"examples/single-node-cluster/#step-1-create-the-configuration","title":"Step 1: Create the configuration","text":"

To initialize a single-node Kubernetes cluster, you need to specify a single master node in the cluster configuration file.

single-node.yaml
cluster:\n  ...\n  nodes:\n    master:\n      instances:\n        - id: 1\n          ip: 192.168.113.10 # (1)!\n
  1. Static IP address of the node. If the ip property is omitted, the DHCP lease is requested when the cluster is created.

When no worker nodes are specified, master nodes are labeled as schedulable, which makes them behave as both master and worker nodes. This means that the single master node in the cluster will perform both the control plane functions of a Kubernetes master node and the data plane functions of a worker node.
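
A simple way to confirm this behavior after deployment is to check the node taints; for a schedulable single master, no NoSchedule taint should be present (a hedged check, assuming the kubeconfig has been exported as in the quick start guide):

kubectl describe nodes --kubeconfig kubeconfig.yaml | grep Taints

# Taints:             <none>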

Final cluster configuration single-node.yaml
hosts:\n  - name: localhost\n    connection:\n      type: local\n\ncluster:\n  name: k8s-cluster\n  network:\n    mode: nat\n    cidr: 192.168.113.0/24\n  nodeTemplate:\n    user: k8s\n    updateOnBoot: true\n    ssh:\n      addToKnownHosts: true\n    os:\n      distro: ubuntu\n  nodes:\n    master:\n      default:\n        ram: 4\n        cpu: 2\n        mainDiskSize: 32\n      instances:\n        - id: 1\n          ip: 192.168.113.10\n\nkubernetes:\n  version: v1.27.5\n  networkPlugin: calico\n
"},{"location":"examples/single-node-cluster/#step-2-applying-the-configuration","title":"Step 2: Applying the configuration","text":"

To deploy a cluster, apply the configuration file:

kubitect apply --config single-node.yaml\n
"},{"location":"getting-started/getting-started/","title":"Getting started (step-by-step)","text":"

In the quick start guide, we learned how to create a Kubernetes cluster using a preset configuration. Now, we will explore how to create a customized cluster topology that meets your specific requirements.

This step-by-step guide will walk you through the process of creating a custom cluster configuration file from scratch and using it to create a functional Kubernetes cluster with one master and one worker node. By following the steps outlined in this guide, you will have a Kubernetes cluster up and running in no time.

"},{"location":"getting-started/getting-started/#getting-started","title":"Getting Started","text":""},{"location":"getting-started/getting-started/#step-1-ensure-all-requirements-are-met","title":"Step 1 - Ensure all requirements are met","text":"

Before progressing with this guide, take a minute to ensure that all of the requirements are met. Afterwards, simply create a new YAML file and open it in a text editor of your choice.

"},{"location":"getting-started/getting-started/#step-2-prepare-hosts-configuration","title":"Step 2 - Prepare hosts configuration","text":"

In the cluster configuration file, the first step is to define hosts. Hosts represent target servers that can be either local or remote machines.

Localhost

When setting up the cluster on your local host, where the command line tool is installed, be sure to specify a host with a connection type set to local.

kubitect.yaml
hosts:\n  - name: localhost # (1)!\n    connection:\n      type: local\n
  1. Custom unique name of the host.

Remote host

In case the cluster is deployed on a remote host, you will be required to provide the IP address of the remote machine along with the SSH credentials.

kubitect.yaml
hosts:\n  - name: my-remote-host\n    connection:\n      type: remote\n      user: myuser\n      ip: 10.10.40.143 # (1)!\n      ssh:\n        keyfile: \"~/.ssh/id_rsa_server1\" # (2)!\n
  1. IP address of the remote host.

  2. Path to the password-less SSH key file required for establishing a connection with the remote host.

Throughout this guide, only localhost will be used.

"},{"location":"getting-started/getting-started/#step-3-define-cluster-infrastructure","title":"Step 3 - Define cluster infrastructure","text":"

The second part of the configuration file consists of the cluster infrastructure. In this part, all cluster nodes are defined along with their properties such as operating system, CPU cores, amount of RAM and so on.

Below is an image that visualizes the components of the final cluster.

Let's shift our attention to the following configuration:

kubitect.yaml
cluster:\n  name: k8s-cluster\n  network:\n    ...\n  nodeTemplate:\n    ...\n  nodes:\n    ...\n

As we can see, the cluster infrastructure section consists of the cluster name and three subsections:

  • cluster.name

    The cluster name is used as a prefix for each resource created by Kubitect. It's an essential property that helps identify and manage resources created by Kubitect.

  • cluster.network

    The network subsection holds information about the network properties of the cluster. It defines the IP address range, the mode of networking, and other network-specific properties that apply to the entire cluster.

  • cluster.nodeTemplate

    The node template subsection contains properties that apply to all nodes in the cluster, such as the operating system, SSH user, and SSH private key.

  • cluster.nodes

    The nodes subsection defines each node in our cluster. This subsection includes information such as the node name, node type, and other node-specific properties.

Now that we have a general idea of the cluster infrastructure configuration, let's examine each of these subsections in more detail to understand how to define them properly and configure a Kubernetes cluster using Kubitect.

"},{"location":"getting-started/getting-started/#step-31-cluster-network","title":"Step 3.1 - Cluster network","text":"

In the network subsection of the Kubernetes configuration file, we need to define the network that our cluster will use. Currently, there are two supported network modes - NAT or bridge.

The nat network mode creates a virtual network that performs network address translation. This mode allows the use of IP address ranges that do not exist within our local area network (LAN).

On the other hand, the bridge network mode uses a predefined bridge interface, allowing virtual machines to connect directly to the LAN. This mode is mandatory when the cluster spreads over multiple hosts.

For the sake of simplicity, this tutorial will use the NAT mode as it does not require a preconfigured bridge interface.

kubitect.yaml
cluster:\n  ...\n  network:\n    mode: nat\n    cidr: 192.168.113.0/24\n

The above configuration will instruct Kubitect to create a virtual network that uses the 192.168.113.0/24 IP range.
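
Since Kubitect provisions this virtual network through libvirt, it can also be inspected on the host with standard libvirt tooling once the cluster is created; this is only a hedged illustration, and the exact network name depends on the cluster name:

virsh net-list --all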

"},{"location":"getting-started/getting-started/#step-32-node-template","title":"Step 3.2 - Node template","text":"

The nodeTemplate subsection allows you to define general properties for all nodes in the cluster. While there are no required fields, there are several useful properties you may want to include.

  • user

    This property specifies the name of the user that will be created on all virtual machines and used for SSH. (default: k8s)

  • os.distro

    This property defines the operating system for the nodes. By default, the nodes use the latest Ubuntu 22.04 release. To explore other available distributions, please refer to the OS Distribution section in the node template of our user guide.

  • ssh.addToKnownHosts

    When this property is set to true, all nodes will be added to SSH known hosts. If you later destroy the cluster, these nodes will also be removed from the known hosts.

  • updateOnBoot

    This property determines whether virtual machines are updated at first boot.

To illustrate, let's set these nodeTemplate properties in our configuration file:

kubitect.yaml
cluster:\n  ...\n  nodeTemplate:\n    user: k8s\n    updateOnBoot: true\n    ssh:\n      addToKnownHosts: true\n    os:\n      distro: ubuntu22\n
"},{"location":"getting-started/getting-started/#step-33-cluster-nodes","title":"Step 3.3 - Cluster nodes","text":"

In the nodes subsection, we define all nodes that will form the cluster. Each node can be defined as one of the following three types:

  • worker

    A worker node runs the applications and workloads that are deployed in the cluster. It communicates with the master node to receive instructions on how to schedule and run the containers.

  • master

    Master nodes are responsible for managing and coordinating the worker nodes in the cluster. Therefore, each cluster must contain at least one master node.

    Since the etcd key-value datastore is also present on these nodes, the number of master nodes must be odd. For more information, see the etcd FAQ.

  • loadBalancer

    These nodes serve as internal load balancers that expose the Kubernetes control plane at a single endpoint. They are essential when more than one master node is configured in the cluster.

This guide is focused on deploying a Kubernetes cluster with only one master node, which eliminates the need for internal load balancers. However, if you are interested in creating a multi-master or high-availability (HA) cluster, please refer to the corresponding examples.

To better understand this part, let's take a look at an example configuration:

kubitect.yaml
cluster:\n  ...\n  nodes:\n    master:\n      default: # (1)!\n        ram: 4 # (2)!\n        cpu: 2 # (3)!\n        mainDiskSize: 32 # (4)!\n      instances: # (5)!\n        - id: 1 # (6)!\n          ip: 192.168.113.10 # (7)!\n    worker:\n      default:\n        ram: 8\n        cpu: 2\n        mainDiskSize: 32\n      instances:\n        - id: 1\n          ip: 192.168.113.21\n          ram: 4 # (8)!\n
  1. Default properties are applied to all nodes of the same type, which in this case are the master nodes. They are particularly useful to quickly configure multiple nodes of the same type.

  2. The amount of RAM allocated to the master nodes (in GiB).

  3. The number of virtual CPUs assigned to each master node.

  4. The size of the virtual disk attached to each master node (in GiB).

  5. A list of master node instances.

  6. The instance ID is the only required field that must be specified for each instance.

  7. A static IP address set for this particular instance. If the ip property is omitted, the node requests a DHCP lease during creation.

  8. In this example, the amount of RAM allocated to the worker node instance is set to 4 GiB, which overwrites the default value of 8 GiB.

"},{"location":"getting-started/getting-started/#step-34-kubernetes-properties","title":"Step 3.4 - Kubernetes properties","text":"

The final section of the cluster configuration contains the Kubernetes properties, such as the version and network plugin.

kubitect.yaml
kubernetes:\n  version: v1.27.5\n  networkPlugin: calico\n
"},{"location":"getting-started/getting-started/#step-4-create-the-cluster","title":"Step 4 - Create the cluster","text":"

Below is the final configuration for our Kubernetes cluster:

Final cluster configuration kubitect.yaml
hosts:\n  - name: localhost\n    connection:\n      type: local\n\ncluster:\n  name: k8s-cluster\n  network:\n    mode: nat\n    cidr: 192.168.113.0/24\n  nodeTemplate:\n    user: k8s\n    updateOnBoot: true\n    ssh:\n      addToKnownHosts: true\n    os:\n      distro: ubuntu22\n  nodes:\n    master:\n      default:\n        ram: 4\n        cpu: 2\n        mainDiskSize: 32\n      instances:\n        - id: 1\n          ip: 192.168.113.10\n    worker:\n      default:\n        ram: 8\n        cpu: 2\n        mainDiskSize: 32\n      instances:\n        - id: 1\n          ip: 192.168.113.21\n          ram: 4\n\nkubernetes:\n  version: v1.27.5\n  networkPlugin: calico\n

To create the cluster, apply the configuration file to Kubitect:

kubitect apply --config kubitect.yaml\n

Tip

If you encounter any issues during the installation process, please refer to the troubleshooting page first.

After applying the configuration file to Kubitect, a directory for the created Kubernetes cluster is generated and stored in Kubitect's home directory. The default location of the home directory is ~/.kubitect, and it has the following structure.

~/.kubitect\n   \u251c\u2500\u2500 clusters\n   \u2502   \u251c\u2500\u2500 k8s-cluster\n   \u2502   \u251c\u2500\u2500 my-cluster\n   \u2502   \u2514\u2500\u2500 ...\n   \u2514\u2500\u2500 share\n       \u251c\u2500\u2500 terraform\n       \u2514\u2500\u2500 venv\n

The clusters directory contains a subdirectory for each Kubernetes cluster that you have created using Kubitect. Each subdirectory is named after the cluster, for example k8s-cluster. The configuration files for each cluster are stored in these directories.

The share directory contains files and directories that are shared between different cluster installations.

All created clusters can be listed at any time using the list subcommand.

kubitect list clusters\n\n# Clusters:\n#   - k8s-cluster (active)\n#   - my-cluster (active)\n

"},{"location":"getting-started/getting-started/#step-5-test-the-cluster","title":"Step 5 - Test the cluster","text":"

Once you have successfully installed a Kubernetes cluster, the Kubeconfig file can be found in the cluster's directory. However, you will most likely want to export the Kubeconfig to a separate file:

kubitect export kubeconfig --cluster k8s-cluster > kubeconfig.yaml\n

This will create a file named kubeconfig.yaml in your current directory. Finally, to confirm that the cluster is ready, you can list its nodes using the kubectl command:

kubectl get nodes --kubeconfig kubeconfig.yaml\n

Congratulations, you have completed the getting started guide.

"},{"location":"getting-started/installation/","title":"Installation","text":""},{"location":"getting-started/installation/#installation","title":"Installation","text":""},{"location":"getting-started/installation/#install-kubitect-cli-tool","title":"Install Kubitect CLI tool","text":"

Download Kubitect binary file from the release page.

curl -o kubitect.tar.gz -L https://dl.kubitect.io/linux/amd64/latest\n

Unpack tar.gz file.

tar -xzf kubitect.tar.gz\n

Install the Kubitect command line tool by placing the Kubitect binary file in the /usr/local/bin directory.

sudo mv kubitect /usr/local/bin/\n

Note

The download URL is a combination of the operating system type, system architecture and version of Kubitect (https://dl.kubitect.io/<os>/<arch>/<version>).

All releases can be found on the GitHub release page.
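
For example, to download a specific release instead of the latest one, substitute the placeholders in the URL. This is an illustrative sketch assuming the v3.2.2 release for Linux on amd64; adjust the values to match your system and the desired version:

curl -o kubitect.tar.gz -L https://dl.kubitect.io/linux/amd64/v3.2.2\n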

Verify the installation by checking the Kubitect version.

kubitect --version\n\n# kubitect version v3.2.2\n

"},{"location":"getting-started/installation/#enable-shell-autocomplete","title":"Enable shell autocomplete","text":"

Tip

To list all supported shells, run: kubitect completion -h

For shell specific instructions run: kubitect completion shell -h

Bash / Zsh

This script depends on the bash-completion package. If it is not installed already, you can install it via your OS's package manager.

To load completions in your current shell session:

source <(kubitect completion bash)\n

To load completions for every new session, execute once:

Linux:

kubitect completion bash > /etc/bash_completion.d/kubitect\n

macOS:

kubitect completion bash > $(brew --prefix)/etc/bash_completion.d/kubitect\n

If shell completion is not already enabled in your environment, you will need to enable it. You can execute the following once:

echo \"autoload -U compinit; compinit\" >> ~/.zshrc\n

To load completions in your current shell session:

source <(kubitect completion zsh); compdef _kubitect kubitect\n

To load completions for every new session, execute once:

Linux:

kubitect completion zsh > \"${fpath[1]}/_kubitect\"\n

macOS:

kubitect completion zsh > $(brew --prefix)/share/zsh/site-functions/_kubitect\n

"},{"location":"getting-started/quick-start/","title":"Quick start","text":"

In this quick start guide, we will show you how to use the Kubitect command line tool to deploy a simple Kubernetes cluster.

To get started, you will need to apply a cluster configuration file to the Kubitect command line tool. You can either prepare this file manually, as explained in our Getting started guide, or use one of the available presets.

For the purposes of this quick start guide, we will be using a getting-started preset, which defines a cluster with one master and one worker node. The resulting infrastructure is shown in the image below.

"},{"location":"getting-started/quick-start/#quick-start","title":"Quick start","text":""},{"location":"getting-started/quick-start/#step-1-create-a-kubernetes-cluster","title":"Step 1 - Create a Kubernetes cluster","text":"

Export the getting-started preset:

kubitect export preset --name getting-started > cluster.yaml\n

Then, apply the exported configuration file to Kubitect:

kubitect apply --config cluster.yaml\n

That's it! The cluster, named k8s-cluster, should be up and running in approximately 10 minutes.

"},{"location":"getting-started/quick-start/#step-2-export-kubeconfig","title":"Step 2 - Export kubeconfig","text":"

After successfully installing the Kubernetes cluster, a Kubeconfig file will be created within the cluster's directory. To export the Kubeconfig to a custom file, use the following command:

kubitect export kubeconfig --cluster k8s-cluster > kubeconfig.yaml\n
"},{"location":"getting-started/quick-start/#step-3-test-the-cluster","title":"Step 3 - Test the cluster","text":"

To test that the cluster is up and running, display all cluster nodes using the exported Kubeconfig and the kubectl command:

kubectl get nodes --kubeconfig kubeconfig.yaml\n

Congratulations, you have successfully deployed a Kubernetes cluster using Kubitect!

"},{"location":"getting-started/requirements/","title":"Requirements","text":"

On the local host (where the Kubitect command-line tool is installed), the following requirements must be met:

Git

Python >= 3.8

Python virtualenv

Password-less SSH key for each remote host
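
A quick way to confirm the tooling requirements is to print the versions of the installed tools. This is only an illustrative check and assumes the tools are available in your PATH:

git --version\npython3 --version\nvirtualenv --version\n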

On hosts where a Kubernetes cluster will be deployed using Kubitect, the following requirements must be met:

A libvirt virtualization API

A running hypervisor that is supported by libvirt (e.g. KVM)

How to install KVM?

To install the KVM (Kernel-based Virtual Machine) hypervisor and libvirt, use apt or yum to install the following packages:

  • qemu-kvm
  • libvirt-clients
  • libvirt-daemon
  • libvirt-daemon-system
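
For example, on Debian-based distributions these packages can be installed with apt, as in the sketch below; package names may differ slightly between distributions:

sudo apt update\nsudo apt install -y qemu-kvm libvirt-clients libvirt-daemon libvirt-daemon-system\n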

After the installation, add your user to the kvm group in order to access the kvm device:

sudo usermod -aG kvm $USER\n
"},{"location":"getting-started/requirements/#requirements","title":"Requirements","text":""},{"location":"getting-started/other/local-development/","title":"Local development","text":"

This document shows how to build a CLI tool manually and how to use the project without creating any files outside the project's directory.

"},{"location":"getting-started/other/local-development/#local-development","title":"Local development","text":""},{"location":"getting-started/other/local-development/#prerequisites","title":"Prerequisites","text":"
  • Git
  • Go 1.18 or greater
"},{"location":"getting-started/other/local-development/#step-1-clone-the-project","title":"Step 1: Clone the project","text":"

First, clone the project.

git clone https://github.com/MusicDin/kubitect\n

Afterwards, move into the cloned project.

cd kubitect\n

"},{"location":"getting-started/other/local-development/#step-2-build-kubitect-cli-tool","title":"Step 2: Build Kubitect CLI tool","text":"

The Kubitect CLI tool can be manually built using Go. Running the following command will produce a kubitect binary file.

go build .\n

To make the binary file globally accessible, move it to the /usr/local/bin/ directory.

sudo mv kubitect /usr/local/bin/kubitect\n

"},{"location":"getting-started/other/local-development/#step-3-local-development","title":"Step 3: Local development","text":"

By default, Kubitect creates and manages clusters in the Kubitect's home directory (~/.kubitect). However, for development purposes, it is often more convenient to have all resources created in the current directory.

If you want to create a new cluster in the current directory, you can use the --local flag when applying the configuration. When you create a cluster using the --local flag, its name will be prefixed with local. This prefix is added to prevent any conflicts that might arise when creating new virtual resources.

kubitect apply --local\n

The resulting cluster will be created in ./.kubitect/clusters/local-<cluster-name> directory.

"},{"location":"getting-started/other/troubleshooting/","title":"Troubleshooting","text":"

Is your issue not listed here?

If the troubleshooting page is missing an error you encountered, please report it on GitHub by opening an issue. By doing so, you will help improve the project and help others find the solution to the same problem faster.

"},{"location":"getting-started/other/troubleshooting/#troubleshooting","title":"Troubleshooting","text":""},{"location":"getting-started/other/troubleshooting/#general-errors","title":"General errors","text":""},{"location":"getting-started/other/troubleshooting/#virtualenv-not-found","title":"Virtualenv not found","text":"Error Explanation Solution

Error

Output: /bin/sh: 1: virtualenv: not found

/bin/sh: 2: ansible-playbook: not found

Explanation

The error indicates that the virtualenv is not installed.

Solution

There are many ways to install virtualenv. For all installation options, you can refer to their official documentation - Virtualenv installation.

For example, virtualenv can be installed using pip.

First install pip.

sudo apt install python3-pip\n

Then install virtualenv using pip3.

pip3 install virtualenv\n

"},{"location":"getting-started/other/troubleshooting/#kvmlibvirt-errors","title":"KVM/Libvirt errors","text":""},{"location":"getting-started/other/troubleshooting/#failed-to-connect-socket-no-such-file-or-directory","title":"Failed to connect socket (No such file or directory)","text":"Error Explanation Solution

Error

Error: virError(Code=38, Domain=7, Message='Failed to connect socket to '/var/run/libvirt/libvirt-sock': No such file or directory')

Explanation

The problem may occur when libvirt is not started.

Solution

Make sure that the libvirt service is running:

sudo systemctl status libvirtd\n

If the libvirt service is not running, start it:

sudo systemctl start libvirtd\n

Optional: Start the libvirt service automatically at boot time:

sudo systemctl enable libvirtd\n

"},{"location":"getting-started/other/troubleshooting/#failed-to-connect-socket-permission-denied","title":"Failed to connect socket (Permission denied)","text":"Error Explanation Solution

Error

Error: virError(Code=38, Domain=7, Message='Failed to connect socket to '/var/run/libvirt/libvirt-sock': Permission denied')

Explanation

The error indicates that either the libvirtd service is not running or the current user is not in the libvirt (or kvm) group.

Solution

If the libvirtd service is not running, start it:

sudo systemctl start libvirtd\n

Add the current user to the libvirt and kvm groups if needed:

# Add current user to groups\nsudo adduser $USER libvirt\nsudo adduser $USER kvm\n\n# Verify groups are added\nid -nG\n\n# Reload user session\n

"},{"location":"getting-started/other/troubleshooting/#error-creating-libvirt-domain","title":"Error creating libvirt domain","text":"Error Explanation Solution

Error

Error: Error creating libvirt domain: \u2026 Could not open '/tmp/terraform_libvirt_provider_images/image.qcow2': Permission denied')

Explanation

The error indicates that the file cannot be created in the specified location due to missing permissions.

  • Make sure the directory exists.
  • Make sure the directory of the file that is being denied has appropriate user permissions.
  • Optionally, the QEMU security driver can be disabled.

Solution

Make sure the security_driver in /etc/libvirt/qemu.conf is set to none instead of selinux. This line is commented out by default, so you should uncomment it if needed:

# /etc/libvirt/qemu.conf\n\n...\nsecurity_driver = \"none\"\n...\n

Do not forget to restart the libvirt service after making the changes:

sudo systemctl restart libvirtd\n

"},{"location":"getting-started/other/troubleshooting/#libvirt-domain-already-exists","title":"Libvirt domain already exists","text":"Error Explanation Solution

Error

Error: Error defining libvirt domain: virError(Code=9, Domain=20, Message='operation failed: domain 'your-domain' already exists with uuid '...')

Explanation

The error indicates that the libvirt domain (virtual machine) already exists.

Solution

The resource you are trying to create already exists. Make sure you destroy the resource:

virsh destroy your-domain\nvirsh undefine your-domain\n

You can verify that the domain was successfully removed:

virsh dominfo --domain your-domain\n

If the domain was successfully removed, the output should look something like this:

error: failed to get domain 'your-domain'

"},{"location":"getting-started/other/troubleshooting/#libvirt-volume-already-exists","title":"Libvirt volume already exists","text":"Error Explanation Solution

Error

Error: Error creating libvirt volume: virError(Code=90, Domain=18, Message='storage volume 'your-volume.qcow2' exists already')

and / or

Error:Error creating libvirt volume for cloudinit device cloud-init.iso: virError(Code=90, Domain=18, Message='storage volume 'cloud-init.iso' exists already')

Explanation

The error indicates that the specified volume already exists.

Solution

Volumes created by Libvirt are still attached to the images, which prevents a new volume from being created with the same name. Therefore, these volumes must be removed:

virsh vol-delete cloud-init.iso --pool your_resource_pool

and / or

virsh vol-delete your-volume.qcow2 --pool your_resource_pool

"},{"location":"getting-started/other/troubleshooting/#libvirt-storage-pool-already-exists","title":"Libvirt storage pool already exists","text":"Error Explanation Solution

Error

Error: Error storage pool 'your-pool' already exists

Explanation

The error indicates that the libvirt storage pool already exists.

Solution

Remove the existing libvirt storage pool.

virsh pool-destroy your-pool && virsh pool-undefine your-pool

"},{"location":"getting-started/other/troubleshooting/#failed-to-apply-firewall-rules","title":"Failed to apply firewall rules","text":"Error Explanation Solution

Error

Error: internal error: Failed to apply firewall rules /sbin/iptables -w --table filter --insert LIBVIRT_INP --in-interface virbr2 --protocol tcp --destination-port 67 --jump ACCEPT: iptables: No chain/target/match by that name.

Explanation

Libvirt was already running when the firewall (usually FirewallD) was started or installed. Therefore, the libvirtd service must be restarted to detect the changes.

Solution

Restart the libvirtd service:

sudo systemctl restart libvirtd\n

"},{"location":"getting-started/other/troubleshooting/#failed-to-remove-storage-pool","title":"Failed to remove storage pool","text":"Error Explanation Solution

Error

Error: error deleting storage pool: failed to remove pool '/var/lib/libvirt/images/k8s-cluster-main-resource-pool': Directory not empty

Explanation

The pool cannot be deleted because there are still some volumes in the pool. Therefore, the volumes should be removed before the pool can be deleted.

Solution

  1. Make sure the pool is running.

    virsh pool-start --pool k8s-cluster-main-resource-pool\n

  2. List volumes in the pool.

    virsh vol-list --pool k8s-cluster-main-resource-pool\n\n#  Name         Path\n# -------------------------------------------------------------------------------------\n#  base_volume  /var/lib/libvirt/images/k8s-cluster-main-resource-pool/base_volume\n

  3. Delete listed volumes from the pool.

    virsh vol-delete --pool k8s-cluster-main-resource-pool --vol base_volume\n

  4. Destroy and undefine the pool.

    virsh pool-destroy --pool k8s-cluster-main-resource-pool\nvirsh pool-undefine --pool k8s-cluster-main-resource-pool\n

"},{"location":"getting-started/other/troubleshooting/#haproxy-load-balancer-errors","title":"HAProxy load balancer errors","text":""},{"location":"getting-started/other/troubleshooting/#random-haproxy-503-bad-gateway","title":"Random HAProxy (503) bad gateway","text":"Error Explanation Solution

Error

HAProxy returns a random HTTP 503 (Bad gateway) error.

Explanation

More than one HAProxy process is listening on the same port.

Solution 1

For example, if an error is thrown when accessing port 80, check which processes are listening on port 80 on the load balancer VM:

netstat -lnput | grep 80\n\n# Proto Recv-Q Send-Q Local Address           Foreign Address   State       PID/Program name\n# tcp        0      0 192.168.113.200:80      0.0.0.0:*         LISTEN      1976/haproxy\n# tcp        0      0 192.168.113.200:80      0.0.0.0:*         LISTEN      1897/haproxy\n

If you see more than one process, kill the unnecessary process:

kill 1976\n

Note: You can kill all HAProxy processes and only one will be automatically recreated.

Solution 2

Check that the HAProxy configuration file (config/haproxy/haproxy.cfg) does not contain two frontends bound to the same port.
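
As an illustration, a misconfigured file might contain two frontends bound to the same port, similar to the hypothetical snippet below (the frontend names are made up for the example):

# config/haproxy/haproxy.cfg\n\nfrontend http-in\n    bind *:80\n    ...\n\nfrontend ingress-http\n    bind *:80   # conflicts with the frontend above\n    ...\n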

"},{"location":"user-guide/before-you-begin/","title":"Before you begin","text":"

The user guide is divided into three subsections: Cluster Management, Configuration and Reference. The Cluster Management subsection introduces the operations that can be performed over the cluster. The Configuration subsection contains explanations of the configurable Kubitect properties. Finally, the Reference subsection contains a configuration and CLI reference.

The following symbol conventions are used throughout the user guide:

  • - Indicates the Kubitect version in which the property was either added or last modified.
  • - Indicates that the property is required in every valid configuration.
  • - Indicates the default value of the property.
  • - Indicates that the feature or property is experimental (not yet stable). This means that its implementation may change drastically over time and that its activation may lead to unexpected behavior.
"},{"location":"user-guide/before-you-begin/#before-you-begin","title":"Before you begin","text":""},{"location":"user-guide/configuration/addons/","title":"Addons","text":""},{"location":"user-guide/configuration/addons/#addons","title":"Addons","text":""},{"location":"user-guide/configuration/addons/#configuration","title":"Configuration","text":""},{"location":"user-guide/configuration/addons/#kubespray-addons","title":"Kubespray addons","text":"

v2.1.0

Kubespray provides a variety of configurable addons to enhance the functionality of Kubernetes. Some popular addons include the Ingress-NGINX controller and MetalLB.

Kubespray addons can be configured under the addons.kubespray property. It's important to note that the Kubespray addons are configured in the same way as they would be for Kubespray itself, as Kubitect copies the provided configuration into Kubespray's group variables during cluster creation.

The full range of available addons can be explored in the Kubespray addons sample, which is available on GitHub. Most addons are also documented in the official Kubespray documentation.

addons:\n  kubespray:\n\n    # Nginx ingress controller deployment\n    ingress_nginx_enabled: true\n    ingress_nginx_namespace: \"ingress-nginx\"\n    ingress_nginx_insecure_port: 80\n    ingress_nginx_secure_port: 443\n\n    # MetalLB deployment\n    metallb_enabled: true\n    metallb_speaker_enabled: true\n    metallb_ip_range:\n      - \"10.10.9.201-10.10.9.254\"\n    metallb_pool_name: \"default\"\n    metallb_auto_assign: true\n    metallb_version: v0.12.1\n    metallb_protocol: \"layer2\"\n
"},{"location":"user-guide/configuration/addons/#rook-addon","title":"Rook addon","text":"

v2.2.0 Experimental

Rook is an orchestration tool that integrates Ceph with Kubernetes. Ceph is a highly reliable and scalable storage solution, and Rook simplifies its management by automating the deployment, scaling and management of Ceph clusters.

To enable Rook in Kubitect, set addons.rook.enabled property to true.

addons:\n  rook:\n    enabled: true\n

Note that Rook is deployed only on worker nodes. When a cluster is created without worker nodes, Kubitect attempts to install Rook on the master nodes. In addition to enabling the Rook addon, at least one data disk must be attached to a node suitable for Rook deployment. If Kubitect determines that no data disks are available for Rook, it will skip installing Rook.
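
As an illustrative sketch combining properties shown elsewhere in this guide, the following configuration enables the Rook addon and attaches a data disk to a worker node so that Rook has storage to work with (the disk name and size are arbitrary examples):

addons:\n  rook:\n    enabled: true\n\ncluster:\n  nodes:\n    worker:\n      instances:\n        - id: 1\n          dataDisks:\n            - name: rook-volume\n              size: 256\n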

"},{"location":"user-guide/configuration/addons/#node-selector","title":"Node selector","text":"

The node selector is a dictionary of node labels used to determine which nodes are eligible for Rook deployment. If a node does not match all of the specified node labels, Rook resources cannot be deployed on that node and disks attached to that node are not used for distributed storage.

addons:\n  rook:\n    nodeSelector:\n      rook: true\n
"},{"location":"user-guide/configuration/addons/#version","title":"Version","text":"

By default, Kubitect uses the latest (master) version of Rook. If you want to use a specific version of Rook, you can set the addons.rook.version property to the desired version.

addons:\n  rook:\n    version: v1.11.3\n
"},{"location":"user-guide/configuration/cluster-name/","title":"Cluster name","text":""},{"location":"user-guide/configuration/cluster-name/#cluster-metadata","title":"Cluster metadata","text":""},{"location":"user-guide/configuration/cluster-name/#configuration","title":"Configuration","text":""},{"location":"user-guide/configuration/cluster-name/#cluster-name","title":"Cluster name","text":"

v2.0.0 Required

The cluster name must be defined in the Kubitect configuration, as it acts as a prefix for all cluster resources.

cluster:\n  name: my-cluster\n

For instance, each virtual machine name is generated as <cluster.name>-<node.type>-<node.instance.id>. Therefore, the name of the virtual machine for the worker node with ID 1 would be my-cluster-worker-1.

Note

The cluster name cannot contain the prefix local, as it is reserved for local clusters (created with the --local flag).

"},{"location":"user-guide/configuration/cluster-network/","title":"Cluster network","text":"

The network section of the Kubitect configuration file defines the properties of the network to be created, or of the existing network to which the cluster nodes are to be assigned.

"},{"location":"user-guide/configuration/cluster-network/#cluster-network","title":"Cluster network","text":""},{"location":"user-guide/configuration/cluster-network/#configuration","title":"Configuration","text":""},{"location":"user-guide/configuration/cluster-network/#network-mode","title":"Network mode","text":"

v2.0.0 Required

Kubitect supports two network modes: NAT and bridge.

cluster:\n  network:\n    mode: nat\n
"},{"location":"user-guide/configuration/cluster-network/#nat-mode","title":"NAT mode","text":"

In NAT (Network Address Translation) mode, the libvirt virtual network is created for the cluster, which reduces the need for manual configurations. However, it's limited to a single host, i.e., a single physical server.

"},{"location":"user-guide/configuration/cluster-network/#bridge-mode","title":"Bridge mode","text":"

In bridge mode, a real host network device is shared with the virtual machines, allowing each virtual machine to bind to any available IP address on the local network, just like a physical computer. This approach makes the virtual machine visible on the network, enabling the creation of clusters across multiple physical servers.

To use bridged networks, you need to preconfigure the bridge interface on each target host. This is necessary because each environment is unique. For instance, you might use link aggregation (also known as link bonding or teaming), which cannot be detected automatically and therefore requires manual configuration. The Network bridge example provides instructions on how to create a bridge interface with netplan and configure Kubitect to use it.

"},{"location":"user-guide/configuration/cluster-network/#network-cidr","title":"Network CIDR","text":"

v2.0.0 Required

The network CIDR (Classless Inter-Domain Routing) represents the network in the form of <network_ip>/<network_prefix_bits>. All IP addresses specified in the cluster section of the configuration must be within this network range, including the network gateway, node instances, floating IP of the load balancer, and so on.

In NAT network mode, the network CIDR defines an unused private network that is created. In bridge mode, the network CIDR should specify the network to which the cluster belongs.

cluster:\n  network:\n    cidr: 192.168.113.0/24 # (1)!\n
  1. In nat mode - Any unused private network within a local network.

    In bridge mode - A network to which the cluster belongs.

"},{"location":"user-guide/configuration/cluster-network/#network-gateway","title":"Network gateway","text":"

v2.0.0

The network gateway, also known as the default gateway, represents the IP address of the router. By default, it doesn't need to be specified, as the first client IP in the network range is used as the gateway address. However, if the gateway IP differs from this, it must be specified manually.

cluster:\n  network:\n    cidr: 10.10.0.0/20\n    gateway: 10.10.0.230 # (1)!\n
  1. If this option is omitted, 10.10.0.1 is used as the gateway IP (first client IP in the network range).
"},{"location":"user-guide/configuration/cluster-network/#network-bridge","title":"Network bridge","text":"

v2.0.0 Default: virbr0

The network bridge determines the bridge interface that virtual machines connect to.

In NAT network mode, a virtual network bridge interface is created on the host. These bridges are usually prefixed with vir, such as virbr44. If you omit this option, the virtual bridge name is automatically determined by libvirt. Alternatively, you can specify the name to be used for the virtual bridge.

In bridge network mode, the network bridge should be the name of the preconfigured bridge interface, such as br0.

cluster:\n  network:\n    bridge: br0\n
"},{"location":"user-guide/configuration/cluster-network/#example-usage","title":"Example usage","text":""},{"location":"user-guide/configuration/cluster-network/#virtual-nat-network","title":"Virtual NAT network","text":"

If the cluster is created on a single host, you can use the NAT network mode. In this case, you only need to specify the CIDR of the new network in addition to the network mode.

cluster:\n  network:\n    mode: nat\n    cidr: 192.168.113.0/24\n
"},{"location":"user-guide/configuration/cluster-network/#bridged-network","title":"Bridged network","text":"

To make the cluster nodes visible on the local network as physical machines or to create the cluster across multiple hosts, you must use bridge network mode. Additionally, you need to specify the network CIDR of an existing network along with the preconfigured host bridge interface.

cluster:\n  network:\n    mode: bridge\n    cidr: 10.10.64.0/24\n    bridge: br0\n
"},{"location":"user-guide/configuration/cluster-node-template/","title":"Cluster node template","text":"

The node template section of the cluster configuration defines the properties of all nodes in the cluster. This includes the properties of the operating system (OS), DNS, and the virtual machine user.

"},{"location":"user-guide/configuration/cluster-node-template/#cluster-node-template","title":"Cluster node template","text":""},{"location":"user-guide/configuration/cluster-node-template/#configuration","title":"Configuration","text":""},{"location":"user-guide/configuration/cluster-node-template/#virtual-machine-user","title":"Virtual machine user","text":"

v2.0.0 Default: k8s

The user property defines the name of the user created on each virtual machine. This user is used to access the virtual machines during cluster configuration. If you omit the user property, a user named k8s is created on all virtual machines. You can also use this user later to access each virtual machine via SSH.

cluster:\n  nodeTemplate:\n    user: kubitect\n
"},{"location":"user-guide/configuration/cluster-node-template/#operating-system-os","title":"Operating system (OS)","text":""},{"location":"user-guide/configuration/cluster-node-template/#os-distribution","title":"OS distribution","text":"

v2.1.0 Default: ubuntu

The operating system for virtual machines can be specified in the node template. By default, the Ubuntu distribution is installed on all virtual machines.

You can select a desired distribution by setting the os.distro property.

cluster:\n  nodeTemplate:\n    os:\n      distro: debian # (1)!\n
  1. By default, ubuntu is used.

The available operating system distribution presets are:

  • ubuntu - Latest Ubuntu 22.04 release. (default)
  • ubuntu22 - Ubuntu 22.04 release as of 2023-10-26.
  • ubuntu20 - Ubuntu 20.04 release as of 2023-10-11.
  • debian - Latest Debian 11 release.
  • debian11 - Debian 11 release as of 2023-10-13.
  • rocky - Latest Rocky 9 release.
  • rocky9 - Rocky 9.2 release as of 2023-05-13.
  • centos - Latest CentOS Stream 9 release.
  • centos9 - CentOS Stream 9 release as of 2023-10-23.

Important

Rocky Linux and CentOS Stream both require the x86-64-v2 instruction set to run. If the CPU mode property is not set to host-passthrough, host-model, or maximum, the virtual machine may not be able to boot properly.
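
For instance, a minimal sketch that selects the Rocky distribution together with a compatible CPU mode, combining the os.distro property above with the cpuMode property described later in this section:

cluster:\n  nodeTemplate:\n    os:\n      distro: rocky\n    cpuMode: host-passthrough\n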

Known issues

CentOS Stream images already include the qemu-guest-agent package, which reports IP addresses of the virtual machines before they are leased from a DHCP server. This can cause issues during infrastructure provisioning if the virtual machines are not configured with static IP addresses.

Where are images downloaded from?

Images are sourced from the official cloud image repository for the corresponding Linux distribution.

  • Ubuntu: Ubuntu cloud image repository
  • Debian: Debian cloud image repository
  • CentOS: CentOS cloud image repository
  • Rocky: Rocky cloud image repository
"},{"location":"user-guide/configuration/cluster-node-template/#os-source","title":"OS source","text":"

v2.1.0

If the presets do not meet your needs, you can use a custom Ubuntu or Debian image by specifying the image source. The source of an image can be either a local path on your system or a URL pointing to the image download.

cluster:\n  nodeTemplate:\n    os:\n      distro: ubuntu\n      source: https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img\n
"},{"location":"user-guide/configuration/cluster-node-template/#network-interface","title":"Network interface","text":"

v2.1.0

Generally, this setting does not have to be set, as Kubitect will correctly evaluate the network interface name to be used on each virtual machine.

If you want to instruct Kubitect to use a specific network interface on the virtual machine, you can set its name using the os.networkInterface property.

cluster:\n  nodeTemplate:\n    os:\n      networkInterface: ens3\n
"},{"location":"user-guide/configuration/cluster-node-template/#custom-dns-list","title":"Custom DNS list","text":"

v2.1.0

The configuration of Domain Name Servers (DNS) in the node template allows for customizing the DNS resolution of all virtual machines in the cluster. By default, the DNS list contains only the network gateway.

To add custom DNS servers, specify them using the dns property in the node template.

cluster:\n  nodeTemplate:\n    dns: # (1)!\n      - 1.1.1.1\n      - 1.0.0.1\n
  1. IP addresses 1.1.1.1 and 1.0.0.1 represent CloudFlare's primary and secondary public DNS resolvers, respectively.
"},{"location":"user-guide/configuration/cluster-node-template/#cpu-mode","title":"CPU mode","text":"

v2.2.0 Default: custom

The cpuMode property in the node template can be used to configure a guest CPU to closely resemble the host CPU.

cluster:\n  nodeTemplate:\n    cpuMode: host-passthrough\n

Currently, there are several CPU modes available:

  • custom (default)
  • host-model
  • host-passthrough
  • maximum

In short, the host-model mode uses the same CPU model as the host, while the host-passthrough mode provides full CPU feature set to the guest virtual machine, but may impact its live migration. The maximum mode selects the CPU with the most available features. For a more detailed explanation of the available CPU modes and their usage, please refer to the libvirt documentation.

Tip

The host-model and host-passthrough modes make sense only when a virtual machine can run directly on the host CPUs (e.g. virtual machines of type kvm). The actual host CPU is irrelevant for virtual machines with emulated virtual CPUs (e.g. virtual machines of type qemu).

"},{"location":"user-guide/configuration/cluster-node-template/#update-on-boot","title":"Update on boot","text":"

v2.2.0 Default: true

By default, Kubitect updates all virtual machine packages on boot. To disable this behavior, set updateOnBoot to false.

cluster:\n  nodeTemplate:\n    updateOnBoot: false\n
"},{"location":"user-guide/configuration/cluster-node-template/#ssh-options","title":"SSH options","text":""},{"location":"user-guide/configuration/cluster-node-template/#custom-ssh-certificate","title":"Custom SSH certificate","text":"

v2.0.0

Kubitect automatically generates SSH certificates before deploying the cluster to ensure secure communication between nodes. The generated certificates can be found in the config/.ssh/ directory inside the cluster directory.

If you prefer to use a custom SSH certificate, you can specify the local path to the private key. Note that the public key must also be present in the same directory with the .pub suffix.

cluster:\n  nodeTemplate:\n    ssh:\n      privateKeyPath: \"~/.ssh/id_rsa_test\"\n

Important

SSH certificates must be passwordless; otherwise, Kubespray will fail to configure the cluster.

"},{"location":"user-guide/configuration/cluster-node-template/#adding-nodes-to-the-known-hosts","title":"Adding nodes to the known hosts","text":"

v2.0.0 Default: false

Kubitect allows you to add all created virtual machines to SSH known hosts and remove them once the cluster is destroyed. To enable this behavior, set the addToKnownHosts property to true.

cluster:\n  nodeTemplate:\n    ssh:\n      addToKnownHosts: true\n
"},{"location":"user-guide/configuration/cluster-nodes/","title":"Cluster nodes","text":""},{"location":"user-guide/configuration/cluster-nodes/#cluster-nodes","title":"Cluster nodes","text":""},{"location":"user-guide/configuration/cluster-nodes/#background","title":"Background","text":"

Kubitect allows configuration of three distinct node types: worker nodes, master nodes (control plane), and load balancers.

"},{"location":"user-guide/configuration/cluster-nodes/#worker-nodes","title":"Worker nodes","text":"

Worker nodes in a Kubernetes cluster are responsible for executing the application workloads of the system. The addition of more worker nodes to the cluster enhances redundancy in case of worker node failure. However, allocating more resources to each worker node provides less overhead and more resources for the actual applications.

Kubitect does not offer automatic scaling of worker nodes based on resource demand. However, you can easily add or remove worker nodes by applying a modified cluster configuration.

"},{"location":"user-guide/configuration/cluster-nodes/#master-nodes","title":"Master nodes","text":"

The master node plays a vital role in a Kubernetes cluster as it manages the overall state of the system and coordinates the workloads running on the worker nodes. Therefore, it is essential to configure at least one master node for every cluster.

Please note that Kubitect currently supports only a stacked control plane where etcd key-value stores are deployed on control plane nodes. To ensure the best possible fault tolerance, it is important to configure an odd number of control plane nodes. For more information, please refer to the etcd FAQ.
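
For illustration, a highly available control plane could be configured with an odd number of master instances, as in the minimal sketch below (the IDs are arbitrary; see the Cluster nodes configuration for all available properties):

cluster:\n  nodes:\n    master:\n      instances:\n        - id: 1\n        - id: 2\n        - id: 3\n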

"},{"location":"user-guide/configuration/cluster-nodes/#load-balancer-nodes","title":"Load balancer nodes","text":"

In a Kubernetes cluster with multiple control plane nodes, it is necessary to configure at least one load balancer. A load balancer distributes incoming network traffic across multiple control plane nodes, ensuring the cluster operates normally even if any control plane node fails.

However, configuring only one load balancer represents a single point of failure for the cluster. If it fails, incoming traffic will not be distributed to the control plane nodes, potentially resulting in downtime. Therefore, configuring multiple load balancers is essential to ensure high availability for the cluster.

"},{"location":"user-guide/configuration/cluster-nodes/#nodes-configuration-structure","title":"Nodes configuration structure","text":"

The configuration structure for the nodes is as follows:

cluster:\n  nodes:\n    masters:\n      ...\n    workers:\n      ...\n    loadBalancers:\n      ...\n

Each node type has two subsections: default and instances. The instances subsection represents an array of actual nodes, while the default subsection provides the configuration that is applied to all instances of a particular node type. Each default value can also be overwritten by setting the same property for a specific instance.

cluster:\n  nodes:\n    <node-type>:\n      default:\n        ...\n      instances:\n        ...\n
"},{"location":"user-guide/configuration/cluster-nodes/#configuration","title":"Configuration","text":""},{"location":"user-guide/configuration/cluster-nodes/#common-node-properties","title":"Common node properties","text":"

Each node instance has a set of predefined properties that can be set to configure its behavior. Some properties apply to all node types, while others are specific to a certain node type. Properties that apply to all node types are referred to as common properties.

"},{"location":"user-guide/configuration/cluster-nodes/#instance-id","title":"Instance ID","text":"

v2.3.0 Required

Each node in a cluster must have a unique identifier, or ID, that distinguishes it from other instances of the same node type. The instance ID is used as a suffix for the name of each node, ensuring that each node has a unique name in the cluster.

cluster:\n  nodes:\n    <node-type>:\n      instances:\n        - id: 1\n        - id: compute-1\n        - id: 77\n
"},{"location":"user-guide/configuration/cluster-nodes/#cpu","title":"CPU","text":"

v2.0.0 Default: 2 vCPU

The cpu property defines the amount of virtual CPU cores assigned to a node instance. This property can be set for a specific instance, or as a default value for all instances of a certain node type.

cluster:\n  nodes:\n    <node-type>:\n      default:\n        cpu: 2\n      instances:\n        - id: 1 # (1)!\n        - id: 2\n          cpu: 4 # (2)!\n
  1. Since the cpu property is not set for this instance, the default value (2) is used.

  2. This instance has the cpu property set, and therefore the set value (4) overrides the default value (2).

If the property is not set at the instance level or as a default value, Kubitect uses its own default value (2).

cluster:\n  nodes:\n    <node-type>:\n      instances:\n        - id: 1 # (1)!\n
  1. Since the cpu property is not set at instance level or as a default value, Kubitect sets the value of the cpu property to 2 vCPU.
"},{"location":"user-guide/configuration/cluster-nodes/#ram","title":"RAM","text":"

v2.0.0 Default: 4 GiB

The ram property defines the amount of RAM assigned to a node instance (in GiB). This property can be set for a specific instance, or as a default value for all instances of a certain node type.

cluster:\n  nodes:\n    <node-type>:\n      default:\n        ram: 8\n      instances:\n        - id: 1 # (1)!\n        - id: 2\n          ram: 16 # (2)!\n
  1. Since the ram property is not set for this instance, the default value (8 GiB) is used.

  2. This instance has the ram property set, and therefore the set value (16 GiB) overrides the default value (8 GiB).

If the property is not set at the instance level or as a default value, Kubitect uses its own default value (4 GiB).

cluster:\n  nodes:\n    <node-type>:\n      instances:\n        - id: 1 # (1)!\n
  1. Since the ram property is not set at instance level or as a default value, Kubitect sets the value of the ram property to 4 GiB.
"},{"location":"user-guide/configuration/cluster-nodes/#main-disk-size","title":"Main disk size","text":"

v2.0.0 Default: 32 GiB

The mainDiskSize property defines the amount of disk space assigned to a node instance (in GiB). This property can be set for a specific instance, or as a default value for all instances of a certain node type.

cluster:\n  nodes:\n    <node-type>:\n      default:\n        mainDiskSize: 128\n      instances:\n        - id: 1 # (1)!\n        - id: 2\n          mainDiskSize: 256 # (2)!\n
  1. Since the mainDiskSize property is not set for this instance, the default value (128 GiB) is used.

  2. This instance has the mainDiskSize property set, and therefore the set value (256 GiB) overrides the default value (128 GiB).

If the property is not set at the instance level or as a default value, Kubitect uses its own default value (32 GiB).

cluster:\n  nodes:\n    <node-type>:\n      instances:\n        - id: 1 # (1)!\n
  1. Since the mainDiskSize property is not set at instance level or as a default value, Kubitect sets the value of the mainDiskSize property to 32 GiB.
"},{"location":"user-guide/configuration/cluster-nodes/#ip-address","title":"IP address","text":"

v2.0.0

Each node in a cluster can be assigned a static IP address to ensure a predictable and consistent IP address for the node. If no IP address is set for a particular node, Kubitect will request a DHCP lease for that node. Additionally, Kubitect checks whether all set IP addresses are within the defined network range, as explained in the Network CIDR section of the cluster network configuration.

cluster:\n  network:\n    mode: nat\n    cidr: 192.168.113.0/24\n  nodes:\n    <node-type>:\n      instances:\n        - id: 1\n          ip: 192.168.113.5 # (1)!\n        - id: 2 # (2)!\n
  1. A static IP (192.168.113.5) is set for this instance.

  2. Since no IP address is defined for this instance, a DHCP lease is requested.

"},{"location":"user-guide/configuration/cluster-nodes/#mac-address","title":"MAC address","text":"

v2.0.0

The virtual machines created by Kubitect are assigned generated MAC addresses, but a custom MAC address can be set for a virtual machine if necessary.

cluster:\n  nodes:\n    <node-type>:\n      instances:\n        - id: 1\n          mac: \"52:54:00:00:13:10\" # (1)!\n        - id: 2 # (2)!\n
  1. A custom MAC address (52:54:00:00:13:10) is set for this instance.

  2. Since no MAC address is defined for this instance, the MAC address is generated during cluster creation.

"},{"location":"user-guide/configuration/cluster-nodes/#host-affinity","title":"Host affinity","text":"

v2.0.0

By default, all instances in a cluster are deployed on the default host. However, by specifying a specific host for an instance, you can control where that instance is deployed.

hosts:\n  - name: host1\n    ...\n  - name: host2\n    default: true\n    ...\n\ncluster:\n  nodes:\n    <node-type>:\n      instances:\n        - id: 1\n          host: host1 # (1)!\n        - id: 2 # (2)!\n
  1. The instance is deployed on host1.

  2. Since no host is specified, the instance is deployed on the default host (host2).

"},{"location":"user-guide/configuration/cluster-nodes/#control-plane-and-worker-node-properties","title":"Control plane and worker node properties","text":"

The following properties can only be configured for control plane or worker nodes.

"},{"location":"user-guide/configuration/cluster-nodes/#data-disks","title":"Data disks","text":"

v2.2.0

By default, only a main disk (volume) is attached to each provisioned virtual machine. Since the main disk already contains an operating system, it may not be suitable for storing data, and additional disks may be required. For example, Rook can be easily configured to use all the empty disks attached to the virtual machine to form a storage cluster.

A name and size (in GiB) must be configured for each data disk. By default, data disks are created in the main resource pool. To create a data disk in a custom data resource pool, you can set the pool property to the name of the desired data resource pool. Additionally, note that the data disk name must be unique among all data disks for a given instance.

cluster:\n  nodes:\n    <node-type>:\n      instances:\n        - id: 1\n          dataDisks:\n            - name: data-volume\n              pool: main # (1)!\n              size: 256\n            - name: rook-volume\n              pool: rook-pool # (2)!\n              size: 512\n
  1. When pool property is omitted or set to main, the data disk is created in the main resource pool.

  2. Custom data resource pool must be configured in the hosts section.

"},{"location":"user-guide/configuration/cluster-nodes/#node-labels","title":"Node labels","text":"

v2.1.0

With node labels, you can help organize and manage your cluster by associating nodes with specific attributes or roles, and by grouping nodes for specific workloads or tasks.

Node labels are used to label actual Kubernetes nodes and can be set for a specific instance or as a default value for all instances. It is important to note that labels set at the instance level are merged with the default labels. However, if labels have the same key, then the labels set at the instance level take precedence over the default labels.

cluster:\n  nodes:\n    <node-type>: # (1)!\n      default:\n        labels:\n          key1: def-value-1\n          key2: def-value-2\n      instances:\n        - id: 1\n          labels: # (2)!\n            key1: custom-value\n        - id: 2\n          labels: # (3)!\n            key3: super-node\n
  1. Node labels can only be applied to worker and master (control plane) nodes.

  2. Labels defined at the instance level take precedence over default labels. As a result, the following labels are applied to this instance:

    • key1: custom-value
    • key2: def-value-2
  3. Labels defined at the instance level are merged with default labels. As a result, the following labels are applied to this instance:

    • key1: def-value-1
    • key2: def-value-2
    • key3: super-node
"},{"location":"user-guide/configuration/cluster-nodes/#node-taints","title":"Node taints","text":"

v2.2.0

With node taints, you can limit which pods can be scheduled to run on a particular node, and help ensure that the workload running on that node is appropriate for its capabilities and resources.

Node taints are configured as a list of strings in the format key=value:effect. Taints can be set for a specific instance or as a default value for all instances. When taints are set for a particular instance, they are merged with the default taints, and any duplicate entries are removed.

cluster:\n  nodes:\n    <node-type>: # (1)!\n      default:\n        taints:\n          - \"key1=value1:NoSchedule\"\n      instances:\n        - id: 1\n          taints:\n            - \"key2=value2:NoExecute\"\n
  1. Node taints can only be applied to control plane (master) and worker nodes.
"},{"location":"user-guide/configuration/cluster-nodes/#load-balancer-properties","title":"Load balancer properties","text":"

The following properties can only be configured for load balancers.

"},{"location":"user-guide/configuration/cluster-nodes/#virtual-ip-address-vip","title":"Virtual IP address (VIP)","text":"

v2.0.0

What is VIP?

Load balancers are responsible for distributing traffic to the control plane nodes. However, a single load balancer can cause issues if it fails. To avoid this, multiple load balancers can be configured with one as the primary, actively serving incoming traffic, while others act as secondary and take over the primary position only if the primary load balancer fails. If a secondary load balancer becomes primary, it should still be reachable via the same IP, which is referred to as a virtual or floating IP (VIP).

When multiple load balancers are configured, an unused IP address within the configured network must be specified as the VIP.

cluster:\n  nodes:\n    loadBalancer:\n      vip: 192.168.113.200\n
"},{"location":"user-guide/configuration/cluster-nodes/#virtual-router-id-vrid","title":"Virtual router ID (VRID)","text":"

v2.1.0 Default: 51

When a cluster is created with a VIP, Kubitect configures Virtual Router Redundancy Protocol (VRRP), which provides failover for load balancers. Each VRRP group is identified by a virtual router ID (VRID), which can be any number between 0 and 255. Since there can be only one master in each group, two groups cannot have the same ID.

By default, Kubitect sets the VRID to 51, but if you set up multiple clusters that use VIP, you must ensure that the VRID is different for each cluster.

cluster:\n  nodes:\n    loadBalancer:\n      vip: 192.168.113.200\n      virtualRouterId: 30\n
"},{"location":"user-guide/configuration/cluster-nodes/#priority","title":"Priority","text":"

v2.1.0 Default: 10

Each load balancer has a priority that is used to select a primary load balancer. The one with the highest priority becomes the primary and all others become secondary. If the primary load balancer fails, the next one with the highest priority takes over. If two load balancers have the same priority, the one with the higher sum of IP address digits is selected.

The priority can be any number between 0 and 255. The default priority is 10.

cluster:\n  nodes:\n    loadBalancer:\n      instances:\n        - id: 1 # (1)!\n        - id: 2\n          priority: 200 # (2)!\n
  1. Since the load balancer priority for this instance is not specified, it is set to 10.

  2. Since this load balancer instance has the highest priority (200 > 10), it becomes the primary load balancer.

"},{"location":"user-guide/configuration/cluster-nodes/#port-forwarding","title":"Port forwarding","text":"

v2.1.0

By default, each configured load balancer has a port forwarding rule that distributes incoming traffic on port 6443 across the available control plane nodes. However, Kubitect provides the flexibility to configure additional user-defined port forwarding rules.

The following properties can be configured for each rule:

  • name - A unique port identifier.
  • port - The incoming port on which the load balancer listens for traffic.
  • targetPort - The port to which traffic is forwarded by the load balancer.
  • target - The group of nodes to which traffic is directed. The possible targets are:
    • masters - control plane nodes
    • workers - worker nodes
    • all - worker and control plane nodes.

Every port forwarding rule must be configured with a unique name and port. The name serves as a unique identifier for the rule, while the port specifies the incoming port on which the load balancer listens for traffic.

The target and targetPort configurations are optional. If target port is not explicitly set, it will default to the same value as the incoming port. Similarly, if target is not set, incoming traffic is automatically distributed across worker nodes.

cluster:\n  nodes:\n    loadBalancer:\n      forwardPorts:\n        - name: https\n          port: 443 # (1)!\n          targetPort: 31200 # (2)!\n          target: all # (3)!\n
  1. Incoming port is the port on which a load balancer listens for incoming traffic. It can be any number between 1 and 65353, excluding ports 6443 (Kubernetes API server) and 22 (SSH).

  2. Target port is the port on which the traffic is forwarded. By default, it is set to the same value as the incoming port.

  3. Target represents a group of nodes to which incoming traffic is forwarded. Possible values are:

    • masters
    • workers
    • all

    If the target is not configured, it defaults to the workers.

"},{"location":"user-guide/configuration/cluster-nodes/#example-usage","title":"Example usage","text":""},{"location":"user-guide/configuration/cluster-nodes/#set-a-role-to-all-worker-nodes","title":"Set a role to all worker nodes","text":"

By default, worker nodes in a Kubernetes cluster are not assigned any roles (<none>). To set the role of all worker nodes in the cluster, the default label with the key node-role.kubernetes.io/node can be configured.

cluster:\n  nodes:\n    worker:\n      default:\n        labels:\n          node-role.kubernetes.io/node: # (1)!\n      instances:\n        ...\n
  1. If the label value is omitted, null is set as the label value.

The roles of the nodes in a Kubernetes cluster can be viewed using kubectl get nodes.

NAME                   STATUS   ROLES                  AGE   VERSION\nk8s-cluster-master-1   Ready    control-plane,master   19m   v1.27.5\nk8s-cluster-worker-1   Ready    node                   19m   v1.27.5\nk8s-cluster-worker-2   Ready    node                   19m   v1.27.5\n
"},{"location":"user-guide/configuration/cluster-nodes/#load-balance-http-requests","title":"Load balance HTTP requests","text":"

Kubitect enables users to define custom port forwarding rules on load balancers. For example, to distribute HTTP and HTTPS requests across all worker nodes, at least one load balancer must be specified and port forwarding must be configured as follows:

cluster:\n  nodes:\n    loadBalancer:\n      forwardPorts:\n        - name: http\n          port: 80\n        - name: https\n          port: 443\n      instances:\n        - id: 1\n
"},{"location":"user-guide/configuration/hosts/","title":"Hosts","text":"

Defining hosts is an essential step when deploying a Kubernetes cluster with Kubitect. Hosts represent the target servers where the cluster will be deployed.

Every valid configuration must contain at least one host, which can be either local or remote. However, you can add as many hosts as needed to support your cluster deployment.

"},{"location":"user-guide/configuration/hosts/#hosts-configuration","title":"Hosts configuration","text":""},{"location":"user-guide/configuration/hosts/#configuration","title":"Configuration","text":""},{"location":"user-guide/configuration/hosts/#localhost","title":"Localhost","text":"

v2.0.0

To configure a local host, you simply need to specify a host with the connection type set to local.

hosts:\n  - name: localhost # (1)!\n    connection:\n      type: local\n
  1. Custom unique name of the host.
"},{"location":"user-guide/configuration/hosts/#remote-hosts","title":"Remote hosts","text":"

v2.0.0

To configure a remote host, you need to set the connection type to remote and provide the IP address of the remote host, along with its SSH credentials.

hosts:\n  - name: my-remote-host\n    connection:\n      type: remote\n      user: myuser\n      ip: 10.10.40.143 # (1)!\n      ssh:\n        keyfile: \"~/.ssh/id_rsa_server1\" # (2)!\n
  1. IP address of the remote host.

  2. Path to the password-less SSH key file required for establishing connection with the remote host. Default is ~/.ssh/id_rsa.

"},{"location":"user-guide/configuration/hosts/#hosts-ssh-port","title":"Host's SSH port","text":"

v2.0.0 Default: 22

By default, SSH uses port 22. If a host's SSH server listens on a different port, you can change the port for each host separately.

hosts:\n  - name: remote-host\n    connection:\n      type: remote\n      ssh:\n        port: 1234\n
"},{"location":"user-guide/configuration/hosts/#host-verification-known-ssh-hosts","title":"Host verification (known SSH hosts)","text":"

v2.0.0 Default: false

By default, remote hosts are not verified in the known SSH hosts. If you want to verify hosts, you can enable host verification for each host separately.

hosts:\n  - name: remote-host\n    connection:\n      type: remote\n      ssh:\n        verify: true\n
"},{"location":"user-guide/configuration/hosts/#default-host","title":"Default host","text":"

v2.0.0

If a host is specified as the default, all instances that do not point to a specific host are deployed to that default host. If no default host is specified, these instances are deployed on the first host in the list.

hosts:\n  - name: localhost\n    connection:\n      type: local\n  - name: default-host\n    default: true\n    ...\n
"},{"location":"user-guide/configuration/hosts/#main-resource-pool","title":"Main resource pool","text":"

v2.0.0 Default: /var/lib/libvirt/images/

The main resource pool path specifies the location on the host where main virtual disks (volumes) are created for each node provisioned on that particular host. Because the main resource pool contains volumes on which the node's operating system and all required packages are installed, it's recommended that the main resource pool is created on fast storage devices, such as SSD disks.

hosts:\n  - name: host1 # (1)!\n  - name: host2\n    mainResourcePoolPath: /mnt/ssd/kubitect/ # (2)!\n
  1. Because the main resource pool path for this host is not set, the default path (/var/lib/libvirt/images/) is used.

  2. The main resource pool path is set for this host, so the node's main disks are created in this location.

"},{"location":"user-guide/configuration/hosts/#data-resource-pools","title":"Data resource pools","text":"

v2.0.0

Data resource pools allow you to define additional resource pools, besides the required main resource pool. These pools can be used to attach additional virtual disks that can be used for various storage solutions, such as Rook or MinIO.

Multiple data resource pools can be defined on each host, and each pool must have a unique name on that host. The name of the data resource pool is used to associate the virtual disks defined in the node configuration with the actual data resource pool.

By default, the path of a data resource pool is set to /var/lib/libvirt/images, but it can be easily configured using the path property.

hosts:\n  - name: host1\n    dataResourcePools:\n      - name: rook-pool\n        path: /mnt/hdd/kubitect/pools/\n      - name: data-pool # (1)!\n
  1. If the path of the resource pool is not specified, it will be created under the path /var/lib/libvirt/images/.
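
To illustrate how a data resource pool is referenced, the sketch below reuses the rook-pool defined above together with the dataDisks property described in the Cluster nodes section, attaching a data disk from that pool to a worker node; the disk name and size are arbitrary examples:

cluster:\n  nodes:\n    worker:\n      instances:\n        - id: 1\n          dataDisks:\n            - name: rook-volume\n              pool: rook-pool\n              size: 256\n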
"},{"location":"user-guide/configuration/hosts/#example-usage","title":"Example usage","text":""},{"location":"user-guide/configuration/hosts/#multiple-hosts","title":"Multiple hosts","text":"

Kubitect allows you to deploy a cluster on multiple hosts, which need to be specified in the configuration file.

hosts:\n  - name: localhost\n    connection:\n      type: local\n  - name: remote-host-1\n    connection:\n      type: remote\n      user: myuser\n      ip: 10.10.40.143\n      ssh:\n        port: 123\n        keyfile: \"~/.ssh/id_rsa_server1\"\n  - name: remote-host-2\n    default: true\n    connection:\n      type: remote\n      user: myuser\n      ip: 10.10.40.145\n      ssh:\n        keyfile: \"~/.ssh/id_rsa_server2\"\n  ...\n
"},{"location":"user-guide/configuration/kubernetes/","title":"Kubernetes","text":"

The Kubernetes section of the configuration file contains properties that are specific to Kubernetes, such as the Kubernetes version and network plugin.

"},{"location":"user-guide/configuration/kubernetes/#kubernetes-configuration","title":"Kubernetes configuration","text":""},{"location":"user-guide/configuration/kubernetes/#configuration","title":"Configuration","text":""},{"location":"user-guide/configuration/kubernetes/#kubernetes-version","title":"Kubernetes version","text":"

v3.0.0 Default: v1.27.5

By default, the Kubernetes cluster will be deployed using version v1.27.5, but you can specify a different version if necessary.

kubernetes:\n  version: v1.27.5\n

The supported Kubernetes versions include v1.25, v1.26, and v1.27.

"},{"location":"user-guide/configuration/kubernetes/#kubernetes-network-plugin","title":"Kubernetes network plugin","text":"

v2.0.0 Default: calico

The calico network plugin is deployed by default. However, you can choose from multiple supported network plugins:

  • calico
  • cilium
  • flannel
  • kube-router
  • weave
kubernetes:\n  networkPlugin: flannel\n

The following table shows the compatibility matrix of supported network plugins and Kubernetes versions:

Kubernetes Version Calico Cilium Flannel KubeRouter Weave 1.25 1.26 1.27"},{"location":"user-guide/configuration/kubernetes/#kubernetes-dns-mode","title":"Kubernetes DNS mode","text":"

v2.0.0 Default: coredns

Currently, the only DNS mode supported by Kubitect is coredns. Therefore, it is safe to omit this property.

kubernetes:\n  dnsMode: coredns\n
"},{"location":"user-guide/configuration/kubernetes/#copy-kubeconfig","title":"Copy kubeconfig","text":"

v2.0.0 Default: false

Kubitect offers the option to automatically copy the Kubeconfig file to the ~/.kube/config path. By default, this feature is disabled to prevent overwriting an existing file.

kubernetes:\n  other:\n    copyKubeconfig: true\n
"},{"location":"user-guide/configuration/kubernetes/#auto-renew-control-plane-certificates","title":"Auto renew control plane certificates","text":"

v2.2.0 Default: false

Control plane certificates are renewed every time the cluster is upgraded, and their validity period is one year. Clusters that are not upgraded for more than a year can therefore end up with expired certificates. To address this, you can enable the automatic renewal of control plane certificates on the first Monday of each month by setting the autoRenewCertificates property to true.

kubernetes:\n  other:\n    autoRenewCertificates: true\n
"},{"location":"user-guide/management/destroying/","title":"Destroying the cluster","text":""},{"location":"user-guide/management/destroying/#destroying-the-cluster","title":"Destroying the cluster","text":""},{"location":"user-guide/management/destroying/#destroy-the-cluster","title":"Destroy the cluster","text":"

Important

This action is irreversible and any data stored within the cluster will be lost.

To destroy a specific cluster, simply run the destroy command, specifying the name of the cluster to be destroyed.

kubitect destroy --cluster my-cluster\n

Keep in mind that this action will permanently remove all resources associated with the cluster, including virtual machines, resource pools and configuration files.

"},{"location":"user-guide/management/scaling/","title":"Scaling the cluster","text":"

Any cluster created with Kubitect can be subsequently scaled. To do so, simply change the configuration and reapply it using the scale action.

Info

Currently, only worker nodes and load balancers can be scaled.

"},{"location":"user-guide/management/scaling/#scaling-the-cluster","title":"Scaling the cluster","text":""},{"location":"user-guide/management/scaling/#export-the-cluster-configuration","title":"Export the cluster configuration","text":"

Exporting the current cluster configuration is optional, but strongly recommended to ensure that changes are made to the latest version of the configuration. The cluster configuration file can be exported using the export command.

kubitect export config --cluster my-cluster > cluster.yaml\n
"},{"location":"user-guide/management/scaling/#scale-the-cluster","title":"Scale the cluster","text":"

In the configuration file, add new nodes or remove existing ones.

cluster.yaml
cluster:\n  ...\n  nodes:\n    ...\n    worker:\n      instances:\n        - id: 1\n        #- id: 2 # Worker node to be removed\n        - id: 3 # New worker node\n        - id: 4 # New worker node\n

Apply the modified configuration with action set to scale:

kubitect apply --config cluster.yaml --action scale\n

As a result, the worker node with ID 2 is removed and the worker nodes with IDs 3 and 4 are added to the cluster.

"},{"location":"user-guide/management/upgrading/","title":"Upgrading the cluster","text":"

A running Kubernetes cluster can be upgraded to a higher version by increasing the Kubernetes version in the cluster's configuration file and reapplying it using the upgrade action.

"},{"location":"user-guide/management/upgrading/#upgrading-the-cluster","title":"Upgrading the cluster","text":""},{"location":"user-guide/management/upgrading/#export-the-cluster-configuration","title":"Export the cluster configuration","text":"

Exporting the current cluster configuration is optional, but strongly recommended to ensure that changes are made to the latest version of the configuration. The cluster configuration file can be exported using the export command.

kubitect export config --cluster my-cluster > cluster.yaml\n
"},{"location":"user-guide/management/upgrading/#upgrade-the-cluster","title":"Upgrade the cluster","text":"

In the cluster configuration file, change the Kubernetes version.

cluster.yaml
kubernetes:\n  version: v1.24.5 # Old value: v1.23.6\n  ...\n

Apply the modified configuration using the upgrade action:

kubitect apply --config cluster.yaml --action upgrade\n

The cluster is upgraded using the in-place strategy, i.e., the nodes are upgraded one after the other, making each node unavailable for the duration of its upgrade.

"},{"location":"user-guide/reference/cli/","title":"CLI tool reference","text":"

This document contains a reference of the Kubitect CLI tool. It documents each command along with its flags.

Tip

All available commands can be displayed by running kubitect --help or simply kubitect -h.

To see the help for a particular command, run kubitect command -h.

"},{"location":"user-guide/reference/cli/#cli-reference","title":"CLI reference","text":""},{"location":"user-guide/reference/cli/#kubitect-commands","title":"Kubitect commands","text":""},{"location":"user-guide/reference/cli/#kubitect-apply","title":"kubitect apply","text":"

Apply the cluster configuration.

Usage

kubitect apply [flags]\n

Flags

  • -a, --action <string> \u2003 cluster action: create | scale | upgrade (default: create)
  • --auto-approve \u2003 automatically approve any user permission requests
  • -c, --config <string> \u2003 path to the cluster config file
  • -l, --local \u2003 use the current directory as the cluster path
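
For example, the flags above can be combined to create a cluster from a configuration file while automatically approving any permission requests (the file name is illustrative):

kubitect apply --config cluster.yaml --auto-approve\n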
"},{"location":"user-guide/reference/cli/#kubitect-destroy","title":"kubitect destroy","text":"

Destroy the cluster with a given name. Executing the following command will permanently delete all resources associated with the cluster, including virtual machines and configuration files.

Important

Please be aware that this action is irreversible and any data stored within the cluster will be lost.

Usage

kubitect destroy [flags]\n

Flags

  • --auto-approve \u2003 automatically approve any user permission requests
  • --cluster <string> \u2003 name of the cluster to be used (default: default)
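
For example, to destroy a cluster named my-cluster without being prompted for approval:

kubitect destroy --cluster my-cluster --auto-approve\n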
"},{"location":"user-guide/reference/cli/#kubitect-export-config","title":"kubitect export config","text":"

Print cluster's configuration file to the standard output.

Usage

kubitect export config [flags]\n

Flags

  • --cluster <string> \u2003 name of the cluster to be used (default: default)
"},{"location":"user-guide/reference/cli/#kubitect-export-kubeconfig","title":"kubitect export kubeconfig","text":"

Print cluster's kubeconfig to the standard output.

Usage

kubitect export kubeconfig [flags]\n

Flags

  • --cluster <string> \u2003 name of the cluster to be used (default: default)
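
Since the kubeconfig is printed to the standard output, it can be redirected to a file of your choice (the file name below is illustrative):

kubitect export kubeconfig --cluster my-cluster > kubeconfig.yaml\n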
"},{"location":"user-guide/reference/cli/#kubitect-export-preset","title":"kubitect export preset","text":"

Print cluster configuration preset to the standard output.

Usage

kubitect export preset [flags]\n

Flags

  • --name <string> \u2003 preset name
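
For example, assuming a preset named example-multi-master is available (available presets can be listed with kubitect list presets), its configuration can be saved to a file:

kubitect export preset --name example-multi-master > cluster.yaml\n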
"},{"location":"user-guide/reference/cli/#kubitect-list-clusters","title":"kubitect list clusters","text":"

List clusters.

Usage

kubitect list clusters\n
"},{"location":"user-guide/reference/cli/#kubitect-list-presets","title":"kubitect list presets","text":"

List available cluster configuration presets.

Usage

kubitect list presets\n
"},{"location":"user-guide/reference/cli/#autogenerated-commands","title":"Autogenerated commands","text":""},{"location":"user-guide/reference/cli/#kubitect-completion","title":"kubitect completion","text":"

Generate the autocompletion script for Kubitect for the specified shell.

Usage

kubitect completion [command]\n

Commands

  • bash \u2003 Generate the autocompletion script for bash.
  • fish \u2003 Generate the autocompletion script for fish.
  • zsh \u2003 Generate the autocompletion script for zsh.

Tip

Run kubitect completion shell -h for instructions on how to add autocompletion for a specific shell.

"},{"location":"user-guide/reference/cli/#kubitect-help","title":"kubitect help","text":"

Help provides help for any command in the application. Simply type kubitect help [path to command] for full details.

Usage

kubitect help [command]\n

or

kubitect [command] -h\n
"},{"location":"user-guide/reference/cli/#other","title":"Other","text":""},{"location":"user-guide/reference/cli/#version-flag","title":"Version flag","text":"

Print Kubitect CLI tool version.

Usage

kubitect --version\n

or

kubitect -v\n
"},{"location":"user-guide/reference/cli/#debug-flag","title":"Debug flag","text":"

Enable debug messages. This can be especially handy with the apply command.

Usage

kubitect [command] --debug\n
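
For example, to get debug output while applying a cluster configuration:

kubitect apply --config cluster.yaml --debug\n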
"},{"location":"user-guide/reference/configuration/","title":"Configuration reference","text":"

This document contains a reference of the Kubitect configuration file and documents all possible configuration properties.

The configuration sections are as follows (a minimal configuration skeleton is shown after the list):

  • hosts - A list of physical hosts (local or remote).
  • cluster - Configuration of the cluster infrastructure. Virtual machine properties, node types to install, and the host on which to install the nodes.
  • kubernetes - Kubernetes configuration.
  • addons - Configurable addons and applications.
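
A minimal sketch of how these top-level sections fit together is shown below. All values are illustrative; each section is documented in detail in the tables that follow.

hosts:\n  - name: localhost\n    connection:\n      type: local\n\ncluster:\n  name: my-cluster\n  network:\n    mode: nat\n    cidr: 192.168.113.0/24\n  nodes:\n    master:\n      instances:\n        - id: 1\n\nkubernetes:\n  version: v1.27.5\n\naddons:\n  rook:\n    enabled: false\n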

Each configuration property is documented with five columns: name, type, default value, whether the property is required, and a description.

Note

[*] annotates an array.

"},{"location":"user-guide/reference/configuration/#configuration-reference","title":"Configuration reference","text":""},{"location":"user-guide/reference/configuration/#hosts-section","title":"Hosts section","text":"Name Type Default value Required? Description hosts[*].connection.ip string Yes, if connection.type is set to remote IP address is used to SSH into the remote machine. hosts[*].connection.ssh.keyfile string ~/.ssh/id_rsa Path to the keyfile that is used to SSH into the remote machine hosts[*].connection.ssh.port number 22 The port number of SSH protocol for remote machine. hosts[*].connection.ssh.verify boolean false If true, the SSH host is verified, which means that the host must be present in the known SSH hosts. hosts[*].connection.type string Yes Possible values are:
  • local or localhost
  • remote
hosts[*].connection.user string Yes, if connection.type is set to remote Username is used to SSH into the remote machine. hosts[*].dataResourcePools[*].name string Name of the data resource pool. Must be unique within the same host. It is used to link virtual machine volumes to the specific resource pool. hosts[*].dataResourcePools[*].path string /var/lib/libvirt/images/ Host path to the location where data resource pool is created. hosts[*].default boolean false Nodes where host is not specified will be installed on default host. The first host in the list is used as a default host if none is marked as a default. hosts[*].name string Yes Custom server name used to link nodes with physical hosts. hosts[*].mainResourcePoolPath string /var/lib/libvirt/images/ Path to the resource pool used for main virtual machine volumes."},{"location":"user-guide/reference/configuration/#cluster-section","title":"Cluster section","text":"Name Type Default value Required? Description cluster.name string Yes Custom cluster name that is used as a prefix for various cluster components. Note: cluster name cannot contain prefix local. cluster.network.bridge string virbr0 By default virbr0 is set as a name of virtual bridge. In case network mode is set to bridge, name of the preconfigured bridge needs to be set here. cluster.network.cidr string Yes Network cidr that contains network IP with network mask bits (IPv4/mask_bits). cluster.network.gateway string First client IP in network. By default first client IP is taken as a gateway. If network cidr is set to 10.0.0.0/24 then gateway would be 10.0.0.1. Set gateway if it differs from default value. cluster.network.mode string Yes Network mode. Possible values are:
  • nat - Creates virtual local network.
  • bridge - Uses preconfigured bridge interface on the machine (Only bridge mode supports multiple hosts).
  • route - Creates virtual local network, but does not apply NAT.
cluster.nodes.loadBalancer.default.cpu number 2 Default number of vCPU allocated to a load balancer instance. cluster.nodes.loadBalancer.default.mainDiskSize number 32 Size of the main disk (in GiB) that is attached to a load balancer instance. cluster.nodes.loadBalancer.default.ram number 4 Default amount of RAM (in GiB) allocated to a load balancer instance. cluster.nodes.loadBalancer.forwardPorts[*].name string Yes, if port is configured Unique name of the forwarded port. cluster.nodes.loadBalancer.forwardPorts[*].port number Yes, if port is configured Incoming port is the port on which a load balancer listens for the incoming traffic. cluster.nodes.loadBalancer.forwardPorts[*].targetPort number Incoming port value Target port is the port on which a load balancer forwards traffic. cluster.nodes.loadBalancer.forwardPorts[*].target string workers Target is a group of nodes on which a load balancer forwards traffic. Possible targets are:
  • masters
  • workers
  • all
cluster.nodes.loadBalancer.instances[*].cpu number Overrides a default value for that specific instance. cluster.nodes.loadBalancer.instances[*].host string Name of the host on which the instance is deployed. If the name is not specified, the instance is deployed on the default host. cluster.nodes.loadBalancer.instances[*].id string Yes Unique identifier of a load balancer instance. cluster.nodes.loadBalancer.instances[*].ip string If an IP is set for an instance then the instance will use it as a static IP. Otherwise it will try to request an IP from a DHCP server. cluster.nodes.loadBalancer.instances[*].mac string MAC used by the instance. If it is not set, it will be generated. cluster.nodes.loadBalancer.instances[*].mainDiskSize number Overrides a default value for that specific instance. cluster.nodes.loadBalancer.instances[*].priority number 10 Keepalived priority of the load balancer. A load balancer with the highest priority becomes the leader (active). The priority can be set to any number between 0 and 255. cluster.nodes.loadBalancer.instances[*].ram number Overrides a default value for the RAM for that instance. cluster.nodes.loadBalancer.vip string Yes, if more than one instance of load balancer is specified. Virtual IP (floating IP) is the static IP used by load balancers to provide a fail-over. Each load balancer still has its own IP beside the shared one. cluster.nodes.loadBalancer.virtualRouterId number 51 Virtual router ID identifies the group of VRRP routers. It can be any number between 0 and 255 and should be unique among different clusters. cluster.nodes.master.default.cpu number 2 Default number of vCPU allocated to a master node. cluster.nodes.master.default.labels dictionary Array of default node labels that are applied to all master nodes. cluster.nodes.master.default.mainDiskSize number 32 Size of the main disk (in GiB) that is attached to a master node. cluster.nodes.master.default.ram number 4 Default amount of RAM (in GiB) allocated to a master node. cluster.nodes.master.default.taints list List of default node taints that are applied to all master nodes. cluster.nodes.master.instances[*].cpu number Overrides a default value for that specific instance. cluster.nodes.master.instances[*].dataDisks[*].name string Name of the additional data disk that is attached to the master node. cluster.nodes.master.instances[*].dataDisks[*].pool string main Name of the data resource pool where the additional data disk is created. Referenced resource pool must be configured on the same host. cluster.nodes.master.instances[*].dataDisks[*].size string Size of the additional data disk (in GiB) that is attached to the master node. cluster.nodes.master.instances[*].host string Name of the host on which the instance is deployed. If the name is not specified, the instance is deployed on the default host. cluster.nodes.master.instances[*].id string Yes Unique identifier of a master node. cluster.nodes.master.instances[*].ip string If an IP is set for an instance then the instance will use it as a static IP. Otherwise it will try to request an IP from a DHCP server. cluster.nodes.master.instances[*].labels dictionary Array of node labels that are applied to this specific master node. cluster.nodes.master.instances[*].mac string MAC used by the instance. If it is not set, it will be generated. cluster.nodes.master.instances[*].mainDiskSize number Overrides a default value for that specific instance. 
cluster.nodes.master.instances[*].ram number Overrides a default value for the RAM for that instance. cluster.nodes.master.instances[*].taints list List of node taints that are applied to this specific master node. cluster.nodes.worker.default.cpu number 2 Default number of vCPU allocated to a worker node. cluster.nodes.worker.default.labels dictionary Array of default node labels that are applied to all worker nodes. cluster.nodes.worker.default.mainDiskSize number 32 Size of the main disk (in GiB) that is attached to a worker node. cluster.nodes.worker.default.ram number 4 Default amount of RAM (in GiB) allocated to a worker node. cluster.nodes.worker.default.taints list List of default node taints that are applied to all worker nodes. cluster.nodes.worker.instances[*].cpu number Overrides a default value for that specific instance. cluster.nodes.worker.instances[*].dataDisks[*].name string Name of the additional data disk that is attached to the worker node. cluster.nodes.worker.instances[*].dataDisks[*].pool string main Name of the data resource pool where the additional data disk is created. Referenced resource pool must be configured on the same host. cluster.nodes.worker.instances[*].dataDisks[*].size string Size of the additional data disk (in GiB) that is attached to the worker node. cluster.nodes.worker.instances[*].host string Name of the host on which the instance is deployed. If the name is not specified, the instance is deployed on the default host. cluster.nodes.worker.instances[*].id string Yes Unique identifier of a worker node. cluster.nodes.worker.instances[*].ip string If an IP is set for an instance then the instance will use it as a static IP. Otherwise it will try to request an IP from a DHCP server. cluster.nodes.worker.instances[*].labels dictionary Array of node labels that are applied to this specific worker node. cluster.nodes.worker.instances[*].mac string MAC used by the instance. If it is not set, it will be generated. cluster.nodes.worker.instances[*].mainDiskSize number Overrides a default value for that specific instance. cluster.nodes.worker.instances[*].ram number Overrides a default value for the RAM for that instance. cluster.nodes.worker.instances[*].taints list List of node taints that are applied to this specific worker node. cluster.nodeTemplate.cpuMode string custom Guest virtual machine CPU mode. cluster.nodeTemplate.dns list Value of network.gateway Custom DNS list used by all created virtual machines. If none is provided, network gateway is used. cluster.nodeTemplate.os.distro string ubuntu Set OS distribution. Possible values are:
  • ubuntu
  • ubuntu22
  • ubuntu20
  • debian
  • debian11
  • rocky
  • rocky9
  • centos
  • centos9
cluster.nodeTemplate.os.networkInterface string Depends on os.distro Network interface used by virtual machines to connect to the network. Network interface is preconfigured for each OS image (usually ens3 or eth0). By default, the value from distro preset (/terraform/defaults.yaml) is set, but can be overwritten if needed. cluster.nodeTemplate.os.source string Depends on os.distro Source of an OS image. It can be either a path on a local file system or a URL of the image. By default, the value from distro preset (/terraform/defaults.yaml) is set, but can be overwritten if needed. cluster.nodeTemplate.ssh.addToKnownHosts boolean false If set to true, each virtual machine will be added to the known hosts on the machine where the project is being run. Note that all machines will also be removed from known hosts when destroying the cluster. cluster.nodeTemplate.ssh.privateKeyPath string Path to the private key that is later used to SSH into each virtual machine. A public key with the .pub suffix must be present at the same path. If this value is not set, an SSH key will be generated in the ./config/.ssh/ directory. cluster.nodeTemplate.updateOnBoot boolean true If set to true, the operating system will be updated when it boots. cluster.nodeTemplate.user string k8s User created on each virtual machine."},{"location":"user-guide/reference/configuration/#kubernetes-section","title":"Kubernetes section","text":"Name Type Default value Required? Description kubernetes.dnsMode string coredns DNS server used within a Kubernetes cluster. Possible values are:
  • coredns
kubernetes.networkPlugin string calico Network plugin used within a Kubernetes cluster. Possible values are:
  • calico
  • cilium
  • flannel
  • kube-router
  • weave
kubernetes.other.autoRenewCertificates boolean false When this property is set to true, control plane certificates are renewed first Monday of each month. kubernetes.other.copyKubeconfig boolean false When this property is set to true, the kubeconfig of a new cluster is copied to the ~/.kube/config. Please note that setting this property to true may cause the existing file at the destination to be overwritten. kubernetes.version string v1.27.5 Kubernetes version that will be installed."},{"location":"user-guide/reference/configuration/#addons-section","title":"Addons section","text":"Name Type Default value Required? Description addons.kubespray dictionary Kubespray addons configuration. addons.rook.enabled boolean false Enable Rook addon. addons.rook.nodeSelector dictionary Dictionary containing node labels (\"key: value\"). Rook is deployed on the nodes that match all the given labels. addons.rook.version string Rook version. By default, the latest release version is used."}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"examples/accessing-cluster/","title":"Accessing the cluster","text":"

Cloud providers that support Kubernetes clusters typically provide load balancer provisioning on demand. By setting a Service type to LoadBalancer, an external load balancer is automatically provisioned with its own unique IP address. This load balancer redirects all incoming connections to the Service, as illustrated in the figure below.

In on-premise environments, there is no load balancer that can be provisioned on demand. Therefore, some alternative solutions are explained in this document.

"},{"location":"examples/accessing-cluster/#accessing-the-cluster","title":"Accessing the cluster","text":""},{"location":"examples/accessing-cluster/#node-ports","title":"Node ports","text":"

Setting Service type to NodePort makes Kubernetes reserve a port on all its nodes. As a result, the Service becomes available on <NodeIP>:<NodePort>, as shown in the figure below.

When using NodePort, it does not matter to which node a client sends the request, since it is routed internally to the appropriate Pod. However, if all traffic is directed to a single node, its failure will make the Service unavailable.

"},{"location":"examples/accessing-cluster/#self-provisioned-edge","title":"Self-provisioned edge","text":"

With Kubitect, it is possible to configure the port forwarding of the load balancer to distribute incoming requests to multiple nodes in the cluster, as shown in the figure below.

To set up load balancer port forwarding, at least one load balancer must be configured. The following example shows how to set up load balancer port forwarding for ports 80 (HTTP) and 443 (HTTPS).

cluster:\n  nodes:\n    loadBalancer:\n      forwardPorts:\n        - name: http\n          port: 80\n        - name: https\n          port: 443\n      instances:\n        - id: 1\n

Load balancer port forwarding is particularly handy when combined with a NodePort Service or a Service whose ports are exposed on the host. For example, for HTTP and HTTPS traffic an Ingress is most often used. To use Ingress resources in the Kubernetes cluster, an ingress controller is required. With Kubitect, a load balancer can be configured to accept connections on ports 80 and 443, and redirect them to all cluster nodes on ports 50080 and 50443 where an ingress controller is listening for incoming requests. The following code snippet shows the configuration for such a scenario.

cluster:\n  nodes:\n    loadBalancer:\n      forwardPorts:\n        - name: http\n          port: 80\n          targetPort: 50080\n          target: workers # (1)!\n        - name: https\n          port: 443\n          targetPort: 50443\n      instances:\n        - id: 1\n\naddons:\n  kubespray:\n    ingress_nginx_enabled: true\n    ingress_nginx_namespace: \"ingress-nginx\"\n    ingress_nginx_insecure_port: 50080 # (2)!\n    ingress_nginx_secure_port: 50443\n
  1. By default, each configured port instructs the load balancer to distribute traffic across all worker nodes. The default behavior can be changed using the target property.

    Possible target values are:

    • workers - Distributes traffic across worker nodes. (default)
    • masters - Distributes traffic across master nodes.
    • all - Distributes traffic across master and worker nodes.
  2. When the ingress-nginx controller is set up with Kubespray, a DaemonSet is created that exposes ports on the host (hostPort).

"},{"location":"examples/accessing-cluster/#metallb","title":"MetalLB","text":"

MetalLB is a network load balancer implementation for bare metal Kubernetes clusters. In short, it allows you to create Services of type LoadBalancer where actual on-demand load balancers are not an option.

For MetalLB to work, a pool of unused IP addresses needs to be provided. In the following example, MetalLB is configured to use an IP address pool with the IP range 10.10.13.225/27.

addons:\n  kubespray:\n    metallb_enabled: true\n    metallb_speaker_enabled: true\n    metallb_ip_range:\n      - \"10.10.13.225/27\"\n    metallb_pool_name: \"default\"\n    metallb_auto_assign: true\n    metallb_version: v0.12.1\n    metallb_protocol: \"layer2\"\n

When a Service of type LoadBalancer is created, it is assigned an IP address from the pool. For example, we could deploy an ingress-nginx controller and change its Service type to LoadBalancer.

# Deploy ingress-nginx controller\nkubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/baremetal/1.23/deploy.yaml\n\n# Patch ingress controller Service type to LoadBalancer\nkubectl patch svc ingress-nginx-controller -n ingress-nginx -p '{\"spec\": {\"type\":\"LoadBalancer\"}}'\n

As a result, MetalLB assigns the service ingress-nginx-controller an external IP address from the address pool.

kubectl get svc -n ingress-nginx ingress-nginx-controller\n\n# NAME                       TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)                      AGE\n# ingress-nginx-controller   LoadBalancer   10.233.55.194   10.10.13.225   80:31497/TCP,443:30967/TCP   63s\n

By sending a request to the assigned IP address, it can be seen that Nginx responds to the request.

curl -k https://10.10.13.225\n\n# <html>\n# <head><title>404 Not Found</title></head>\n# <body>\n# <center><h1>404 Not Found</h1></center>\n# <hr><center>nginx</center>\n# </body>\n# </html>\n

This example has demonstrated the functionality of MetalLB in layer2 mode. For more MetalLB configuration options, see the official MetalLB documentation.

"},{"location":"examples/full-example/","title":"Full example","text":"

This document contains an example of a Kubitect configuration. The example covers all (or most) of the Kubitect properties and is meant for users who learn fastest from a complete example configuration.

#\n# The 'hosts' section contains data about the physical servers on which the\n# Kubernetes cluster will be installed.\n#\n# For each host, a name and connection type must be specified. Only one host can\n# have the connection type set to 'local' or 'localhost'.\n#\n# If the host is a remote machine, the path to the SSH key file must be specified.\n# Note that connections to remote hosts support only passwordless certificates.\n#\n# The host can also be marked as default, i.e. if no specific host is specified\n# for an instance (in the cluster.nodes section), it will be installed on a\n# default host. If none of the hosts are marked as default, the first host in the\n# list is used as the default host.\n#\nhosts:\n  - name: localhost # (3)!\n    default: true # (4)!\n    connection:\n      type: local # (5)!\n  - name: remote-server-1\n    connection:\n      type: remote\n      user: myuser # (6)!\n      ip: 10.10.40.143 # (7)!\n      ssh:\n        port: 1234  # (8)!\n        verify: true # (9)!\n        keyfile: \"~/.ssh/id_rsa_server1\" # (10)!\n  - name: remote-server-2\n    connection:\n      type: remote\n      user: myuser\n      ip: 10.10.40.144\n      ssh:\n        keyfile: \"~/.ssh/id_rsa_server2\"\n    mainResourcePoolPath: \"/var/lib/libvirt/pools/\" # (11)!\n    dataResourcePools: # (12)!\n      - name: data-pool # (13)!\n        path: \"/mnt/data/pool\" # (14)!\n      - name: backup-pool\n        path: \"/mnt/backup/pool\"\n\n#\n# The 'cluster' section of the configuration contains general data about the\n# cluster, the nodes that are part of the cluster, and the cluster's network.\n#\ncluster:\n  name: my-k8s-cluster # (15)!\n  network:\n    mode: bridge # (16)!\n    cidr: 10.10.64.0/24 # (17)!\n    gateway: 10.10.64.1 # (18)!\n    bridge: br0 # (19)!\n  nodeTemplate:\n    user: k8s\n    ssh:\n      privateKeyPath: \"~/.ssh/id_rsa_test\"\n      addToKnownHosts: true\n    os:\n      distro: ubuntu\n      networkInterface: ens3 # (20)!\n    dns: # (21)!\n      - 1.1.1.1\n      - 1.0.0.1\n    updateOnBoot: true\n  nodes:\n    loadBalancer:\n      vip: 10.10.64.200 # (22)!\n      virtualRouterId: 13 # (23)!\n      forwardPorts:\n        - name: http\n          port: 80\n        - name: https\n          port: 443\n          target: all\n        - name: sample\n          port: 60000\n          targetPort: 35000\n      default: # (24)!\n        ram: 4 # GiB\n        cpu: 1 # vCPU\n        mainDiskSize: 16 # GiB\n      instances:\n        - id: 1\n          ip: 10.10.64.5 # (25)!\n          mac: \"52:54:00:00:00:40\" # (26)!\n          ram: 8 # (27)!\n          cpu: 8 # (28)!\n          host: remote-server-1 # (29)!\n        - id: 2\n          ip: 10.10.64.6\n          mac: \"52:54:00:00:00:41\"\n          host: remote-server-2\n        - id: 3\n          ip: 10.10.64.7\n          mac: \"52:54:00:00:00:42\"\n          # If host is not specifed, VM will be installed on the default host.\n          # If default host is not specified, VM will be installed on the first\n          # host in the list.\n    master:\n      default:\n        ram: 8\n        cpu: 2\n        mainDiskSize: 256\n      instances:\n          # IMPORTANT: There should be odd number of master nodes.\n        - id: 1\n          host: remote-server-1\n        - id: 2\n          host: remote-server-2\n        - id: 3\n          host: localhost\n    worker:\n      default:\n        ram: 16\n        cpu: 4\n        labels: # (30)!\n          custom-label: \"This is a custom default node label\"\n          
node-role.kubernetes.io/node: # (31)!\n      instances:\n        - id: 1\n          ip: 10.10.64.101\n          cpu: 8\n          ram: 64\n          host: remote-server-1\n        - id: 2\n          ip: 10.10.64.102\n          dataDisks: # (32)!\n            - name: rook-disk # (33)!\n              pool: data-pool # (34)!\n              size: 128 # GiB\n            - name: test-disk\n              pool: data-pool\n              size: 128\n        - id: 3\n          ip: 10.10.64.103\n          ram: 64\n          labels:\n            custom-label: \"Overwrite default node label\" # (35)!\n            instance-label: \"Node label, only for this instance\"\n        - id: 4\n          host: remote-server-2\n        - id: 5\n\n#\n# The 'kubernetes' section contains Kubernetes related properties,\n# such as version and network plugin.\n#\nkubernetes:\n  version: v1.27.5\n  networkPlugin: calico\n  dnsMode: coredns # (36)!\n  other:\n    copyKubeconfig: false\n\n#\n# The 'addons' section contains the configuration of the applications that\n# will be installed on the Kubernetes cluster as part of the cluster setup.\n#\naddons:\n  kubespray:\n    # Sample Nginx ingress controller deployment\n    ingress_nginx_enabled: true\n    ingress_nginx_namespace: \"ingress-nginx\"\n    ingress_nginx_insecure_port: 80\n    ingress_nginx_secure_port: 443\n    # Sample MetalLB deployment\n    metallb_enabled: true\n    metallb_speaker_enabled: true\n    metallb_ip_range:\n      - \"10.10.9.201-10.10.9.254\"\n    metallb_pool_name: \"default\"\n    metallb_auto_assign: true\n    metallb_version: v0.12.1\n    metallb_protocol: \"layer2\"\n
  1. This allows you to set a custom URL that targets a clone or fork of the Kubitect project.

  2. Kubitect version.

  3. Custom host name. It is used to link instances to the specific host.

  4. Makes the host a default host. This means that if no host is specified for the node instance, the instance will be linked to the default host.

  5. Connection type can be either local or remote.

    If it is set to remote, at least the following fields must be set:

    • user
    • ip
    • ssh.keyfile
  6. Remote host user that is used to connect to the remote hypervisor. This user must be added to the libvirt group.

  7. IP address of the remote host.

  8. Overrides default SSH port (22).

  9. If true, SSH host is verified. This means that the host must be present in the known SSH hosts.

  10. Path to the passwordless SSH key used to connect to the remote host.

  11. The path to the main resource pool defines where the virtual machine disk images are stored. These disks contain the virtual machine operating system, and therefore it is recommended to install them on SSD disks.

  12. List of other data resource pools where virtual disks can be created.

  13. Custom data resource pool name. Must be unique among all data resource pools on a specific host.

  14. Path where data resource pool is created. All data disks linked to that resource pool will be created under this path.

  15. Cluster name used as a prefix for the various components.

  16. Network mode. Possible values are

    • bridge mode uses a predefined bridge interface. This mode is mandatory for deployments across multiple hosts.
    • nat mode creates a virtual network with the IP range defined in network.cidr
    • route
  17. Network CIDR represents the network IP together with the network mask. In nat mode, CIDR is used for the new network. In bridge mode, CIDR represents the current local area network (LAN).

  18. The network gateway IP address. If omitted the first client IP from network CIDR is used as a gateway.

  19. Bridge represents the bridge interface on the hosts. This field is mandatory if the network mode is set to bridge. If the network mode is set to nat, this field can be omitted.

  20. Specify the network interface used by the virtual machine. In general, this option can be omitted.

    If omitted, a network interface from the distro preset (/terraform/defaults.yaml) is used.

  21. Set a custom DNS list for all nodes. If omitted, the network gateway is also used as a DNS server.

  22. Virtual (floating) IP shared between load balancers.

  23. Virtual router ID that is set in Keepalived configuration when virtual IP is used. By default it is set to 51. If multiple clusters are created it must be ensured that it is unique for each cluster.

  24. Default values apply for all virtual machines (VMs) of the same type.

  25. Static IP address of the virtual machine. If omitted DHCP lease is requested.

  26. Static MAC address. If omitted MAC address is generated.

  27. Overrides default RAM value for this node.

  28. Overrides default CPU value for this node.

  29. Name of the host where instance should be created. If omitted the default host is used.

  30. Default worker node labels.

  31. Label sets worker nodes role to node.

  32. Overrides default data disks for this node.

  33. Custom data disk name. It must be unique among all data disks for a specific instance.

  34. Resource pool name that must be defined on the host on which the instance will be deployed.

  35. Node labels defined for specific instances take precedence over default labels with the same key, so this label overrides the default label.

  36. Currently, the only DNS mode supported is CoreDNS.

"},{"location":"examples/full-example/#full-detailed-example","title":"Full (detailed) example","text":""},{"location":"examples/ha-cluster/","title":"Highly available (HA) cluster","text":"

This example demonstrates how to use Kubitect to create a highly available Kubernetes cluster that spans five hosts. This topology offers redundancy in case of node or host failures.

The final topology of the deployed Kubernetes cluster is shown in the figure below.

"},{"location":"examples/ha-cluster/#highly-available-cluster","title":"Highly available cluster","text":""},{"location":"examples/ha-cluster/#step-1-hosts-configuration","title":"Step 1: Hosts configuration","text":"

This example involves the deployment of a Kubernetes cluster on five remote physical hosts. The local network subnet used in this setup is 10.10.0.0/20, with the gateway IP address set to 10.10.0.1. All hosts are connected to the same local network and feature a pre-configured bridge interface, named br0.

Tip

This example uses preconfigured bridges on each host to expose nodes on the local network.

Network bridge example shows how to configure a bridge interface using Netplan.

Furthermore, we have configured a user named kubitect on each host, which can be accessed through SSH using the same passwordless key stored on our local machine. The key is located at ~/.ssh/id_rsa_ha.

To deploy the Kubernetes cluster, each host's details must be specified in the Kubitect configuration file. In this case, the host configurations differ only in the host's name and IP address.

ha.yaml
hosts:\n  - name: host1\n    connection:\n      type: remote\n      user: kubitect\n      ip: 10.10.0.5\n      ssh:\n        keyfile: \"~/.ssh/id_rsa_ha\"\n  - name: host2\n    connection:\n      type: remote\n      user: kubitect\n      ip: 10.10.0.6\n      ssh:\n        keyfile: \"~/.ssh/id_rsa_ha\"\n  - name: host3\n    connection:\n      type: remote\n      user: kubitect\n      ip: 10.10.0.10\n      ssh:\n        keyfile: \"~/.ssh/id_rsa_ha\"\n  - name: host4\n    connection:\n      type: remote\n      user: kubitect\n      ip: 10.10.0.11\n      ssh:\n        keyfile: \"~/.ssh/id_rsa_ha\"\n  - name: host5\n    connection:\n      type: remote\n      user: kubitect\n      ip: 10.10.0.12\n      ssh:\n        keyfile: \"~/.ssh/id_rsa_ha\"\n
"},{"location":"examples/ha-cluster/#step-2-network-configuration","title":"Step 2: Network configuration","text":"

In the network configuration section, we specify the bridge interface that is preconfigured on each host and CIDR of our local network.

The code snippet below illustrates the network configuration used in this example:

ha.yaml
cluster:\n  network:\n    mode: bridge\n    cidr: 10.10.0.0/20\n    bridge: br0\n
"},{"location":"examples/ha-cluster/#step-3-load-balancer-configuration","title":"Step 3: Load balancer configuration","text":"

Placing a load balancer in front of the control plane, as demonstrated in the Multi-master cluster example, enables traffic distribution across all healthy control plane nodes. However, having only one load balancer in the cluster would create a single point of failure, potentially rendering the control plane inaccessible if the load balancer fails.

To prevent this scenario, it is necessary to configure at least two load balancers. One of the load balancers serves as the primary, while the other functions as a failover (backup). The purpose of the failover load balancer is to serve incoming requests using the same virtual (shared) IP address if the primary load balancer fails, as depicted in the figure below.

To achieve failover, a virtual router redundancy protocol (VRRP) is used. In practice, each load balancer has its own IP address, but the primary load balancer also serves requests on the virtual IP address, which is not bound to any network interface.

The primary load balancer sends periodic heartbeats to the backup load balancers to indicate that it is still active. If the backup load balancer does not receive a heartbeat within a specified time period, it assumes that the primary load balancer has failed. The new primary load balancer is then elected based on the available load balancers' priorities. Once the new primary load balancer is selected, it starts serving requests on the same virtual IP address as the previous primary load balancer.

The following code snippet shows the configuration of two load balancers and virtual IP for their failover. The load balancers are also configured to be deployed on different hosts for additional redundancy.

ha.yaml
cluster:\n  nodes:\n    loadBalancer:\n      vip: 10.10.13.200\n      instances:\n        - id: 1\n          ip: 10.10.13.201\n          host: host1\n        - id: 2\n          ip: 10.10.13.202\n          host: host2\n
"},{"location":"examples/ha-cluster/#step-4-nodes-configuration","title":"Step 4: Nodes configuration","text":"

The configuration of the nodes is straightforward and similar to the load balancer instance configuration. Each node instance is configured with an ID, an IP address, and a host affinity.

ha.yaml
cluster:\n  nodes:\n    master:\n      instances:\n        - id: 1\n          ip: 10.10.13.10\n          host: host3\n        - id: 2\n          ip: 10.10.13.11\n          host: host4\n        - id: 3\n          ip: 10.10.13.12\n          host: host5\n    worker:\n      instances:\n        - id: 1\n          ip: 10.10.13.20\n          host: host3\n        - id: 2\n          ip: 10.10.13.21\n          host: host4\n        - id: 3\n          ip: 10.10.13.22\n          host: host5\n
"},{"location":"examples/ha-cluster/#step-41-optional-data-disks-configuration","title":"Step 4.1 (Optional): Data disks configuration","text":"

Kubitect automatically creates a main (system) disk for each configured node. The main disk contains the operating system and the installed Kubernetes components.

Additional disks, also known as data disks, can be created to expand the node's storage capacity. This feature is particularly useful when using storage solutions like Rook, which can utilize empty disks to provide reliable distributed storage.

Data disks in Kubitect must be configured separately for each node instance, with each disk connected to a resource pool. The resource pool can be either the main resource pool or a custom data resource pool. In this example, we have defined a custom data resource pool named data-pool on each host that runs worker nodes.

ha.yaml
hosts:\n  - name: host3\n    ...\n    dataResourcePools:\n      - name: data-pool\n        path: /mnt/libvirt/pools/\n\ncluster:\n  nodes:\n    worker:\n      - id: 1\n        ...\n        host: host3\n        dataDisks:\n          - name: rook\n            pool: data-pool\n            size: 512 # GiB\n
Final cluster configuration ha.yaml
hosts:\n  - name: host1\n    connection:\n      type: remote\n      user: kubitect\n      ip: 10.10.0.5\n      ssh:\n        keyfile: \"~/.ssh/id_rsa_ha\"\n  - name: host2\n    connection:\n      type: remote\n      user: kubitect\n      ip: 10.10.0.6\n      ssh:\n        keyfile: \"~/.ssh/id_rsa_ha\"\n  - name: host3\n    connection:\n      type: remote\n      user: kubitect\n      ip: 10.10.0.10\n      ssh:\n        keyfile: \"~/.ssh/id_rsa_ha\"\n    dataResourcePools:\n      - name: data-pool\n        path: /mnt/libvirt/pools/\n  - name: host4\n    connection:\n      type: remote\n      user: kubitect\n      ip: 10.10.0.11\n      ssh:\n        keyfile: \"~/.ssh/id_rsa_ha\"\n    dataResourcePools:\n      - name: data-pool\n        path: /mnt/libvirt/pools/\n  - name: host5\n    connection:\n      type: remote\n      user: kubitect\n      ip: 10.10.0.12\n      ssh:\n        keyfile: \"~/.ssh/id_rsa_ha\"\n    dataResourcePools:\n      - name: data-pool\n        path: /mnt/libvirt/pools/\n\ncluster:\n  name: kubitect-ha\n  network:\n    mode: bridge\n    cidr: 10.10.0.0/20\n    bridge: br0\n  nodeTemplate:\n    user: k8s\n    updateOnBoot: true\n    ssh:\n      addToKnownHosts: true\n    os:\n      distro: ubuntu\n  nodes:\n    loadBalancer:\n      vip: 10.10.13.200\n      instances:\n        - id: 1\n          ip: 10.10.13.201\n          host: host1\n        - id: 2\n          ip: 10.10.13.202\n          host: host2\n    master:\n      instances:\n        - id: 1\n          ip: 10.10.13.10\n          host: host3\n        - id: 2\n          ip: 10.10.13.11\n          host: host4\n        - id: 3\n          ip: 10.10.13.12\n          host: host5\n    worker:\n      instances:\n        - id: 1\n          ip: 10.10.13.20\n          host: host3\n          dataDisks:\n            - name: rook\n              pool: data-pool\n              size: 512\n        - id: 2\n          ip: 10.10.13.21\n          host: host4\n          dataDisks:\n            - name: rook\n              pool: data-pool\n              size: 512\n        - id: 3\n          ip: 10.10.13.22\n          host: host5\n          dataDisks:\n            - name: rook\n              pool: data-pool\n              size: 512\n\nkubernetes:\n  version: v1.27.5\n
"},{"location":"examples/ha-cluster/#step-5-applying-the-configuration","title":"Step 5: Applying the configuration","text":"

To deploy a cluster, apply the configuration file:

kubitect apply --config ha.yaml\n
"},{"location":"examples/multi-master-cluster/","title":"Multi-master cluster","text":"

This example demonstrates how to use Kubitect to set up a Kubernetes cluster with 3 master and 3 worker nodes.

By configuring multiple master nodes, the control plane continues to operate normally even if some master nodes fail. Since Kubitect deploys clusters with a stacked control plane, redundancy is ensured as long as at least (n/2)+1 master nodes are available.

The final topology of the deployed Kubernetes cluster is depicted in the figure below.

Note

This example skips the explanation of some common configurations such as hosts, network, and node template, as they are already covered in detail in the Getting started (step-by-step) guide.

Preset available

To export the preset configuration, run: kubitect export preset example-multi-master

"},{"location":"examples/multi-master-cluster/#multi-master-cluster","title":"Multi-master cluster","text":""},{"location":"examples/multi-master-cluster/#step-1-cluster-configuration","title":"Step 1: Cluster configuration","text":"

When deploying a multi-master Kubernetes cluster using Kubitect, it is necessary to configure at least one load balancer. The load balancer is responsible for distributing traffic evenly across the control plane nodes. If a particular master node fails, the load balancer automatically detects the unhealthy node and routes traffic only to the remaining healthy nodes, ensuring the continuous availability of the Kubernetes cluster.

The figure below provides a visual representation of this approach.

To create such a cluster, all we need to do is specify the desired node instances and configure one load balancer. The control plane will be accessible through the load balancer's IP address.

multi-master.yaml
cluster:\n  ...\n  nodes:\n    loadBalancer:\n      instances:\n        - id: 1\n          ip: 192.168.113.100\n    master:\n      instances: # (1)!\n        - id: 1\n          ip: 192.168.113.10\n        - id: 2\n          ip: 192.168.113.11\n        - id: 3\n          ip: 192.168.113.12\n    worker:\n      instances:\n        - id: 1\n          ip: 192.168.113.20\n        - id: 2\n          ip: 192.168.113.21\n        - id: 3\n          ip: 192.168.113.22\n
  1. Size of the control plane (number of master nodes) must be odd.

Kubitect automatically detects the load balancer instance in the configuration file and installs the HAProxy load balancer on an additional virtual machine. The load balancer is then configured to distribute traffic received on port 6443, which is the Kubernetes API server port, to all control plane nodes.

Final cluster configuration multi-master.yaml
hosts:\n  - name: localhost\n    connection:\n      type: local\n\ncluster:\n  name: k8s-cluster\n  network:\n    mode: nat\n    cidr: 192.168.113.0/24\n  nodeTemplate:\n    user: k8s\n    updateOnBoot: true\n    ssh:\n      addToKnownHosts: true\n    os:\n      distro: ubuntu\n  nodes:\n    loadBalancer:\n      instances:\n        - id: 1\n          ip: 192.168.113.100\n    master:\n      instances:\n        - id: 1\n          ip: 192.168.113.10\n        - id: 2\n          ip: 192.168.113.11\n        - id: 3\n          ip: 192.168.113.12\n    worker:\n      instances:\n        - id: 1\n          ip: 192.168.113.20\n        - id: 2\n          ip: 192.168.113.21\n        - id: 3\n          ip: 192.168.113.22\n\nkubernetes:\n  version: v1.27.5\n  networkPlugin: calico\n
"},{"location":"examples/multi-master-cluster/#step-2-applying-the-configuration","title":"Step 2: Applying the configuration","text":"

To deploy a cluster, apply the configuration file:

kubitect apply --config multi-master.yaml\n
"},{"location":"examples/multi-worker-cluster/","title":"Multi-worker cluster","text":"

This example demonstrates how to use Kubitect to set up a Kubernetes cluster consisting of one master and three worker nodes. The final topology of the deployed Kubernetes cluster is shown in the figure below.

Note

This example skips the explanation of some common configurations such as hosts, network, and node template, as they are already covered in detail in the Getting started (step-by-step) guide.

Preset available

To export the preset configuration, run: kubitect export preset example-multi-worker

"},{"location":"examples/multi-worker-cluster/#multi-worker-cluster","title":"Multi-worker cluster","text":""},{"location":"examples/multi-worker-cluster/#step-1-cluster-configuration","title":"Step 1: Cluster configuration","text":"

You can easily create a cluster with multiple worker nodes by specifying them in the configuration file. For this example, we have included three worker nodes, but you can add as many as you like to suit your needs.

multi-worker.yaml
cluster:\n  ...\n  nodes:\n    master:\n      instances:\n        - id: 1\n          ip: 192.168.113.10 # (1)!\n    worker:\n      instances:\n        - id: 1\n          ip: 192.168.113.21\n        - id: 7\n          ip: 192.168.113.27\n        - id: 99\n
  1. Static IP address of the node. If the ip property is omitted, the DHCP lease is requested when the cluster is created.
Final cluster configuration multi-worker.yaml
hosts:\n  - name: localhost\n    connection:\n      type: local\n\ncluster:\n  name: k8s-cluster\n  network:\n    mode: nat\n    cidr: 192.168.113.0/24\n  nodeTemplate:\n    user: k8s\n    updateOnBoot: true\n    ssh:\n      addToKnownHosts: true\n    os:\n      distro: ubuntu\n  nodes:\n    master:\n      instances:\n        - id: 1\n          ip: 192.168.113.10\n    worker:\n      instances:\n        - id: 1\n          ip: 192.168.113.21\n        - id: 7\n          ip: 192.168.113.27\n        - id: 99\n\nkubernetes:\n  version: v1.27.5\n  networkPlugin: calico\n
"},{"location":"examples/multi-worker-cluster/#step-2-applying-the-configuration","title":"Step 2: Applying the configuration","text":"

To deploy a cluster, apply the configuration file:

kubitect apply --config multi-worker.yaml\n
"},{"location":"examples/network-bridge/","title":"Network bridge","text":"

Bridged networks allow virtual machines to connect directly to the LAN. To use Kubitect with bridged network mode, a bridge interface must be preconfigured on the host machine. This example shows how to configure a simple bridge interface using Netplan.

"},{"location":"examples/network-bridge/#network-bridge","title":"Network bridge","text":""},{"location":"examples/network-bridge/#step-1-preconfigure-the-bridge-on-the-host","title":"Step 1 - (Pre)configure the bridge on the host","text":"

Before the network bridge can be created, the name of the host's network interface is required. This interface will be used by the bridge.

To print the available network interfaces of the host, use the following command.

nmcli device | grep ethernet\n

Alternatively, network interfaces can be listed using the ifconfig or ip commands. Note that these commands output all interfaces, including virtual ones.

ifconfig -a\n# or\nip a\n

Once you have obtained the name of the host's network interface (in our case eth0), you can create a bridge interface (in our case br0) by creating a file with the following content: /etc/netplan/bridge0.yaml

network:\n  version: 2\n  renderer: networkd\n  ethernets:\n    eth0: {} # (1)!\n  bridges:\n    br0: # (2)!\n      interfaces:\n        - eth0\n      dhcp4: true\n      dhcp6: false\n      addresses: # (3)!\n        - 10.10.0.17\n

  1. Existing host's ethernet interface to be enslaved.

  2. Custom name of the bridge interface.

  3. Optionally a static IP address can be set for the bridge interface.

Tip

See the official Netplan configuration examples for more advanced configurations.

Validate if the configuration is correctly parsed by Netplan.

sudo netplan generate\n

Apply the configuration.

sudo netplan apply\n

"},{"location":"examples/network-bridge/#step-2-disable-netfilter-on-the-host","title":"Step 2 - Disable netfilter on the host","text":"

The final step is to prevent packets traversing the bridge from being sent to iptables for processing.

 cat >> /etc/sysctl.conf <<EOF\n net.bridge.bridge-nf-call-ip6tables = 0\n net.bridge.bridge-nf-call-iptables = 0\n net.bridge.bridge-nf-call-arptables = 0\n EOF\n\n sysctl -p /etc/sysctl.conf\n

Tip

For more information, see the libvirt documentation.

"},{"location":"examples/network-bridge/#step-3-set-up-a-cluster-over-bridged-network","title":"Step 3 - Set up a cluster over bridged network","text":"

In the cluster configuration file, set the following variables:

  • cluster.network.mode to bridge,
  • cluster.network.cidr to the network CIDR of the LAN and
  • cluster.network.bridge to the name of the bridge you have created (br0 in our case)
cluster:\n  network:\n    mode: bridge\n    cidr: 10.10.13.0/24\n    bridge: br0\n...\n
"},{"location":"examples/rook-cluster/","title":"Rook cluster","text":"

This example demonstrates how to set up distributed storage with Rook. To achieve distributed storage, we add an additional data disk to each virtual machine, as depicted in the figure below. This additional data disk is utilized by Rook to provide reliable and scalable distributed storage solutions for the Kubernetes cluster.

"},{"location":"examples/rook-cluster/#rook-cluster","title":"Rook cluster","text":""},{"location":"examples/rook-cluster/#basic-setup","title":"Basic setup","text":""},{"location":"examples/rook-cluster/#step-1-define-data-resource-pool","title":"Step 1: Define data resource pool","text":"

To configure distributed storage with Rook, the data disks must be attached to the virtual machines. By default, each data disk is created in the main resource pool. However, it is also possible to configure additional resource pools and associate data disks with them later, depending on your requirements.

In this example, we define an additional resource pool named rook-pool. rook-sample.yaml

hosts:\n  - name: localhost\n    connection:\n      type: local\n    dataResourcePools:\n      - name: rook-pool\n

"},{"location":"examples/rook-cluster/#step-2-attach-data-disks","title":"Step 2: Attach data disks","text":"

After the data resource pool is configured, we are ready to allocate some data disks to the virtual machines.

rook-sample.yaml
cluster:\n  nodes:\n    worker:\n      instances:\n        - id: 1\n          dataDisks:\n            - name: rook\n              pool: rook-pool # (1)!\n              size: 256\n        - id: 2\n          dataDisks:\n            - name: rook\n              pool: rook-pool\n              size: 256\n        - id: 3\n        - id: 4\n          dataDisks:\n            - name: rook\n              pool: rook-pool\n              size: 256\n            - name: test\n              pool: rook-pool\n              size: 32\n
  1. To create data disks in the main resource pool, either omit the pool property or set its value to main.
"},{"location":"examples/rook-cluster/#step-3-enable-rook-addon","title":"Step 3: Enable Rook addon","text":"

After configuring the disks and attaching them to the virtual machines, activating the Rook add-on is all that is required to utilize the distributed storage solution.

rook-sample.yaml
addons:\n  rook:\n    enabled: true\n

By default, Rook resources are provisioned on all worker nodes in the Kubernetes cluster, without any constraints. However, this behavior can be restricted using node selectors, which are explained later in the guide.

Final cluster configuration rook-sample.yaml
hosts:\n  - name: localhost\n    connection:\n      type: local\n    dataResourcePools:\n      - name: rook-pool\n\ncluster:\n  name: rook-cluster\n  network:\n    mode: nat\n    cidr: 192.168.113.0/24\n  nodeTemplate:\n    user: k8s\n    updateOnBoot: true\n    ssh:\n      addToKnownHosts: true\n    os:\n      distro: ubuntu\n  nodes:\n    master:\n      instances:\n        - id: 1\n    worker:\n      instances:\n        - id: 1\n          dataDisks:\n            - name: rook\n              pool: rook-pool\n              size: 256\n        - id: 2\n          dataDisks:\n            - name: rook\n              pool: rook-pool\n              size: 256\n        - id: 3\n        - id: 4\n          dataDisks:\n            - name: rook\n              pool: rook-pool\n              size: 256\n            - name: test\n              pool: rook-pool\n              size: 32\n\nkubernetes:\n  version: v1.27.5\n\naddons:\n  rook:\n    enabled: true\n
"},{"location":"examples/rook-cluster/#step-4-apply-the-configuration","title":"Step 4: Apply the configuration","text":"
kubitect apply --config rook-sample.yaml\n
"},{"location":"examples/rook-cluster/#node-selector","title":"Node selector","text":"

The node selector is a dictionary of labels and their potential values. The node selector restricts on which nodes Rook can be deployed, by selecting only those nodes that match all the specified labels.

"},{"location":"examples/rook-cluster/#step-1-set-node-labels","title":"Step 1: Set node labels","text":"

To use the node selector effectively, you should give your nodes custom labels.

In this example, we label all worker nodes with the label rook. To ensure that scaling the cluster does not subsequently affect Rook, we set the label's value to false by default. Only the nodes where Rook should be deployed are labeled rook: true, as shown in the figure below.

The following configuration snippet shows how to set a default label and override it for a particular instance.

rook-sample.yaml
cluster:\n  nodes:\n    worker:\n      default:\n        labels:\n          rook: false\n      instances:\n        - id: 1\n          labels:\n            rook: true # (1)!\n        - id: 2\n          labels:\n            rook: true\n        - id: 3\n          labels:\n            rook: true\n        - id: 4\n
  1. By default, the label rook: false is set for all worker nodes. Setting the label rook: true for this particular instance overrides the default label.
"},{"location":"examples/rook-cluster/#step-2-configure-a-node-selector","title":"Step 2: Configure a node selector","text":"

So far we have labeled all worker nodes, but labeling alone is not enough to prevent Rook from being deployed on all of them. To restrict the nodes on which Rook resources can be deployed, we need to configure a node selector.

We want to deploy Rook on the nodes labeled with the label rook: true, as shown in the figure below.

The following configuration snippet shows how to configure the node selector mentioned above.

rook-sample.yaml
addons:\n  rook:\n    enabled: true\n    nodeSelector:\n      rook: true\n
Final cluster configuration rook-sample.yaml
hosts:\n  - name: localhost\n    connection:\n      type: local\n    dataResourcePools:\n      - name: rook-pool\n\ncluster:\n  name: rook-cluster\n  network:\n    mode: nat\n    cidr: 192.168.113.0/24\n  nodeTemplate:\n    user: k8s\n    updateOnBoot: true\n    ssh:\n      addToKnownHosts: true\n    os:\n      distro: ubuntu\n  nodes:\n    master:\n      instances:\n        - id: 1\n    worker:\n      default:\n        labels:\n          rook: false\n      instances:\n        - id: 1\n          labels:\n            rook: true\n          dataDisks:\n            - name: rook\n              pool: rook-pool\n              size: 256\n        - id: 2\n          labels:\n            rook: true\n          dataDisks:\n            - name: rook\n              pool: rook-pool\n              size: 256\n        - id: 3\n          labels:\n            rook: true\n        - id: 4\n          dataDisks:\n            - name: rook\n              pool: rook-pool\n              size: 256\n            - name: test\n              pool: rook-pool\n              size: 32\n\nkubernetes:\n  version: v1.27.5\n\naddons:\n  rook:\n    enabled: true\n    nodeSelector:\n      rook: true\n
"},{"location":"examples/rook-cluster/#step-3-apply-the-configuration","title":"Step 3: Apply the configuration","text":"

To deploy a cluster, apply the configuration file:

kubitect apply --config rook-sample.yaml\n
"},{"location":"examples/single-node-cluster/","title":"Single node cluster","text":"

This example demonstrates how to set up a single-node Kubernetes cluster using Kubitect. In a single-node cluster, only one master node needs to be configured. The topology of the Kubernetes cluster deployed in this guide is shown below.

Note

This example skips the explanation of some common configurations such as hosts, network, and node template, as they are already covered in detail in the Getting started (step-by-step) guide.

Preset available

To export the preset configuration, run: kubitect export preset example-single-node

"},{"location":"examples/single-node-cluster/#single-node-cluster","title":"Single node cluster","text":""},{"location":"examples/single-node-cluster/#step-1-create-the-configuration","title":"Step 1: Create the configuration","text":"

To initialize a single-node Kubernetes cluster, you need to specify a single master node in the cluster configuration file.

single-node.yaml
cluster:\n  ...\n  nodes:\n    master:\n      instances:\n        - id: 1\n          ip: 192.168.113.10 # (1)!\n
  1. Static IP address of the node. If the ip property is omitted, the DHCP lease is requested when the cluster is created.

When no worker nodes are specified, master nodes are marked as schedulable, which makes them behave as both master and worker nodes. This means that the single master node in the cluster will perform both the control plane functions of a Kubernetes master node and the data plane functions of a worker node.
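
Once the cluster is deployed, one quick way to confirm that the master node accepts regular workloads is to check that it carries no NoSchedule taint. This is only a sketch: the node name assumes the <cluster.name>-<node.type>-<node.instance.id> naming pattern, and the kubeconfig path assumes you have exported it as shown in the Getting started guide.

kubectl describe node k8s-cluster-master-1 --kubeconfig kubeconfig.yaml | grep Taints\n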

Final cluster configuration single-node.yaml
hosts:\n  - name: localhost\n    connection:\n      type: local\n\ncluster:\n  name: k8s-cluster\n  network:\n    mode: nat\n    cidr: 192.168.113.0/24\n  nodeTemplate:\n    user: k8s\n    updateOnBoot: true\n    ssh:\n      addToKnownHosts: true\n    os:\n      distro: ubuntu\n  nodes:\n    master:\n      default:\n        ram: 4\n        cpu: 2\n        mainDiskSize: 32\n      instances:\n        - id: 1\n          ip: 192.168.113.10\n\nkubernetes:\n  version: v1.27.5\n  networkPlugin: calico\n
"},{"location":"examples/single-node-cluster/#step-2-applying-the-configuration","title":"Step 2: Applying the configuration","text":"

To deploy a cluster, apply the configuration file:

kubitect apply --config single-node.yaml\n
"},{"location":"getting-started/getting-started/","title":"Getting started (step-by-step)","text":"

In the quick start guide, we learned how to create a Kubernetes cluster using a preset configuration. Now, we will explore how to create a customized cluster topology that meets your specific requirements.

This step-by-step guide will walk you through the process of creating a custom cluster configuration file from scratch and using it to create a functional Kubernetes cluster with one master and one worker node. By following the steps outlined in this guide, you will have a Kubernetes cluster up and running in no time.

"},{"location":"getting-started/getting-started/#getting-started","title":"Getting Started","text":""},{"location":"getting-started/getting-started/#step-1-ensure-all-requirements-are-met","title":"Step 1 - Ensure all requirements are met","text":"

Before progressing with this guide, take a minute to ensure that all of the requirements are met. Afterwards, simply create a new YAML file and open it in a text editor of your choice.

"},{"location":"getting-started/getting-started/#step-2-prepare-hosts-configuration","title":"Step 2 - Prepare hosts configuration","text":"

In the cluster configuration file, the first step is to define hosts. Hosts represent target servers that can be either local or remote machines.

LocalhostRemote host

When setting up the cluster on your local host, where the command line tool is installed, be sure to specify a host with a connection type set to local.

kubitect.yaml
hosts:\n  - name: localhost # (1)!\n    connection:\n      type: local\n
  1. Custom unique name of the host.

In case the cluster is deployed on a remote host, you will be required to provide the IP address of the remote machine along with the SSH credentials.

kubitect.yaml
hosts:\n  - name: my-remote-host\n    connection:\n      type: remote\n      user: myuser\n      ip: 10.10.40.143 # (1)!\n      ssh:\n        keyfile: \"~/.ssh/id_rsa_server1\" # (2)!\n
  1. IP address of the remote host.

  2. Path to the password-less SSH key file required for establishing a connection with the remote host.
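
If you do not yet have a password-less SSH key for the remote host, one can be generated with ssh-keygen, for example (the key path below matches the snippet above and is only an illustration):

ssh-keygen -t rsa -N \"\" -f ~/.ssh/id_rsa_server1\n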

Throughout this guide, only localhost will be used.

"},{"location":"getting-started/getting-started/#step-3-define-cluster-infrastructure","title":"Step 3 - Define cluster infrastructure","text":"

The second part of the configuration file consists of the cluster infrastructure. In this part, all cluster nodes are defined along with their properties such as operating system, CPU cores, amount of RAM and so on.

Below is an image that visualizes the components of the final cluster.

Let's shift our attention to the following configuration:

kubitect.yaml
cluster:\n  name: k8s-cluster\n  network:\n    ...\n  nodeTemplate:\n    ...\n  nodes:\n    ...\n

As we can see, the cluster infrastructure section consists of the cluster name and three subsections:

  • cluster.name

    The cluster name is used as a prefix for each resource created by Kubitect. It's an essential property that helps identify and manage resources created by Kubitect.

  • cluster.network

    The network subsection holds information about the network properties of the cluster. It defines the IP address range, the mode of networking, and other network-specific properties that apply to the entire cluster.

  • cluster.nodeTemplate

    The node template subsection contains properties that apply to all nodes in the cluster, such as the operating system, SSH user, and SSH private key.

  • cluster.nodes

    The nodes subsection defines each node in our cluster. This subsection includes information such as the node name, node type, and other node-specific properties.

Now that we have a general idea of the cluster infrastructure configuration, let's examine each of these subsections in more detail to understand how to define them properly and configure a Kubernetes cluster using Kubitect.

"},{"location":"getting-started/getting-started/#step-31-cluster-network","title":"Step 3.1 - Cluster network","text":"

In the network subsection of the Kubernetes configuration file, we need to define the network that our cluster will use. Currently, there are two supported network modes - NAT or bridge.

The nat network mode creates a virtual network that performs network address translation. This mode allows the use of IP address ranges that do not exist within our local area network (LAN).

On the other hand, the bridge network mode uses a predefined bridge interface, allowing virtual machines to connect directly to the LAN. This mode is mandatory when the cluster spans multiple hosts.

For the sake of simplicity, this tutorial will use the NAT mode as it does not require a preconfigured bridge interface.

kubitect.yaml
cluster:\n  ...\n  network:\n    mode: nat\n    cidr: 192.168.113.0/24\n

The above configuration will instruct Kubitect to create a virtual network that uses the 192.168.113.0/24 IP range.

"},{"location":"getting-started/getting-started/#step-32-node-template","title":"Step 3.2 - Node template","text":"

The nodeTemplate subsection allows you to define general properties for all nodes in the cluster. While there are no required fields, there are several useful properties you may want to include.

  • user

    This property specifies the name of the user that will be created on all virtual machines and used for SSH. (default: k8s)

  • os.distro

    This property defines the operating system for the nodes. By default, the nodes use the latest Ubuntu 22.04 release. To explore other available distributions, please refer to the OS Distribution section in the node template of our user guide.

  • ssh.addToKnownHosts

    When this property is set to true, all nodes will be added to SSH known hosts. If you later destroy the cluster, these nodes will also be removed from the known hosts.

  • updateOnBoot

    This property determines whether the virtual machines are updated on first boot.

To illustrate, let's set these nodeTemplate properties in our configuration file:

kubitect.yaml
cluster:\n  ...\n  nodeTemplate:\n    user: k8s\n    updateOnBoot: true\n    ssh:\n      addToKnownHosts: true\n    os:\n      distro: ubuntu22\n
"},{"location":"getting-started/getting-started/#step-33-cluster-nodes","title":"Step 3.3 - Cluster nodes","text":"

In the nodes subsection, we define all nodes that will form the cluster. Each node can be defined as one of the following three types:

  • worker

    A worker node runs the applications and workloads that are deployed in the cluster. It communicates with the master node to receive instructions on how to schedule and run the containers.

  • master

    Master nodes are responsible for managing and coordinating the worker nodes in the cluster. Therefore, each cluster must contain at least one master node.

    Since the etcd key-value datastore is also present on these nodes, the number of master nodes must be odd. For more information, see etcd FAQ.

  • loadBalancer

    These nodes serve as internal load balancers that expose the Kubernetes control plane at a single endpoint. They are essential when more than one master node is configured in the cluster.

This guide is focused on deploying a Kubernetes cluster with only one master node, which eliminates the need for internal load balancers. However, if you are interested in creating a multi-master or high-availability (HA) cluster, please refer to the corresponding examples.

To better understand this part, let's take a look at an example configuration:

kubitect.yaml
cluster:\n  ...\n  nodes:\n    master:\n      default: # (1)!\n        ram: 4 # (2)!\n        cpu: 2 # (3)!\n        mainDiskSize: 32 # (4)!\n      instances: # (5)!\n        - id: 1 # (6)!\n          ip: 192.168.113.10 # (7)!\n    worker:\n      default:\n        ram: 8\n        cpu: 2\n        mainDiskSize: 32\n      instances:\n        - id: 1\n          ip: 192.168.113.21\n          ram: 4 # (8)!\n
  1. Default properties are applied to all nodes of the same type, which in this case are the master nodes. They are particularly useful to quickly configure multiple nodes of the same type.

  2. The amount of RAM allocated to the master nodes (in GiB).

  3. The number of virtual CPUs assigned to each master node.

  4. The size of the virtual disk attached to each master node (in GiB).

  5. A list of master node instances.

  6. The instance ID is the only required field that must be specified for each instance.

  7. A static IP address set for this particular instance. If the ip property is omitted, the node requests a DHCP lease during creation.

  8. In this example, the amount of RAM allocated to the worker node instance is set to 4 GiB, which overwrites the default value of 8 GiB.

"},{"location":"getting-started/getting-started/#step-34-kubernetes-properties","title":"Step 3.4 - Kubernetes properties","text":"

The final section of the cluster configuration contains the Kubernetes properties, such as the version and network plugin.

kubitect.yaml
kubernetes:\n  version: v1.27.5\n  networkPlugin: calico\n
"},{"location":"getting-started/getting-started/#step-4-create-the-cluster","title":"Step 4 - Create the cluster","text":"

Below is the final configuration for our Kubernetes cluster:

Final cluster configuration kubitect.yaml
hosts:\n  - name: localhost\n    connection:\n      type: local\n\ncluster:\n  name: k8s-cluster\n  network:\n    mode: nat\n    cidr: 192.168.113.0/24\n  nodeTemplate:\n    user: k8s\n    updateOnBoot: true\n    ssh:\n      addToKnownHosts: true\n    os:\n      distro: ubuntu22\n  nodes:\n    master:\n      default:\n        ram: 4\n        cpu: 2\n        mainDiskSize: 32\n      instances:\n        - id: 1\n          ip: 192.168.113.10\n    worker:\n      default:\n        ram: 8\n        cpu: 2\n        mainDiskSize: 32\n      instances:\n        - id: 1\n          ip: 192.168.113.21\n          ram: 4\n\nkubernetes:\n  version: v1.27.5\n  networkPlugin: calico\n

To create the cluster, apply the configuration file to Kubitect:

kubitect apply --config kubitect.yaml\n

Tip

If you encounter any issues during the installation process, please refer to the troubleshooting page first.

After applying the configuration file to Kubitect, a directory for the created Kubernetes cluster is generated and stored in Kubitect's home directory. The default location of the home directory is ~/.kubitect, and it has the following structure.

~/.kubitect\n   \u251c\u2500\u2500 clusters\n   \u2502   \u251c\u2500\u2500 k8s-cluster\n   \u2502   \u251c\u2500\u2500 my-cluster\n   \u2502   \u2514\u2500\u2500 ...\n   \u2514\u2500\u2500 share\n       \u251c\u2500\u2500 terraform\n       \u2514\u2500\u2500 venv\n

The clusters directory contains a subdirectory for each Kubernetes cluster that you have created using Kubitect. Each subdirectory is named after the cluster, for example k8s-cluster. The configuration files for each cluster are stored in these directories.

The share directory contains files and directories that are shared between different cluster installations.

All created clusters can be listed at any time using the list subcommand.

kubitect list clusters\n\n# Clusters:\n#   - k8s-cluster (active)\n#   - my-cluster (active)\n

"},{"location":"getting-started/getting-started/#step-5-test-the-cluster","title":"Step 5 - Test the cluster","text":"

Once you have successfully installed a Kubernetes cluster, the Kubeconfig file can be found in the cluster's directory. However, you will most likely want to export the Kubeconfig to a separate file:

kubitect export kubeconfig --cluster k8s-cluster > kubeconfig.yaml\n

This will create a file named kubeconfig.yaml in your current directory. Finally, to confirm that the cluster is ready, you can list its nodes using the kubectl command:

kubectl get nodes --kubeconfig kubeconfig.yaml\n

Congratulations, you have completed the getting started guide.

"},{"location":"getting-started/installation/","title":"Installation","text":""},{"location":"getting-started/installation/#installation","title":"Installation","text":""},{"location":"getting-started/installation/#install-kubitect-cli-tool","title":"Install Kubitect CLI tool","text":"

Download the Kubitect binary file from the release page.

curl -o kubitect.tar.gz -L https://dl.kubitect.io/linux/amd64/latest\n

Unpack the tar.gz file.

tar -xzf kubitect.tar.gz\n

Install the Kubitect command line tool by placing the binary file in the /usr/local/bin directory.

sudo mv kubitect /usr/local/bin/\n

Note

The download URL is a combination of the operating system type, system architecture and version of Kubitect (https://dl.kubitect.io/<os>/<arch>/<version>).
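
For example, to download a specific release instead of the latest one, substitute the version in the URL (v3.3.0 is used here purely as an illustration):

curl -o kubitect.tar.gz -L https://dl.kubitect.io/linux/amd64/v3.3.0\n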

All releases can be found on GitHub release page.

Verify the installation by checking the Kubitect version.

kubitect --version\n\n# kubitect version v3.3.0\n

"},{"location":"getting-started/installation/#enable-shell-autocomplete","title":"Enable shell autocomplete","text":"

Tip

To list all supported shells, run: kubitect completion -h

For shell specific instructions run: kubitect completion shell -h

BashZsh

This script depends on the bash-completion package. If it is not installed already, you can install it via your OS's package manager.

To load completions in your current shell session:

source <(kubitect completion bash)\n

To load completions for every new session, execute once:

Linux:

kubitect completion bash > /etc/bash_completion.d/kubitect\n

macOS:

kubitect completion bash > $(brew --prefix)/etc/bash_completion.d/kubitect\n

If shell completion is not already enabled in your environment, you will need to enable it. You can execute the following once:

echo \"autoload -U compinit; compinit\" >> ~/.zshrc\n

To load completions in your current shell session:

source <(kubitect completion zsh); compdef _kubitect kubitect\n

To load completions for every new session, execute once:

Linux:

kubitect completion zsh > \"${fpath[1]}/_kubitect\"\n

macOS:

kubitect completion zsh > $(brew --prefix)/share/zsh/site-functions/_kubitect\n

"},{"location":"getting-started/quick-start/","title":"Quick start","text":"

In this quick guide, we will show you how to use the Kubitect command line tool to quickly deploy a simple Kubernetes cluster.

To get started, you will need to apply a cluster configuration file to the Kubitect command line tool. You can either prepare this file manually, as explained in our Getting started guide, or use one of the available presets.

For the purposes of this quick start guide, we will be using a getting-started preset, which defines a cluster with one master and one worker node. The resulting infrastructure is shown in the image below.

"},{"location":"getting-started/quick-start/#quick-start","title":"Quick start","text":""},{"location":"getting-started/quick-start/#step-1-create-a-kubernetes-cluster","title":"Step 1 - Create a Kubernetes cluster","text":"

Export the getting-started preset:

kubitect export preset --name getting-started > cluster.yaml\n

Then, apply the exported configuration file to Kubitect:

kubitect apply --config cluster.yaml\n

That's it! The cluster, named k8s-cluster, should be up and running in approximately 10 minutes.

"},{"location":"getting-started/quick-start/#step-2-export-kubeconfig","title":"Step 2 - Export kubeconfig","text":"

After successfully installing the Kubernetes cluster, a Kubeconfig file will be created within the cluster's directory. To export the Kubeconfig to a custom file, use the following command:

kubitect export kubeconfig --cluster k8s-cluster > kubeconfig.yaml\n
"},{"location":"getting-started/quick-start/#step-3-test-the-cluster","title":"Step 3 - Test the cluster","text":"

To test that the cluster is up and running, display all cluster nodes using the exported Kubeconfig and the kubectl command:

kubectl get nodes --kubeconfig kubeconfig.yaml\n

Congratulations, you have successfully deployed a Kubernetes cluster using Kubitect!

"},{"location":"getting-started/requirements/","title":"Requirements","text":"

On the local host (where the Kubitect command-line tool is installed), the following requirements must be met:

Git

Python >= 3.8

Python virtualenv

Password-less SSH key for each remote host
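
A quick way to confirm that these tools are available on the local host is to print their versions:

git --version\npython3 --version\nvirtualenv --version\n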

On hosts where a Kubernetes cluster will be deployed using Kubitect, the following requirements must be met:

A libvirt virtualization API

A running hypervisor that is supported by libvirt (e.g. KVM)

How to install KVM?

To install the KVM (Kernel-based Virtual Machine) hypervisor and libvirt, use apt or yum to install the following packages:

  • qemu-kvm
  • libvirt-clients
  • libvirt-daemon
  • libvirt-daemon-system
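
For example, on Debian-based distributions the listed packages can be installed with apt (package names may differ slightly on other distributions):

sudo apt install qemu-kvm libvirt-clients libvirt-daemon libvirt-daemon-system\n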

After the installation, add your user to the kvm group in order to access the kvm device:

sudo usermod -aG kvm $USER\n
"},{"location":"getting-started/requirements/#requirements","title":"Requirements","text":""},{"location":"getting-started/other/local-development/","title":"Local development","text":"

This document shows how to build a CLI tool manually and how to use the project without creating any files outside the project's directory.

"},{"location":"getting-started/other/local-development/#local-development","title":"Local development","text":""},{"location":"getting-started/other/local-development/#prerequisites","title":"Prerequisites","text":"
  • Git
  • Go 1.18 or greater
"},{"location":"getting-started/other/local-development/#step-1-clone-the-project","title":"Step 1: Clone the project","text":"

First, clone the project.

git clone https://github.com/MusicDin/kubitect\n

Afterwards, move into the cloned project.

cd kubitect\n

"},{"location":"getting-started/other/local-development/#step-2-build-kubitect-cli-tool","title":"Step 2: Build Kubitect CLI tool","text":"

The Kubitect CLI tool can be manually built using Go. Running the following command will produce a kubitect binary file.

go build .\n

To make the binary file globally accessible, move it to the /usr/local/bin/ directory.

sudo mv kubitect /usr/local/bin/kubitect\n

"},{"location":"getting-started/other/local-development/#step-3-local-development","title":"Step 3: Local development","text":"

By default, Kubitect creates and manages clusters in its home directory (~/.kubitect). However, for development purposes, it is often more convenient to have all resources created in the current directory.

If you want to create a new cluster in the current directory, you can use the --local flag when applying the configuration. When you create a cluster using the --local flag, its name will be prefixed with local. This prefix is added to prevent any conflicts that might arise when creating new virtual resources.

kubitect apply --local\n

The resulting cluster will be created in ./.kubitect/clusters/local-<cluster-name> directory.

"},{"location":"getting-started/other/troubleshooting/","title":"Troubleshooting","text":"

Is your issue not listed here?

If the troubleshooting page is missing an error you encountered, please report it on GitHub by opening an issue. By doing so, you will help improve the project and help others find the solution to the same problem faster.

"},{"location":"getting-started/other/troubleshooting/#troubleshooting","title":"Troubleshooting","text":""},{"location":"getting-started/other/troubleshooting/#general-errors","title":"General errors","text":""},{"location":"getting-started/other/troubleshooting/#virtualenv-not-found","title":"Virtualenv not found","text":"Error Explanation Solution

Error

Output: /bin/sh: 1: virtualenv: not found

/bin/sh: 2: ansible-playbook: not found

Explanation

The error indicates that the virtualenv is not installed.

Solution

There are many ways to install virtualenv. For all installation options, refer to the official documentation - Virtualenv installation.

For example, virtualenv can be installed using pip.

First install pip.

sudo apt install python3-pip\n

Then install virtualenv using pip3.

pip3 install virtualenv\n

"},{"location":"getting-started/other/troubleshooting/#kvmlibvirt-errors","title":"KVM/Libvirt errors","text":""},{"location":"getting-started/other/troubleshooting/#failed-to-connect-socket-no-such-file-or-directory","title":"Failed to connect socket (No such file or directory)","text":"Error Explanation Solution

Error

Error: virError(Code=38, Domain=7, Message='Failed to connect socket to '/var/run/libvirt/libvirt-sock': No such file or directory')

Explanation

The problem may occur when libvirt is not started.

Solution

Make sure that the libvirt service is running:

sudo systemctl status libvirtd\n

If the libvirt service is not running, start it:

sudo systemctl start libvirtd\n

Optional: Start the libvirt service automatically at boot time:

sudo systemctl enable libvirtd\n

"},{"location":"getting-started/other/troubleshooting/#failed-to-connect-socket-permission-denied","title":"Failed to connect socket (Permission denied)","text":"Error Explanation Solution

Error

Error: virError(Code=38, Domain=7, Message='Failed to connect socket to '/var/run/libvirt/libvirt-sock': Permission denied')

Explanation

The error indicates that either the libvirtd service is not running or the current user is not in the libvirt (or kvm) group.

Solution

If the libvirtd service is not running, start it:

sudo systemctl start libvirtd\n

Add the current user to the libvirt and kvm groups if needed:

# Add current user to groups\nsudo adduser $USER libvirt\nsudo adduser $USER kvm\n\n# Verify groups are added\nid -nG\n\n# Reload user session\n

"},{"location":"getting-started/other/troubleshooting/#error-creating-libvirt-domain","title":"Error creating libvirt domain","text":"Error Explanation Solution

Error

Error: Error creating libvirt domain: \u2026 Could not open '/tmp/terraform_libvirt_provider_images/image.qcow2': Permission denied')

Explanation

The error indicates that the file cannot be created in the specified location due to missing permissions.

  • Make sure the directory exists.
  • Make sure the directory of the file that is being denied has appropriate user permissions.
  • Optionally, the QEMU security driver can be disabled.

Solution

Make sure the security_driver in /etc/libvirt/qemu.conf is set to none instead of selinux. This line is commented out by default, so you should uncomment it if needed:

# /etc/libvirt/qemu.conf\n\n...\nsecurity_driver = \"none\"\n...\n

Do not forget to restart the libvirt service after making the changes:

sudo systemctl restart libvirtd\n

"},{"location":"getting-started/other/troubleshooting/#libvirt-domain-already-exists","title":"Libvirt domain already exists","text":"Error Explanation Solution

Error

Error: Error defining libvirt domain: virError(Code=9, Domain=20, Message='operation failed: domain 'your-domain' already exists with uuid '...')

Explanation

The error indicates that the libvirt domain (virtual machine) already exists.

Solution

The resource you are trying to create already exists. Make sure you destroy the resource:

virsh destroy your-domain\nvirsh undefine your-domain\n

You can verify that the domain was successfully removed:

virsh dominfo --domain your-domain\n

If the domain was successfully removed, the output should look something like this:

error: failed to get domain 'your-domain'

"},{"location":"getting-started/other/troubleshooting/#libvirt-volume-already-exists","title":"Libvirt volume already exists","text":"Error Explanation Solution

Error

Error: Error creating libvirt volume: virError(Code=90, Domain=18, Message='storage volume 'your-volume.qcow2' exists already')

and / or

Error:Error creating libvirt volume for cloudinit device cloud-init.iso: virError(Code=90, Domain=18, Message='storage volume 'cloud-init.iso' exists already')

Explanation

The error indicates that the specified volume already exists.

Solution

Volumes created by Libvirt are still attached to the images, which prevents a new volume with the same name from being created. Therefore, these volumes must be removed:

virsh vol-delete cloud-init.iso --pool your_resource_pool

and / or

virsh vol-delete your-volume.qcow2 --pool your_resource_pool

"},{"location":"getting-started/other/troubleshooting/#libvirt-storage-pool-already-exists","title":"Libvirt storage pool already exists","text":"Error Explanation Solution

Error

Error: Error storage pool 'your-pool' already exists

Explanation

The error indicates that the libvirt storage pool already exists.

Solution

Remove the existing libvirt storage pool.

virsh pool-destroy your-pool && virsh pool-undefine your-pool

"},{"location":"getting-started/other/troubleshooting/#failed-to-apply-firewall-rules","title":"Failed to apply firewall rules","text":"Error Explanation Solution

Error

Error: internal error: Failed to apply firewall rules /sbin/iptables -w --table filter --insert LIBVIRT_INP --in-interface virbr2 --protocol tcp --destination-port 67 --jump ACCEPT: iptables: No chain/target/match by that name.

Explanation

Libvirt was already running when the firewall (usually FirewallD) was started or installed. Therefore, the libvirtd service must be restarted to detect the changes.

Solution

Restart the libvirtd service:

sudo systemctl restart libvirtd\n

"},{"location":"getting-started/other/troubleshooting/#failed-to-remove-storage-pool","title":"Failed to remove storage pool","text":"Error Explanation Solution

Error

Error: error deleting storage pool: failed to remove pool '/var/lib/libvirt/images/k8s-cluster-main-resource-pool': Directory not empty

Explanation

The pool cannot be deleted because it still contains volumes. Therefore, the volumes must be removed before the pool can be deleted.

Solution

  1. Make sure the pool is running.

    virsh pool-start --pool k8s-cluster-main-resource-pool\n

  2. List volumes in the pool.

    virsh vol-list --pool k8s-cluster-main-resource-pool\n\n#  Name         Path\n# -------------------------------------------------------------------------------------\n#  base_volume  /var/lib/libvirt/images/k8s-cluster-main-resource-pool/base_volume\n

  3. Delete listed volumes from the pool.

    virsh vol-delete --pool k8s-cluster-main-resource-pool --vol base_volume\n

  4. Destroy and undefine the pool.

    virsh pool-destroy --pool k8s-cluster-main-resource-pool\nvirsh pool-undefine --pool k8s-cluster-main-resource-pool\n

"},{"location":"getting-started/other/troubleshooting/#haproxy-load-balancer-errors","title":"HAProxy load balancer errors","text":""},{"location":"getting-started/other/troubleshooting/#random-haproxy-503-bad-gateway","title":"Random HAProxy (503) bad gateway","text":"Error Explanation Solution

Error

HAProxy randomly returns HTTP 503 (Service Unavailable) errors.

Explanation

More than one HAProxy process is listening on the same port.

Solution 1

For example, if an error is thrown when accessing port 80, check which processes are listening on port 80 on the load balancer VM:

netstat -lnput | grep 80\n\n# Proto Recv-Q Send-Q Local Address           Foreign Address   State       PID/Program name\n# tcp        0      0 192.168.113.200:80      0.0.0.0:*         LISTEN      1976/haproxy\n# tcp        0      0 192.168.113.200:80      0.0.0.0:*         LISTEN      1897/haproxy\n

If you see more than one process, kill the unnecessary process:

kill 1976\n

Note: You can kill all HAProxy processes and only one will be automatically recreated.
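
For instance, to kill all of them at once, something along these lines can be used (one process should then be recreated automatically, as noted above):

sudo pkill haproxy\n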

Solution 2

Check that the HAProxy configuration file (config/haproxy/haproxy.cfg) does not contain two frontends bound to the same port.
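
One way to quickly inspect the bound ports is to list the frontend and bind directives (a simple sketch using grep):

grep -E 'frontend|bind' config/haproxy/haproxy.cfg\n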

"},{"location":"user-guide/before-you-begin/","title":"Before you begin","text":"

The user guide is divided into three subsections: Cluster Management, Configuration and Reference. The Cluster Management subsection introduces the operations that can be performed over the cluster. The Configuration subsection contains explanations of the configurable Kubitect properties. Finally, the Reference subsection contains a configuration and CLI reference.

The following symbol conventions are used throughout the user guide:

  • - Indicates the Kubitect version in which the property was either added or last modified.
  • - Indicates that the property is required in every valid configuration.
  • - Indicates the default value of the property.
  • - Indicates that the feature or property is experimental (not yet stable). This means that its implementation may change drastically over time and that its activation may lead to unexpected behavior.
"},{"location":"user-guide/before-you-begin/#before-you-begin","title":"Before you begin","text":""},{"location":"user-guide/configuration/addons/","title":"Addons","text":""},{"location":"user-guide/configuration/addons/#addons","title":"Addons","text":""},{"location":"user-guide/configuration/addons/#configuration","title":"Configuration","text":""},{"location":"user-guide/configuration/addons/#kubespray-addons","title":"Kubespray addons","text":"

v2.1.0

Kubespray provides a variety of configurable addons to enhance the functionality of Kubernetes. Some popular addons include the Ingress-NGINX controller and MetalLB.

Kubespray addons can be configured under the addons.kubespray property. It's important to note that the Kubespray addons are configured in the same way as they would be for Kubespray itself, as Kubitect copies the provided configuration into Kubespray's group variables during cluster creation.

The full range of available addons can be explored in the Kubespray addons sample, which is available on GitHub. Most addons are also documented in the official Kubespray documentation.

addons:\n  kubespray:\n\n    # Nginx ingress controller deployment\n    ingress_nginx_enabled: true\n    ingress_nginx_namespace: \"ingress-nginx\"\n    ingress_nginx_insecure_port: 80\n    ingress_nginx_secure_port: 443\n\n    # MetalLB deployment\n    metallb_enabled: true\n    metallb_speaker_enabled: true\n    metallb_ip_range:\n      - \"10.10.9.201-10.10.9.254\"\n    metallb_pool_name: \"default\"\n    metallb_auto_assign: true\n    metallb_version: v0.12.1\n    metallb_protocol: \"layer2\"\n
"},{"location":"user-guide/configuration/addons/#rook-addon","title":"Rook addon","text":"

v2.2.0 Experimental

Rook is an orchestration tool that integrates Ceph with Kubernetes. Ceph is a highly reliable and scalable storage solution, and Rook simplifies its management by automating the deployment, scaling and management of Ceph clusters.

To enable Rook in Kubitect, set addons.rook.enabled property to true.

addons:\n  rook:\n    enabled: true\n

Note that Rook is deployed only on worker nodes. When a cluster is created without worker nodes, Kubitect attempts to install Rook on the master nodes. In addition to enabling the Rook addon, at least one data disk must be attached to a node suitable for Rook deployment. If Kubitect determines that no data disks are available for Rook, it will skip installing Rook.

"},{"location":"user-guide/configuration/addons/#node-selector","title":"Node selector","text":"

The node selector is a dictionary of node labels used to determine which nodes are eligible for Rook deployment. If a node does not match all of the specified node labels, Rook resources cannot be deployed on that node and disks attached to that node are not used for distributed storage.

addons:\n  rook:\n    nodeSelector:\n      rook: true\n
"},{"location":"user-guide/configuration/addons/#version","title":"Version","text":"

By default, Kubitect uses the latest (master) version of Rook. If you want to use a specific version of Rook, you can set the addons.rook.version property to the desired version.

addons:\n  rook:\n    version: v1.11.3\n
"},{"location":"user-guide/configuration/cluster-name/","title":"Cluster name","text":""},{"location":"user-guide/configuration/cluster-name/#cluster-metadata","title":"Cluster metadata","text":""},{"location":"user-guide/configuration/cluster-name/#configuration","title":"Configuration","text":""},{"location":"user-guide/configuration/cluster-name/#cluster-name","title":"Cluster name","text":"

v2.0.0 Required

The cluster name must be defined in the Kubitect configuration, as it acts as a prefix for all cluster resources.

cluster:\n  name: my-cluster\n

For instance, each virtual machine name is generated as <cluster.name>-<node.type>-<node.instance.id>. Therefore, the name of the virtual machine for the master node with ID 1 would be my-cluster-master-1.
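
If you want to double-check the generated names, the libvirt domains can be listed directly on the host (a sketch; my-cluster is the cluster name from the example above):

virsh list --all | grep my-cluster\n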

Note

The cluster name cannot contain the prefix local, as it is reserved for local clusters (created with the --local flag).

"},{"location":"user-guide/configuration/cluster-network/","title":"Cluster network","text":"

Network section of the Kubitect configuration file defines the properties of the network to be created or the network to which the cluster nodes are to be assigned.

"},{"location":"user-guide/configuration/cluster-network/#cluster-network","title":"Cluster network","text":""},{"location":"user-guide/configuration/cluster-network/#configuration","title":"Configuration","text":""},{"location":"user-guide/configuration/cluster-network/#network-mode","title":"Network mode","text":"

v2.0.0 Required

Kubitect supports two network modes: NAT and bridge.

cluster:\n  network:\n    mode: nat\n
"},{"location":"user-guide/configuration/cluster-network/#nat-mode","title":"NAT mode","text":"

In NAT (Network Address Translation) mode, the libvirt virtual network is created for the cluster, which reduces the need for manual configurations. However, it's limited to a single host, i.e., a single physical server.

"},{"location":"user-guide/configuration/cluster-network/#bridge-mode","title":"Bridge mode","text":"

In bridge mode, a real host network device is shared with the virtual machines, allowing each virtual machine to bind to any available IP address on the local network, just like a physical computer. This approach makes the virtual machine visible on the network, enabling the creation of clusters across multiple physical servers.

To use bridged networks, you need to preconfigure the bridge interface on each target host. This is necessary because each environment is unique. For instance, you might use link aggregation (also known as link bonding or teaming), which cannot be detected automatically and therefore requires manual configuration. The Network bridge example provides instructions on how to create a bridge interface with netplan and configure Kubitect to use it.

"},{"location":"user-guide/configuration/cluster-network/#network-cidr","title":"Network CIDR","text":"

v2.0.0 Required

The network CIDR (Classless Inter-Domain Routing) represents the network in the form of <network_ip>/<network_prefix_bits>. All IP addresses specified in the cluster section of the configuration must be within this network range, including the network gateway, node instances, floating IP of the load balancer, and so on.

In NAT network mode, the network CIDR defines an unused private network that is created. In bridge mode, the network CIDR should specify the network to which the cluster belongs.

cluster:\n  network:\n    cidr: 192.168.113.0/24 # (1)!\n
  1. In nat mode - Any unused private network within a local network.

    In bridge mode - A network to which the cluster belongs.

"},{"location":"user-guide/configuration/cluster-network/#network-gateway","title":"Network gateway","text":"

v2.0.0

The network gateway, also known as the default gateway, represents the IP address of the router. By default, it doesn't need to be specified, as the first client IP in the network range is used as the gateway address. However, if the gateway IP differs from this, it must be specified manually.

cluster:\n  network:\n    cidr: 10.10.0.0/20\n    gateway: 10.10.0.230 # (1)!\n
  1. If this option is omitted, 10.10.0.1 is used as the gateway IP (first client IP in the network range).
"},{"location":"user-guide/configuration/cluster-network/#network-bridge","title":"Network bridge","text":"

v2.0.0 Default: virbr0

The network bridge determines the bridge interface that virtual machines connect to.

In NAT network mode, a virtual network bridge interface is created on the host. These bridges are usually prefixed with vir, such as virbr44. If you omit this option, the virtual bridge name is automatically determined by libvirt. Alternatively, you can specify the name to be used for the virtual bridge.

In bridge network mode, the network bridge should be the name of the preconfigured bridge interface, such as br0.

cluster:\n  network:\n    bridge: br0\n
"},{"location":"user-guide/configuration/cluster-network/#example-usage","title":"Example usage","text":""},{"location":"user-guide/configuration/cluster-network/#virtual-nat-network","title":"Virtual NAT network","text":"

If the cluster is created on a single host, you can use the NAT network mode. In this case, you only need to specify the CIDR of the new network in addition to the network mode.

cluster:\n  network:\n    mode: nat\n    cidr: 192.168.113.0/24\n
"},{"location":"user-guide/configuration/cluster-network/#bridged-network","title":"Bridged network","text":"

To make the cluster nodes visible on the local network as physical machines or to create the cluster across multiple hosts, you must use bridge network mode. Additionally, you need to specify the network CIDR of an existing network along with the preconfigured host bridge interface.

cluster:\n  network:\n    mode: bridge\n    cidr: 10.10.64.0/24\n    bridge: br0\n
"},{"location":"user-guide/configuration/cluster-node-template/","title":"Cluster node template","text":"

The node template section of the cluster configuration defines the properties of all nodes in the cluster. This includes the properties of the operating system (OS), DNS, and the virtual machine user.

"},{"location":"user-guide/configuration/cluster-node-template/#cluster-node-template","title":"Cluster node template","text":""},{"location":"user-guide/configuration/cluster-node-template/#configuration","title":"Configuration","text":""},{"location":"user-guide/configuration/cluster-node-template/#virtual-machine-user","title":"Virtual machine user","text":"

v2.0.0 Default: k8s

The user property defines the name of the user created on each virtual machine. This user is used to access the virtual machines during cluster configuration. If you omit the user property, a user named k8s is created on all virtual machines. You can also use this user later to access each virtual machine via SSH.

cluster:\n  nodeTemplate:\n    user: kubitect\n
"},{"location":"user-guide/configuration/cluster-node-template/#operating-system-os","title":"Operating system (OS)","text":""},{"location":"user-guide/configuration/cluster-node-template/#os-distribution","title":"OS distribution","text":"

v2.1.0 Default: ubuntu

The operating system for virtual machines can be specified in the node template. By default, the Ubuntu distribution is installed on all virtual machines.

You can select a desired distribution by setting the os.distro property.

cluster:\n  nodeTemplate:\n    os:\n      distro: debian # (1)!\n
  1. By default, ubuntu is used.

The available operating system distribution presets are:

  • ubuntu - Latest Ubuntu 22.04 release. (default)
  • ubuntu22 - Ubuntu 22.04 release as of 2023-10-26.
  • ubuntu20 - Ubuntu 20.04 release as of 2023-10-11.
  • debian - Latest Debian 11 release.
  • debian11 - Debian 11 release as of 2023-10-13.
  • rocky - Latest Rocky 9 release.
  • rocky9 - Rocky 9.2 release as of 2023-05-13.
  • centos - Latest CentOS Stream 9 release.
  • centos9 - CentOS Stream 9 release as of 2023-10-23.

Important

Rocky Linux and CentOS Stream both require the x86-64-v2 instruction set to run. If the CPU mode property is not set to host-passthrough, host-model, or maximum, the virtual machine may not be able to boot properly.
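
For example, when using one of these distributions, the CPU mode can be set in the node template together with the distribution (the cpuMode property is described later on this page; the combination below is only an illustration):

cluster:\n  nodeTemplate:\n    os:\n      distro: rocky\n    cpuMode: host-passthrough\n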

Known issues

CentOS Stream images already include the qemu-guest-agent package, which reports IP addresses of the virtual machines before they are leased from a DHCP server. This can cause issues during infrastructure provisioning if the virtual machines are not configured with static IP addresses.

Where are images downloaded from?

Images are sourced from the official cloud image repository for the corresponding Linux distribution.

  • Ubuntu: Ubuntu cloud image repository
  • Debian: Debian cloud image repository
  • CentOS: CentOS cloud image repository
  • Rocky: Rocky cloud image repository
"},{"location":"user-guide/configuration/cluster-node-template/#os-source","title":"OS source","text":"

v2.1.0

If the presets do not meet your needs, you can use a custom Ubuntu or Debian image by specifying the image source. The source of an image can be either a local path on your system or a URL pointing to the image download.

cluster:\n  nodeTemplate:\n    os:\n      distro: ubuntu\n      source: https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img\n
"},{"location":"user-guide/configuration/cluster-node-template/#network-interface","title":"Network interface","text":"

v2.1.0

Generally, this property does not need to be set, as Kubitect will correctly determine the network interface name to be used on each virtual machine.

If you want to instruct Kubitect to use a specific network interface on the virtual machine, you can set its name using the os.networkInterface property.

cluster:\n  nodeTemplate:\n    os:\n      networkInterface: ens3\n
"},{"location":"user-guide/configuration/cluster-node-template/#custom-dns-list","title":"Custom DNS list","text":"

v2.1.0

The configuration of Domain Name Servers (DNS) in the node template allows for customizing the DNS resolution of all virtual machines in the cluster. By default, the DNS list contains only the network gateway.

To add custom DNS servers, specify them using the dns property in the node template.

cluster:\n  nodeTemplate:\n    dns: # (1)!\n      - 1.1.1.1\n      - 1.0.0.1\n
  1. IP addresses 1.1.1.1 and 1.0.0.1 represent CloudFlare's primary and secondary public DNS resolvers, respectively.
"},{"location":"user-guide/configuration/cluster-node-template/#cpu-mode","title":"CPU mode","text":"

v2.2.0 Default: custom

The cpuMode property in the node template can be used to configure a guest CPU to closely resemble the host CPU.

cluster:\n  nodeTemplate:\n    cpuMode: host-passthrough\n

Currently, there are several CPU modes available:

  • custom (default)
  • host-model
  • host-passthrough
  • maximum

In short, the host-model mode uses the same CPU model as the host, while the host-passthrough mode provides the full CPU feature set to the guest virtual machine, but may impact its live migration. The maximum mode selects the CPU with the most available features. For a more detailed explanation of the available CPU modes and their usage, please refer to the libvirt documentation.

Tip

The host-model and host-passthrough modes make sense only when a virtual machine can run directly on the host CPUs (e.g. virtual machines of type kvm). The actual host CPU is irrelevant for virtual machines with emulated virtual CPUs (e.g. virtual machines of type qemu).

"},{"location":"user-guide/configuration/cluster-node-template/#update-on-boot","title":"Update on boot","text":"

v2.2.0 Default: true

By default, Kubitect updates all virtual machine packages on boot. To disable this behavior, set updateOnBoot to false.

cluster:\n  nodeTemplate:\n    updateOnBoot: false\n
"},{"location":"user-guide/configuration/cluster-node-template/#ssh-options","title":"SSH options","text":""},{"location":"user-guide/configuration/cluster-node-template/#custom-ssh-certificate","title":"Custom SSH certificate","text":"

v2.0.0

Kubitect automatically generates SSH certificates before deploying the cluster to ensure secure communication between nodes. The generated certificates can be found in the config/.ssh/ directory inside the cluster directory.

If you prefer to use a custom SSH certificate, you can specify the local path to the private key. Note that the public key must also be present in the same directory with the .pub suffix.

cluster:\n  nodeTemplate:\n    ssh:\n      privateKeyPath: \"~/.ssh/id_rsa_test\"\n

Important

SSH certificates must be passwordless, otherwise Kubespray will fail to configure the cluster.

"},{"location":"user-guide/configuration/cluster-node-template/#adding-nodes-to-the-known-hosts","title":"Adding nodes to the known hosts","text":"

v2.0.0 Default: false

Kubitect allows you to add all created virtual machines to SSH known hosts and remove them once the cluster is destroyed. To enable this behavior, set the addToKnownHosts property to true.

cluster:\n  nodeTemplate:\n    ssh:\n      addToKnownHosts: true\n
"},{"location":"user-guide/configuration/cluster-nodes/","title":"Cluster nodes","text":""},{"location":"user-guide/configuration/cluster-nodes/#cluster-nodes","title":"Cluster nodes","text":""},{"location":"user-guide/configuration/cluster-nodes/#background","title":"Background","text":"

Kubitect allows configuration of three distinct node types: worker nodes, master nodes (control plane), and load balancers.

"},{"location":"user-guide/configuration/cluster-nodes/#worker-nodes","title":"Worker nodes","text":"

Worker nodes in a Kubernetes cluster are responsible for executing the application workloads of the system. Adding more worker nodes to the cluster improves redundancy in case of a worker node failure, while allocating more resources to each worker node reduces overhead and leaves more resources for the actual applications.

Kubitect does not offer automatic scaling of worker nodes based on resource demand. However, you can easily add or remove worker nodes by applying a modified cluster configuration.

"},{"location":"user-guide/configuration/cluster-nodes/#master-nodes","title":"Master nodes","text":"

The master node plays a vital role in a Kubernetes cluster as it manages the overall state of the system and coordinates the workloads running on the worker nodes. Therefore, it is essential to configure at least one master node for every cluster.

Please note that Kubitect currently supports only a stacked control plane where etcd key-value stores are deployed on control plane nodes. To ensure the best possible fault tolerance, it is important to configure an odd number of control plane nodes. For more information, please refer to the etcd FAQ.

"},{"location":"user-guide/configuration/cluster-nodes/#load-balancer-nodes","title":"Load balancer nodes","text":"

In a Kubernetes cluster with multiple control plane nodes, it is necessary to configure at least one load balancer. A load balancer distributes incoming network traffic across multiple control plane nodes, ensuring the cluster operates normally even if any control plane node fails.

However, configuring only one load balancer represents a single point of failure for the cluster. If it fails, incoming traffic will not be distributed to the control plane nodes, potentially resulting in downtime. Therefore, configuring multiple load balancers is essential to ensure high availability for the cluster.
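
As a rough sketch, such a setup could be declared as follows (the VIP shown is illustrative and must be an unused address within the cluster network, as explained in the load balancer properties below):

cluster:\n  nodes:\n    loadBalancer:\n      vip: 192.168.113.200\n      instances:\n        - id: 1\n        - id: 2\n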

"},{"location":"user-guide/configuration/cluster-nodes/#nodes-configuration-structure","title":"Nodes configuration structure","text":"

The configuration structure for the nodes is as follows:

cluster:\n  nodes:\n    master:\n      ...\n    worker:\n      ...\n    loadBalancer:\n      ...\n

Each node type has two subsections: default and instances. The instances subsection represents an array of actual nodes, while the default subsection provides the configuration that is applied to all instances of a particular node type. Each default value can also be overwritten by setting the same property for a specific instance.

cluster:\n  nodes:\n    <node-type>:\n      default:\n        ...\n      instances:\n        ...\n
"},{"location":"user-guide/configuration/cluster-nodes/#configuration","title":"Configuration","text":""},{"location":"user-guide/configuration/cluster-nodes/#common-node-properties","title":"Common node properties","text":"

Each node instance has a set of predefined properties that can be set to configure its behavior. Some properties apply to all node types, while others are specific to a certain node type. Properties that apply to all node types are referred to as common properties.

"},{"location":"user-guide/configuration/cluster-nodes/#instance-id","title":"Instance ID","text":"

v2.3.0 Required

Each node in a cluster must have a unique identifier, or ID, that distinguishes it from other instances of the same node type. The instance ID is used as a suffix for the name of each node, ensuring that each node has a unique name in the cluster.

cluster:\n  nodes:\n    <node-type>:\n      instances:\n        - id: 1\n        - id: compute-1\n        - id: 77\n
"},{"location":"user-guide/configuration/cluster-nodes/#cpu","title":"CPU","text":"

v2.0.0 Default: 2 vCPU

The cpu property defines the amount of virtual CPU cores assigned to a node instance. This property can be set for a specific instance, or as a default value for all instances of a certain node type.

cluster:\n  nodes:\n    <node-type>:\n      default:\n        cpu: 2\n      instances:\n        - id: 1 # (1)!\n        - id: 2\n          cpu: 4 # (2)!\n
  1. Since the cpu property is not set for this instance, the default value (2) is used.

  2. This instance has the cpu property set, and therefore the set value (4) overrides the default value (2).

If the property is not set at the instance level or as a default value, Kubitect uses its own default value (2).

cluster:\n  nodes:\n    <node-type>:\n      instances:\n        - id: 1 # (1)!\n
  1. Since the cpu property is not set at instance level or as a default value, Kubitect sets the value of the cpu property to 2 vCPU.
"},{"location":"user-guide/configuration/cluster-nodes/#ram","title":"RAM","text":"

v2.0.0 Default: 4 GiB

The ram property defines the amount of RAM assigned to a node instance (in GiB). This property can be set for a specific instance, or as a default value for all instances of a certain node type.

cluster:\n  nodes:\n    <node-type>:\n      default:\n        ram: 8\n      instances:\n        - id: 1 # (1)!\n        - id: 2\n          ram: 16 # (2)!\n
  1. Since the ram property is not set for this instance, the default value (8 GiB) is used.

  2. This instance has the ram property set, and therefore the set value (16 GiB) overrides the default value (8 GiB).

If the property is not set at the instance level or as a default value, Kubitect uses its own default value (4 GiB).

cluster:\n  nodes:\n    <node-type>:\n      instances:\n        - id: 1 # (1)!\n
  1. Since the ram property is not set at instance level or as a default value, Kubitect sets the value of the ram property to 4 GiB.
"},{"location":"user-guide/configuration/cluster-nodes/#main-disk-size","title":"Main disk size","text":"

v2.0.0 Default: 32 GiB

The mainDiskSize property defines the amount of disk space assigned to a node instance (in GiB). This property can be set for a specific instance, or as a default value for all instances of a certain node type.

cluster:\n  nodes:\n    <node-type>:\n      default:\n        mainDiskSize: 128\n      instances:\n        - id: 1 # (1)!\n        - id: 2\n          mainDiskSize: 256 # (2)!\n
  1. Since the mainDiskSize property is not set for this instance, the default value (128 GiB) is used.

  2. This instance has the mainDiskSize property set, and therefore the set value (256 GiB) overrides the default value (128 GiB).

If the property is not set at the instance level or as a default value, Kubitect uses its own default value (32 GiB).

cluster:\n  nodes:\n    <node-type>:\n      instances:\n        - id: 1 # (1)!\n
  1. Since the mainDiskSize property is not set at instance level or as a default value, Kubitect sets the value of the mainDiskSize property to 32 GiB.
"},{"location":"user-guide/configuration/cluster-nodes/#ip-address","title":"IP address","text":"

v2.0.0

Each node in a cluster can be assigned a static IP address to ensure a predictable and consistent IP address for the node. If no IP address is set for a particular node, Kubitect will request a DHCP lease for that node. Additionally, Kubitect checks whether all set IP addresses are within the defined network range, as explained in the Network CIDR section of the cluster network configuration.

cluster:\n  network:\n    mode: nat\n    cidr: 192.168.113.0/24\n  nodes:\n    <node-type>:\n      instances:\n        - id: 1\n          ip: 192.168.113.5 # (1)!\n        - id: 2 # (2)!\n
  1. A static IP (192.168.113.5) is set for this instance.

  2. Since no IP address is defined for this instance, a DHCP lease is requested.

"},{"location":"user-guide/configuration/cluster-nodes/#mac-address","title":"MAC address","text":"

v2.0.0

The virtual machines created by Kubitect are assigned generated MAC addresses, but a custom MAC address can be set for a virtual machine if necessary.

cluster:\n  nodes:\n    <node-type>:\n      instances:\n        - id: 1\n          mac: \"52:54:00:00:13:10\" # (1)!\n        - id: 2 # (2)!\n
  1. A custom MAC address (52:54:00:00:13:10) is set for this instance.

  2. Since no MAC address is defined for this instance, the MAC address is generated during cluster creation.

"},{"location":"user-guide/configuration/cluster-nodes/#host-affinity","title":"Host affinity","text":"

v2.0.0

By default, all instances in a cluster are deployed on the default host. However, by specifying a host for an instance, you can control where that instance is deployed.

hosts:\n  - name: host1\n    ...\n  - name: host2\n    default: true\n    ...\n\ncluster:\n  nodes:\n    <node-type>:\n      instances:\n        - id: 1\n          host: host1 # (1)!\n        - id: 2 # (2)!\n
  1. The instance is deployed on host1.

  2. Since no host is specified, the instance is deployed on the default host (host2).

"},{"location":"user-guide/configuration/cluster-nodes/#control-plane-and-worker-node-properties","title":"Control plane and worker node properties","text":"

The following properties can only be configured for control plane or worker nodes.

"},{"location":"user-guide/configuration/cluster-nodes/#data-disks","title":"Data disks","text":"

v2.2.0

By default, only a main disk (volume) is attached to each provisioned virtual machine. Since the main disk already contains an operating system, it may not be suitable for storing data, and additional disks may be required. For example, Rook can be easily configured to use all the empty disks attached to the virtual machine to form a storage cluster.

A name and size (in GiB) must be configured for each data disk. By default, data disks are created in the main resource pool. To create a data disk in a custom data resource pool, you can set the pool property to the name of the desired data resource pool. Additionally, note that the data disk name must be unique among all data disks for a given instance.

cluster:\n  nodes:\n    <node-type>:\n      instances:\n        - id: 1\n          dataDisks:\n            - name: data-volume\n              pool: main # (1)!\n              size: 256\n            - name: rook-volume\n              pool: rook-pool # (2)!\n              size: 512\n
  1. When the pool property is omitted or set to main, the data disk is created in the main resource pool.

  2. A custom data resource pool must be configured in the hosts section.

"},{"location":"user-guide/configuration/cluster-nodes/#node-labels","title":"Node labels","text":"

v2.1.0

With node labels, you can help organize and manage your cluster by associating nodes with specific attributes or roles, and by grouping nodes for specific workloads or tasks.

Node labels are used to label actual Kubernetes nodes and can be set for a specific instance or as a default value for all instances. It is important to note that labels set at the instance level are merged with the default labels. However, if labels have the same key, then the labels set at the instance level take precedence over the default labels.

cluster:\n  nodes:\n    <node-type>: # (1)!\n      default:\n        labels:\n          key1: def-value-1\n          key2: def-value-2\n      instances:\n        - id: 1\n          labels: # (2)!\n            key1: custom-value\n        - id: 2\n          labels: # (3)!\n            key3: super-node\n
  1. Node labels can only be applied to worker and master (control plane) nodes.

  2. Labels defined at the instance level take precedence over default labels. As a result, the following labels are applied to this instance:

    • key1: custom-value
    • key2: def-value-2
  3. Labels defined at the instance level are merged with default labels. As a result, the following labels are applied to this instance:

    • key1: def-value-1
    • key2: def-value-2
    • key3: super-node
"},{"location":"user-guide/configuration/cluster-nodes/#node-taints","title":"Node taints","text":"

v2.2.0

With node taints, you can limit which pods can be scheduled to run on a particular node, and help ensure that the workload running on that node is appropriate for its capabilities and resources.

Node taints are configured as a list of strings in the format key=value:effect. Taints can be set for a specific instance or as a default value for all instances. When taints are set for a particular instance, they are merged with the default taints, and any duplicate entries are removed.

cluster:\n  nodes:\n    <node-type>: # (1)!\n      default:\n        taints:\n          - \"key1=value1:NoSchedule\"\n      instances:\n        - id: 1\n          taints:\n            - \"key2=value2:NoExecute\"\n
  1. Node taints can only be applied to control plane (master) and worker nodes.
"},{"location":"user-guide/configuration/cluster-nodes/#load-balancer-properties","title":"Load balancer properties","text":"

The following properties can only be configured for load balancers.

"},{"location":"user-guide/configuration/cluster-nodes/#virtual-ip-address-vip","title":"Virtual IP address (VIP)","text":"

v2.0.0

What is VIP?

Load balancers are responsible for distributing traffic to the control plane nodes. However, a single load balancer can cause issues if it fails. To avoid this, multiple load balancers can be configured with one as the primary, actively serving incoming traffic, while others act as secondary and take over the primary position only if the primary load balancer fails. If a secondary load balancer becomes primary, it should still be reachable via the same IP, which is referred to as a virtual or floating IP (VIP).

When multiple load balancers are configured, an unused IP address within the configured network must be specified as the VIP.

cluster:\n  nodes:\n    loadBalancer:\n      vip: 192.168.113.200\n
"},{"location":"user-guide/configuration/cluster-nodes/#virtual-router-id-vrid","title":"Virtual router ID (VRID)","text":"

v2.1.0 Default: 51

When a cluster is created with a VIP, Kubitect configures Virtual Router Redundancy Protocol (VRRP), which provides failover for load balancers. Each VRRP group is identified by a virtual router ID (VRID), which can be any number between 0 and 255. Since there can be only one master in each group, two groups cannot have the same ID.

By default, Kubitect sets the VRID to 51, but if you set up multiple clusters that use VIP, you must ensure that the VRID is different for each cluster.

cluster:\n  nodes:\n    loadBalancer:\n      vip: 192.168.113.200\n      virtualRouterId: 30\n
"},{"location":"user-guide/configuration/cluster-nodes/#priority","title":"Priority","text":"

v2.1.0 Default: 10

Each load balancer has a priority that is used to select a primary load balancer. The one with the highest priority becomes the primary and all others become secondary. If the primary load balancer fails, the next one with the highest priority takes over. If two load balancers have the same priority, the one with the higher sum of IP address digits is selected.

The priority can be any number between 0 and 255. The default priority is 10.

cluster:\n  nodes:\n    loadBalancer:\n      instances:\n        - id: 1 # (1)!\n        - id: 2\n          priority: 200 # (2)!\n
  1. Since the load balancer priority for this instance is not specified, it is set to 10.

  2. Since this load balancer instance has the highest priority (200 > 10), it becomes the primary load balancer.

"},{"location":"user-guide/configuration/cluster-nodes/#port-forwarding","title":"Port forwarding","text":"

v2.1.0

By default, each configured load balancer has a port forwarding rule that distributes incoming traffic on port 6443 across the available control plane nodes. However, Kubitect provides the flexibility to configure additional user-defined port forwarding rules.

The following properties can be configured for each rule:

  • name - A unique port identifier.
  • port - The incoming port on which the load balancer listens for traffic.
  • targetPort - The port to which traffic is forwarded by the load balancer.
  • target - The group of nodes to which traffic is directed. The possible targets are:
    • masters - control plane nodes
    • workers - worker nodes
    • all - worker and control plane nodes.

Every port forwarding rule must be configured with a unique name and port. The name serves as a unique identifier for the rule, while the port specifies the incoming port on which the load balancer listens for traffic.

The target and targetPort configurations are optional. If targetPort is not explicitly set, it defaults to the same value as the incoming port. Similarly, if target is not set, incoming traffic is automatically distributed across the worker nodes.

cluster:\n  nodes:\n    loadBalancer:\n      forwardPorts:\n        - name: https\n          port: 443 # (1)!\n          targetPort: 31200 # (2)!\n          target: all # (3)!\n
  1. Incoming port is the port on which a load balancer listens for incoming traffic. It can be any number between 1 and 65535, excluding ports 6443 (Kubernetes API server) and 22 (SSH).

  2. Target port is the port on which the traffic is forwarded. By default, it is set to the same value as the incoming port.

  3. Target represents a group of nodes to which incoming traffic is forwarded. Possible values are:

    • masters
    • workers
    • all

    If the target is not configured, it defaults to workers.

"},{"location":"user-guide/configuration/cluster-nodes/#example-usage","title":"Example usage","text":""},{"location":"user-guide/configuration/cluster-nodes/#set-a-role-to-all-worker-nodes","title":"Set a role to all worker nodes","text":"

By default, worker nodes in a Kubernetes cluster are not assigned any roles (<none>). To set the role of all worker nodes in the cluster, the default label with the key node-role.kubernetes.io/node can be configured.

cluster:\n  nodes:\n    worker:\n      default:\n        labels:\n          node-role.kubernetes.io/node: # (1)!\n      instances:\n        ...\n
  1. If the label value is omitted, null is set as the label value.

The roles of the nodes in a Kubernetes cluster can be viewed using kubectl get nodes.

NAME                   STATUS   ROLES                  AGE   VERSION\nk8s-cluster-master-1   Ready    control-plane,master   19m   v1.27.5\nk8s-cluster-worker-1   Ready    node                   19m   v1.27.5\nk8s-cluster-worker-2   Ready    node                   19m   v1.27.5\n
"},{"location":"user-guide/configuration/cluster-nodes/#load-balance-http-requests","title":"Load balance HTTP requests","text":"

Kubitect enables users to define custom port forwarding rules on load balancers. For example, to distribute HTTP and HTTPS requests across all worker nodes, at least one load balancer must be specified and port forwarding must be configured as follows:

cluster:\n  nodes:\n    loadBalancer:\n      forwardPorts:\n        - name: http\n          port: 80\n        - name: https\n          port: 443\n      instances:\n        - id: 1\n
"},{"location":"user-guide/configuration/hosts/","title":"Hosts","text":"

Defining hosts is an essential step when deploying a Kubernetes cluster with Kubitect. Hosts represent the target servers where the cluster will be deployed.

Every valid configuration must contain at least one host, which can be either local or remote. However, you can add as many hosts as needed to support your cluster deployment.

"},{"location":"user-guide/configuration/hosts/#hosts-configuration","title":"Hosts configuration","text":""},{"location":"user-guide/configuration/hosts/#configuration","title":"Configuration","text":""},{"location":"user-guide/configuration/hosts/#localhost","title":"Localhost","text":"

v2.0.0

To configure a local host, you simply need to specify a host with the connection type set to local.

hosts:\n  - name: localhost # (1)!\n    connection:\n      type: local\n
  1. Custom unique name of the host.
"},{"location":"user-guide/configuration/hosts/#remote-hosts","title":"Remote hosts","text":"

v2.0.0

To configure a remote host, you need to set the connection type to remote and provide the IP address of the remote host, along with its SSH credentials.

hosts:\n  - name: my-remote-host\n    connection:\n      type: remote\n      user: myuser\n      ip: 10.10.40.143 # (1)!\n      ssh:\n        keyfile: \"~/.ssh/id_rsa_server1\" # (2)!\n
  1. IP address of the remote host.

  2. Path to the passwordless SSH key file required for establishing a connection with the remote host. Default is ~/.ssh/id_rsa.

"},{"location":"user-guide/configuration/hosts/#hosts-ssh-port","title":"Host's SSH port","text":"

v2.0.0 Default: 22

By default, SSH uses port 22. If a host is running an SSH server on a different port, you can change the port for each host separately.

hosts:\n  - name: remote-host\n    connection:\n      type: remote\n      ssh:\n        port: 1234\n
"},{"location":"user-guide/configuration/hosts/#host-verification-known-ssh-hosts","title":"Host verification (known SSH hosts)","text":"

v2.0.0 Default: false

By default, remote hosts are not verified against the known SSH hosts. If you want to verify hosts, you can enable host verification for each host separately.

hosts:\n  - name: remote-host\n    connection:\n      type: remote\n      ssh:\n        verify: true\n
"},{"location":"user-guide/configuration/hosts/#default-host","title":"Default host","text":"

v2.0.0

If a host is specified as the default, all instances that do not point to a specific host are deployed to that default host. If no default host is specified, these instances are deployed on the first host in the list.

hosts:\n  - name: localhost\n    connection:\n      type: local\n  - name: default-host\n    default: true\n    ...\n
"},{"location":"user-guide/configuration/hosts/#main-resource-pool","title":"Main resource pool","text":"

v2.0.0 Default: /var/lib/libvirt/images/

The main resource pool path specifies the location on the host where main virtual disks (volumes) are created for each node provisioned on that particular host. Because the main resource pool contains volumes on which the node's operating system and all required packages are installed, it's recommended that the main resource pool is created on fast storage devices, such as SSD disks.

hosts:\n  - name: host1 # (1)!\n  - name: host2\n    mainResourcePoolPath: /mnt/ssd/kubitect/ # (2)!\n
  1. Because the main resource pool path for this host is not set, the default path (/var/lib/libvirt/images/) is used.

  2. The main resource pool path is set for this host, so the node's main disks are created in this location.

"},{"location":"user-guide/configuration/hosts/#data-resource-pools","title":"Data resource pools","text":"

v2.0.0

Data resource pools allow you to define additional resource pools, besides the required main resource pool. These pools can be used to attach additional virtual disks that can be used for various storage solutions, such as Rook or MinIO.

Multiple data resource pools can be defined on each host, and each pool must have a unique name on that host. The name of the data resource pool is used to associate the virtual disks defined in the node configuration with the actual data resource pool.

By default, the path of the data resource pools is set to /var/lib/libvirt/images, but it can be easily configured using the path property.

hosts:\n  - name: host1\n    dataResourcePools:\n      - name: rook-pool\n        path: /mnt/hdd/kubitect/pools/\n      - name: data-pool # (1)!\n
  1. If the path of the resource pool is not specified, it will be created under the path /var/lib/libvirt/images/.
"},{"location":"user-guide/configuration/hosts/#example-usage","title":"Example usage","text":""},{"location":"user-guide/configuration/hosts/#multiple-hosts","title":"Multiple hosts","text":"

Kubitect allows you to deploy a cluster on multiple hosts, which need to be specified in the configuration file.

hosts:\n  - name: localhost\n    connection:\n      type: local\n  - name: remote-host-1\n    connection:\n      type: remote\n      user: myuser\n      ip: 10.10.40.143\n      ssh:\n        port: 123\n        keyfile: \"~/.ssh/id_rsa_server1\"\n  - name: remote-host-2\n    default: true\n    connection:\n      type: remote\n      user: myuser\n      ip: 10.10.40.145\n      ssh:\n        keyfile: \"~/.ssh/id_rsa_server2\"\n  ...\n
"},{"location":"user-guide/configuration/kubernetes/","title":"Kubernetes","text":"

The Kubernetes section of the configuration file contains properties that are specific to Kubernetes, such as the Kubernetes version and network plugin.

"},{"location":"user-guide/configuration/kubernetes/#kubernetes-configuration","title":"Kubernetes configuration","text":""},{"location":"user-guide/configuration/kubernetes/#configuration","title":"Configuration","text":""},{"location":"user-guide/configuration/kubernetes/#kubernetes-version","title":"Kubernetes version","text":"

v3.0.0 Default: v1.27.5

By default, the Kubernetes cluster will be deployed using version v1.27.5, but you can specify a different version if necessary.

kubernetes:\n  version: v1.27.5\n

The supported Kubernetes versions include v1.25, v1.26, and v1.27.

"},{"location":"user-guide/configuration/kubernetes/#kubernetes-network-plugin","title":"Kubernetes network plugin","text":"

v2.0.0 Default: calico

The calico network plugin is deployed by default in a Kubernetes cluster. However, there are multiple supported network plugins available to choose from:

  • calico
  • cilium
  • flannel
  • kube-router
  • weave
kubernetes:\n  networkPlugin: flannel\n

The following table shows the compatibility matrix of supported network plugins and Kubernetes versions:

Kubernetes Version Calico Cilium Flannel KubeRouter Weave 1.25 1.26 1.27"},{"location":"user-guide/configuration/kubernetes/#kubernetes-dns-mode","title":"Kubernetes DNS mode","text":"

v2.0.0 Default: coredns

Currently, the only DNS mode supported by Kubitect is coredns. Therefore, it is safe to omit this property.

kubernetes:\n  dnsMode: coredns\n
"},{"location":"user-guide/configuration/kubernetes/#copy-kubeconfig","title":"Copy kubeconfig","text":"

v2.0.0 Default: false

Kubitect offers the option to automatically copy the Kubeconfig file to the ~/.kube/config path. By default, this feature is disabled to prevent overwriting an existing file.

kubernetes:\n  other:\n    copyKubeconfig: true\n
"},{"location":"user-guide/configuration/kubernetes/#auto-renew-control-plane-certificates","title":"Auto renew control plane certificates","text":"

v2.2.0 Default: false

Control plane certificates are renewed every time the cluster is upgraded, and their validity period is one year. However, in rare cases, clusters that are not upgraded frequently may experience issues. To address this, you can enable the automatic renewal of control plane certificates on the first Monday of each month by setting the autoRenewCertificates property to true.

kubernetes:\n  other:\n    autoRenewCertificates: true\n
"},{"location":"user-guide/management/destroying/","title":"Destroying the cluster","text":""},{"location":"user-guide/management/destroying/#destroying-the-cluster","title":"Destroying the cluster","text":""},{"location":"user-guide/management/destroying/#destroy-the-cluster","title":"Destroy the cluster","text":"

Important

This action is irreversible and any data stored within the cluster will be lost.

To destroy a specific cluster, simply run the destroy command, specifying the name of the cluster to be destroyed.

kubitect destroy --cluster my-cluster\n

Keep in mind that this action will permanently remove all resources associated with the cluster, including virtual machines, resource pools and configuration files.

"},{"location":"user-guide/management/scaling/","title":"Scaling the cluster","text":"

Any cluster created with Kubitect can be subsequently scaled. To do so, simply change the configuration and reapply it using the scale action.

Info

Currently, only worker nodes and load balancers can be scaled.

"},{"location":"user-guide/management/scaling/#scaling-the-cluster","title":"Scaling the cluster","text":""},{"location":"user-guide/management/scaling/#export-the-cluster-configuration","title":"Export the cluster configuration","text":"

Exporting the current cluster configuration is optional, but strongly recommended to ensure that changes are made to the latest version of the configuration. The cluster configuration file can be exported using the export command.

kubitect export config --cluster my-cluster > cluster.yaml\n
"},{"location":"user-guide/management/scaling/#scale-the-cluster","title":"Scale the cluster","text":"

In the configuration file, add new or remove existing nodes.

cluster.yaml
cluster:\n  ...\n  nodes:\n    ...\n    worker:\n      instances:\n        - id: 1\n        #- id: 2 # Worker node to be removed\n        - id: 3 # New worker node\n        - id: 4 # New worker node\n

Apply the modified configuration with action set to scale:

kubitect apply --config cluster.yaml --action scale\n

As a result, the worker node with ID 2 is removed and the worker nodes with IDs 3 and 4 are added to the cluster.
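
Once the scale action completes, the change can be verified with kubectl; node names follow the <cluster-name>-worker-<id> pattern shown earlier in this document.

kubectl get nodes\n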

"},{"location":"user-guide/management/upgrading/","title":"Upgrading the cluster","text":"

A running Kubernetes cluster can be upgraded to a higher version by increasing the Kubernetes version in the cluster's configuration file and reapplying it using the upgrade action.

"},{"location":"user-guide/management/upgrading/#upgrading-the-cluster","title":"Upgrading the cluster","text":""},{"location":"user-guide/management/upgrading/#export-the-cluster-configuration","title":"Export the cluster configuration","text":"

Exporting the current cluster configuration is optional, but strongly recommended to ensure that changes are made to the latest version of the configuration. The cluster configuration file can be exported using the export command.

kubitect export config --cluster my-cluster > cluster.yaml\n
"},{"location":"user-guide/management/upgrading/#upgrade-the-cluster","title":"Upgrade the cluster","text":"

In the cluster configuration file, change the Kubernetes version.

cluster.yaml
kubernetes:\n  version: v1.27.5 # Old value: v1.26.5\n  ...\n

Apply the modified configuration using upgrade action.

kubitect apply --config cluster.yaml --action upgrade\n

The cluster is upgraded using the in-place strategy, i.e., the nodes are upgraded one after the other, making each node unavailable for the duration of its upgrade.

"},{"location":"user-guide/reference/cli/","title":"CLI tool reference","text":"

This document contains a reference of the Kubitect CLI tool. It documents each command along with its flags.

Tip

All available commands can be displayed by running kubitect --help or simply kubitect -h.

To see the help for a particular command, run kubitect command -h.

"},{"location":"user-guide/reference/cli/#cli-reference","title":"CLI reference","text":""},{"location":"user-guide/reference/cli/#kubitect-commands","title":"Kubitect commands","text":""},{"location":"user-guide/reference/cli/#kubitect-apply","title":"kubitect apply","text":"

Apply the cluster configuration.

Usage

kubitect apply [flags]\n

Flags

  • -a, --action <string> \u2003 cluster action: create | scale | upgrade (default: create)
  • --auto-approve \u2003 automatically approve any user permission requests
  • -c, --config <string> \u2003 path to the cluster config file
  • -l, --local \u2003 use a current directory as the cluster path
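
For example, to create a cluster from a configuration file (the file name below is illustrative):

kubitect apply --config cluster.yaml --action create\n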
"},{"location":"user-guide/reference/cli/#kubitect-destroy","title":"kubitect destroy","text":"

Destroy the cluster with a given name. Executing the following command will permanently delete all resources associated with the cluster, including virtual machines and configuration files.

Important

Please be aware that this action is irreversible and any data stored within the cluster will be lost.

Usage

kubitect destroy [flags]\n

Flags

  • --auto-approve \u2003 automatically approve any user permission requests
  • --cluster <string> \u2003 name of the cluster to be used (default: default)
"},{"location":"user-guide/reference/cli/#kubitect-export-config","title":"kubitect export config","text":"

Print cluster's configuration file to the standard output.

Usage

kubitect export config [flags]\n

Flags

  • --cluster <string> \u2003 name of the cluster to be used (default: default)
"},{"location":"user-guide/reference/cli/#kubitect-export-kubeconfig","title":"kubitect export kubeconfig","text":"

Print cluster's kubeconfig to the standard output.

Usage

kubitect export kubeconfig [flags]\n

Flags

  • --cluster <string> \u2003 name of the cluster to be used (default: default)
"},{"location":"user-guide/reference/cli/#kubitect-export-preset","title":"kubitect export preset","text":"

Print cluster configuration preset to the standard output.

Usage

kubitect export preset [flags]\n

Flags

  • --name <string> \u2003 preset name
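
For example, a preset can be exported into a configuration file. The preset name below is a placeholder and should be replaced with one of the names printed by kubitect list presets.

kubitect export preset --name example-preset > cluster.yaml\n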
"},{"location":"user-guide/reference/cli/#kubitect-list-clusters","title":"kubitect list clusters","text":"

List clusters.

Usage

kubitect list clusters\n
"},{"location":"user-guide/reference/cli/#kubitect-list-presets","title":"kubitect list presets","text":"

List available cluster configuration presets.

Usage

kubitect list presets\n
"},{"location":"user-guide/reference/cli/#autogenerated-commands","title":"Autogenerated commands","text":""},{"location":"user-guide/reference/cli/#kubitect-completion","title":"kubitect completion","text":"

Generate the autocompletion script for Kubitect for the specified shell.

Usage

kubitect completion [command]\n

Commands

  • bash \u2003 Generate the autocompletion script for bash.
  • fish \u2003 Generate the autocompletion script for fish.
  • zsh \u2003 Generate the autocompletion script for zsh.

Tip

Run kubitect completion shell -h for instructions how to add autocompletion for a specific shell.

"},{"location":"user-guide/reference/cli/#kubitect-help","title":"kubitect help","text":"

Help provides help for any command in the application. Simply type kubitect help [path to command] for full details.

Usage

kubitect help [command]\n

or

kubitect [command] -h\n
"},{"location":"user-guide/reference/cli/#other","title":"Other","text":""},{"location":"user-guide/reference/cli/#version-flag","title":"Version flag","text":"

Print Kubitect CLI tool version.

Usage

kubitect --version\n

or

kubitect -v\n
"},{"location":"user-guide/reference/cli/#debug-flag","title":"Debug flag","text":"

Enable debug messages. This can be especially handy with the apply command.

Usage

kubitect [command] --debug\n
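
For instance, to get verbose output while applying a configuration (the file name is illustrative):

kubitect apply --config cluster.yaml --debug\n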
"},{"location":"user-guide/reference/configuration/","title":"Configuration reference","text":"

This document contains a reference of the Kubitect configuration file and documents all possible configuration properties.

The configuration sections are as follows:

  • hosts - A list of physical hosts (local or remote).
  • cluster - Configuration of the cluster infrastructure. Virtual machine properties, node types to install, and the host on which to install the nodes.
  • kubernetes - Kubernetes configuration.
  • addons - Configurable addons and applications.

Each configuration property is documented with 5 columns: property name, description, type, default value, and whether the property is required.

Note

[*] annotates an array.

"},{"location":"user-guide/reference/configuration/#configuration-reference","title":"Configuration reference","text":""},{"location":"user-guide/reference/configuration/#hosts-section","title":"Hosts section","text":"Name Type Default value Required? Description hosts[*].connection.ip string Yes, if connection.type is set to remote IP address is used to SSH into the remote machine. hosts[*].connection.ssh.keyfile string ~/.ssh/id_rsa Path to the keyfile that is used to SSH into the remote machine hosts[*].connection.ssh.port number 22 The port number of SSH protocol for remote machine. hosts[*].connection.ssh.verify boolean false If true, the SSH host is verified, which means that the host must be present in the known SSH hosts. hosts[*].connection.type string Yes Possible values are:
  • local or localhost
  • remote
hosts[*].connection.user string Yes, if connection.type is set to remote Username is used to SSH into the remote machine. hosts[*].dataResourcePools[*].name string Name of the data resource pool. Must be unique within the same host. It is used to link virtual machine volumes to the specific resource pool. hosts[*].dataResourcePools[*].path string /var/lib/libvirt/images/ Host path to the location where data resource pool is created. hosts[*].default boolean false Nodes where host is not specified will be installed on default host. The first host in the list is used as a default host if none is marked as a default. hosts[*].name string Yes Custom server name used to link nodes with physical hosts. hosts[*].mainResourcePoolPath string /var/lib/libvirt/images/ Path to the resource pool used for main virtual machine volumes."},{"location":"user-guide/reference/configuration/#cluster-section","title":"Cluster section","text":"Name Type Default value Required? Description cluster.name string Yes Custom cluster name that is used as a prefix for various cluster components. Note: cluster name cannot contain prefix local. cluster.network.bridge string virbr0 By default virbr0 is set as a name of virtual bridge. In case network mode is set to bridge, name of the preconfigured bridge needs to be set here. cluster.network.cidr string Yes Network cidr that contains network IP with network mask bits (IPv4/mask_bits). cluster.network.gateway string First client IP in network. By default first client IP is taken as a gateway. If network cidr is set to 10.0.0.0/24 then gateway would be 10.0.0.1. Set gateway if it differs from default value. cluster.network.mode string Yes Network mode. Possible values are:
  • nat - Creates virtual local network.
  • bridge - Uses preconfigured bridge interface on the machine (Only bridge mode supports multiple hosts).
  • route - Creates virtual local network, but does not apply NAT.
cluster.nodes.loadBalancer.default.cpu number 2 Default number of vCPU allocated to a load balancer instance. cluster.nodes.loadBalancer.default.mainDiskSize number 32 Size of the main disk (in GiB) that is attached to a load balancer instance. cluster.nodes.loadBalancer.default.ram number 4 Default amount of RAM (in GiB) allocated to a load balancer instance. cluster.nodes.loadBalancer.forwardPorts[*].name string Yes, if port is configured Unique name of the forwarded port. cluster.nodes.loadBalancer.forwardPorts[*].port number Yes, if port is configured Incoming port is the port on which a load balancer listens for the incoming traffic. cluster.nodes.loadBalancer.forwardPorts[*].targetPort number Incoming port value Target port is the port on which a load balancer forwards traffic. cluster.nodes.loadBalancer.forwardPorts[*].target string workers Target is a group of nodes on which a load balancer forwards traffic. Possible targets are:
  • masters
  • workers
  • all
cluster.nodes.loadBalancer.instances[*].cpu number Overrides a default value for that specific instance. cluster.nodes.loadBalancer.instances[*].host string Name of the host on which the instance is deployed. If the name is not specified, the instance is deployed on the default host. cluster.nodes.loadBalancer.instances[*].id string Yes Unique identifier of a load balancer instance. cluster.nodes.loadBalancer.instances[*].ip string If an IP is set for an instance then the instance will use it as a static IP. Otherwise it will try to request an IP from a DHCP server. cluster.nodes.loadBalancer.instances[*].mac string MAC used by the instance. If it is not set, it will be generated. cluster.nodes.loadBalancer.instances[*].mainDiskSize number Overrides a default value for that specific instance. cluster.nodes.loadBalancer.instances[*].priority number 10 Keepalived priority of the load balancer. A load balancer with the highest priority becomes the leader (active). The priority can be set to any number between 0 and 255. cluster.nodes.loadBalancer.instances[*].ram number Overrides a default value for the RAM for that instance. cluster.nodes.loadBalancer.vip string Yes, if more than one instance of load balancer is specified. Virtual IP (floating IP) is the static IP used by load balancers to provide a fail-over. Each load balancer still has its own IP beside the shared one. cluster.nodes.loadBalancer.virtualRouterId number 51 Virtual router ID identifies the group of VRRP routers. It can be any number between 0 and 255 and should be unique among different clusters. cluster.nodes.master.default.cpu number 2 Default number of vCPU allocated to a master node. cluster.nodes.master.default.labels dictionary Array of default node labels that are applied to all master nodes. cluster.nodes.master.default.mainDiskSize number 32 Size of the main disk (in GiB) that is attached to a master node. cluster.nodes.master.default.ram number 4 Default amount of RAM (in GiB) allocated to a master node. cluster.nodes.master.default.taints list List of default node taints that are applied to all master nodes. cluster.nodes.master.instances[*].cpu number Overrides a default value for that specific instance. cluster.nodes.master.instances[*].dataDisks[*].name string Name of the additional data disk that is attached to the master node. cluster.nodes.master.instances[*].dataDisks[*].pool string main Name of the data resource pool where the additional data disk is created. Referenced resource pool must be configured on the same host. cluster.nodes.master.instances[*].dataDisks[*].size string Size of the additional data disk (in GiB) that is attached to the master node. cluster.nodes.master.instances[*].host string Name of the host on which the instance is deployed. If the name is not specified, the instance is deployed on the default host. cluster.nodes.master.instances[*].id string Yes Unique identifier of a master node. cluster.nodes.master.instances[*].ip string If an IP is set for an instance then the instance will use it as a static IP. Otherwise it will try to request an IP from a DHCP server. cluster.nodes.master.instances[*].labels dictionary Array of node labels that are applied to this specific master node. cluster.nodes.master.instances[*].mac string MAC used by the instance. If it is not set, it will be generated. cluster.nodes.master.instances[*].mainDiskSize number Overrides a default value for that specific instance. 
cluster.nodes.master.instances[*].ram number Overrides a default value for the RAM for that instance. cluster.nodes.master.instances[*].taints list List of node taints that are applied to this specific master node. cluster.nodes.worker.default.cpu number 2 Default number of vCPU allocated to a worker node. cluster.nodes.worker.default.labels dictionary Array of default node labels that are applied to all worker nodes. cluster.nodes.worker.default.mainDiskSize number 32 Size of the main disk (in GiB) that is attached to a worker node. cluster.nodes.worker.default.ram number 4 Default amount of RAM (in GiB) allocated to a worker node. cluster.nodes.worker.default.taints list List of default node taints that are applied to all worker nodes. cluster.nodes.worker.instances[*].cpu number Overrides a default value for that specific instance. cluster.nodes.worker.instances[*].dataDisks[*].name string Name of the additional data disk that is attached to the worker node. cluster.nodes.worker.instances[*].dataDisks[*].pool string main Name of the data resource pool where the additional data disk is created. Referenced resource pool must be configured on the same host. cluster.nodes.worker.instances[*].dataDisks[*].size string Size of the additional data disk (in GiB) that is attached to the worker node. cluster.nodes.worker.instances[*].host string Name of the host on which the instance is deployed. If the name is not specified, the instance is deployed on the default host. cluster.nodes.worker.instances[*].id string Yes Unique identifier of a worker node. cluster.nodes.worker.instances[*].ip string If an IP is set for an instance then the instance will use it as a static IP. Otherwise it will try to request an IP from a DHCP server. cluster.nodes.worker.instances[*].labels dictionary Array of node labels that are applied to this specific worker node. cluster.nodes.worker.instances[*].mac string MAC used by the instance. If it is not set, it will be generated. cluster.nodes.worker.instances[*].mainDiskSize number Overrides a default value for that specific instance. cluster.nodes.worker.instances[*].ram number Overrides a default value for the RAM for that instance. cluster.nodes.worker.instances[*].taints list List of node taints that are applied to this specific worker node. cluster.nodeTemplate.cpuMode string custom Guest virtual machine CPU mode. cluster.nodeTemplate.dns list Value of network.gateway Custom DNS list used by all created virtual machines. If none is provided, network gateway is used. cluster.nodeTemplate.os.distro string ubuntu Set OS distribution. Possible values are:
  • ubuntu
  • ubuntu22
  • ubuntu20
  • debian
  • debian11
  • rocky
  • rocky9
  • centos
  • centos9
cluster.nodeTemplate.os.networkInterface string Depends on os.distro Network interface used by virtual machines to connect to the network. Network interface is preconfigured for each OS image (usually ens3 or eth0). By default, the value from distro preset (/terraform/defaults.yaml) is set, but can be overwritten if needed. cluster.nodeTemplate.os.source string Depends on os.distro Source of an OS image. It can be either a path on a local file system or a URL of the image. By default, the value from distro preset (/terraform/defaults.yaml) is set, but can be overwritten if needed. cluster.nodeTemplate.ssh.addToKnownHosts boolean false If set to true, each virtual machine will be added to the known hosts on the machine where the project is being run. Note that all machines will also be removed from known hosts when destroying the cluster. cluster.nodeTemplate.ssh.privateKeyPath string Path to the private key that is later used to SSH into each virtual machine. A public key with the .pub suffix must be present at the same path. If this value is not set, an SSH key will be generated in the ./config/.ssh/ directory. cluster.nodeTemplate.updateOnBoot boolean true If set to true, the operating system will be updated when it boots. cluster.nodeTemplate.user string k8s User created on each virtual machine."},{"location":"user-guide/reference/configuration/#kubernetes-section","title":"Kubernetes section","text":"Name Type Default value Required? Description kubernetes.dnsMode string coredns DNS server used within a Kubernetes cluster. Possible values are:
  • coredns
kubernetes.networkPlugin string calico Network plugin used within a Kubernetes cluster. Possible values are:
  • calico
  • cilium
  • flannel
  • kube-router
  • weave
kubernetes.other.autoRenewCertificates boolean false When this property is set to true, control plane certificates are renewed first Monday of each month. kubernetes.other.copyKubeconfig boolean false When this property is set to true, the kubeconfig of a new cluster is copied to the ~/.kube/config. Please note that setting this property to true may cause the existing file at the destination to be overwritten. kubernetes.version string v1.27.5 Kubernetes version that will be installed."},{"location":"user-guide/reference/configuration/#addons-section","title":"Addons section","text":"Name Type Default value Required? Description addons.kubespray dictionary Kubespray addons configuration. addons.rook.enabled boolean false Enable Rook addon. addons.rook.nodeSelector dictionary Dictionary containing node labels (\"key: value\"). Rook is deployed on the nodes that match all the given labels. addons.rook.version string Rook version. By default, the latest release version is used."}]} \ No newline at end of file diff --git a/master/sitemap.xml.gz b/master/sitemap.xml.gz index 0d07a2f4a093d7d6a47e65c99eafee31115d551f..7816d0acc7e014aed0eae1a6ef2cf9fc4edda380 100644 GIT binary patch delta 12 Tcmb=gXOr*d;JCPVB3mT@8p#A* delta 12 Tcmb=gXOr*d;Nadfk*yK{7Uu(&