From efdfac80675be8faefa49720c8d3d8418ca7bf2d Mon Sep 17 00:00:00 2001 From: github-actions Date: Tue, 12 Nov 2024 10:57:46 +0000 Subject: [PATCH] Deployed 1f7ac12 to main with MkDocs 1.6.1 and mike 2.1.3 --- main/404.html | 2 +- main/examples/accessing-cluster/index.html | 2 +- main/examples/full-example/index.html | 2 +- main/examples/ha-cluster/index.html | 2 +- main/examples/multi-master-cluster/index.html | 2 +- main/examples/multi-worker-cluster/index.html | 2 +- main/examples/network-bridge/index.html | 2 +- main/examples/rook-cluster/index.html | 2 +- main/examples/single-node-cluster/index.html | 2 +- main/getting-started/getting-started/index.html | 4 ++-- main/getting-started/installation/index.html | 2 +- main/getting-started/introduction/index.html | 2 +- .../other/local-development/index.html | 2 +- .../other/troubleshooting/index.html | 2 +- main/getting-started/quick-start/index.html | 4 ++-- main/getting-started/requirements/index.html | 2 +- main/index.html | 2 +- main/sitemap.xml.gz | Bin 127 -> 127 bytes main/user-guide/before-you-begin/index.html | 2 +- main/user-guide/configuration/addons/index.html | 2 +- .../configuration/cluster-name/index.html | 2 +- .../configuration/cluster-network/index.html | 2 +- .../cluster-node-template/index.html | 2 +- .../configuration/cluster-nodes/index.html | 2 +- main/user-guide/configuration/hosts/index.html | 2 +- .../configuration/kubernetes/index.html | 2 +- .../user-guide/management/destroying/index.html | 2 +- main/user-guide/management/scaling/index.html | 2 +- main/user-guide/management/upgrading/index.html | 2 +- main/user-guide/reference/cli/index.html | 2 +- .../reference/configuration/index.html | 2 +- 31 files changed, 32 insertions(+), 32 deletions(-) diff --git a/main/404.html b/main/404.html index 4c1f0c4d..cef51c99 100644 --- a/main/404.html +++ b/main/404.html @@ -1 +1 @@ - Kubitect

404 - Not found

\ No newline at end of file + Kubitect

404 - Not found

\ No newline at end of file diff --git a/main/examples/accessing-cluster/index.html b/main/examples/accessing-cluster/index.html index 843cf3cc..f1bbb3f9 100644 --- a/main/examples/accessing-cluster/index.html +++ b/main/examples/accessing-cluster/index.html @@ -1,4 +1,4 @@ - Accessing the cluster - Kubitect
Skip to content

Accessing the cluster🔗︎

Cloud providers that support Kubernetes clusters typically provide load balancer provisioning on demand. By setting a Service type to LoadBalancer, an external load balancer is automatically provisioned with its own unique IP address. This load balancer redirects all incoming connections to the Service, as illustrated in the figure below.

Cloud provider load balancer scheme

In on-premise environments, there is no load balancer that can be provisioned on demand. Therefore, some alternative solutions are explained in this document.

Node ports🔗︎

Setting Service type to NodePort makes Kubernetes reserve a port on all its nodes. As a result, the Service becomes available on <NodeIP>:<NodePort>, as shown in the figure below.

Node port Service access scheme

When using NodePort, it does not matter to which node a client sends the request, since it is routed internally to the appropriate Pod. However, if all traffic is directed to a single node, its failure will make the Service unavailable.

Self-provisioned edge🔗︎

With Kubitect, it is possible to configure the port forwarding of the load balancer to distribute incoming requests to multiple nodes in the cluster, as shown in the figure below.

Node port Service access scheme

To set up load balancer port forwarding, at least one load balancer must be configured. The following example shows how to set up load balancer port forwarding for ports 80 (HTTP) and 443 (HTTPS).

cluster:
+ Accessing the cluster - Kubitect      

Accessing the cluster🔗︎

Cloud providers that support Kubernetes clusters typically provide load balancer provisioning on demand. By setting a Service type to LoadBalancer, an external load balancer is automatically provisioned with its own unique IP address. This load balancer redirects all incoming connections to the Service, as illustrated in the figure below.

Cloud provider load balancer scheme

In on-premise environments, there is no load balancer that can be provisioned on demand. Therefore, some alternative solutions are explained in this document.

Node ports🔗︎

Setting Service type to NodePort makes Kubernetes reserve a port on all its nodes. As a result, the Service becomes available on <NodeIP>:<NodePort>, as shown in the figure below.

Node port Service access scheme

When using NodePort, it does not matter to which node a client sends the request, since it is routed internally to the appropriate Pod. However, if all traffic is directed to a single node, its failure will make the Service unavailable.

Self-provisioned edge🔗︎

With Kubitect, it is possible to configure the port forwarding of the load balancer to distribute incoming requests to multiple nodes in the cluster, as shown in the figure below.

Node port Service access scheme

To set up load balancer port forwarding, at least one load balancer must be configured. The following example shows how to set up load balancer port forwarding for ports 80 (HTTP) and 443 (HTTPS).

cluster:
   nodes:
     loadBalancer:
       forwardPorts:
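
A complete set of forwarding rules for HTTP and HTTPS could look roughly like the sketch below. The rule names and ports are illustrative; the exact fields are documented in the configuration reference.

cluster:
  nodes:
    loadBalancer:
      forwardPorts:
        - name: http        # illustrative rule name
          port: 80          # port exposed on the load balancer
          targetPort: 80    # port the traffic is forwarded to on the nodes
        - name: https
          port: 443
          targetPort: 443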
diff --git a/main/examples/full-example/index.html b/main/examples/full-example/index.html
index 25e9b024..f2788c10 100644
--- a/main/examples/full-example/index.html
+++ b/main/examples/full-example/index.html
@@ -1,4 +1,4 @@
- Full example - Kubitect      

Full (detailed) example🔗︎

This document contains an example of a Kubitect configuration. The example covers all (or most) of the Kubitect properties and is meant for users who learn fastest from an example configuration.

#
+ Full example - Kubitect      

Full (detailed) example🔗︎

This document contains an example of a Kubitect configuration. The example covers all (or most) of the Kubitect properties and is meant for users who learn fastest from an example configuration.

#
 # The 'hosts' section contains data about the physical servers on which the
 # Kubernetes cluster will be installed.
 #
diff --git a/main/examples/ha-cluster/index.html b/main/examples/ha-cluster/index.html
index 454dd175..d7c6932e 100644
--- a/main/examples/ha-cluster/index.html
+++ b/main/examples/ha-cluster/index.html
@@ -1,4 +1,4 @@
- Highly available (HA) cluster - Kubitect      

Highly available cluster🔗︎

This example demonstrates how to use Kubitect to create a highly available Kubernetes cluster that spans across five hosts. This topology offers redundancy in case of node or host failures.

The final topology of the deployed Kubernetes cluster is shown in the figure below.

Architecture of the highly available cluster

Step 1: Hosts configuration🔗︎

This example involves the deployment of a Kubernetes cluster on five remote physical hosts. The local network subnet used in this setup is 10.10.0.0/20, with the gateway IP address set to 10.10.0.1. All hosts are connected to the same local network and feature a pre-configured bridge interface, named br0.

Tip

This example uses preconfigured bridges on each host to expose nodes on the local network.

Network bridge example shows how to configure a bridge interface using Netplan.

Furthermore, we have configured a user named kubitect on each host, which can be accessed through SSH using the same key stored on our local machine, without the need for a password. The key is located at ~/.ssh/id_rsa_ha.

To deploy the Kubernetes cluster, each host's details must be specified in the Kubitect configuration file. In this case, the host configurations differ only in the host's name and IP address.

ha.yaml
hosts:
+ Highly available (HA) cluster - Kubitect      

Highly available cluster🔗︎

This example demonstrates how to use Kubitect to create a highly available Kubernetes cluster that spans across five hosts. This topology offers redundancy in case of node or host failures.

The final topology of the deployed Kubernetes cluster is shown in the figure below.

Architecture of the highly available cluster

Step 1: Hosts configuration🔗︎

This example involves the deployment of a Kubernetes cluster on five remote physical hosts. The local network subnet used in this setup is 10.10.0.0/20, with the gateway IP address set to 10.10.0.1. All hosts are connected to the same local network and feature a pre-configured bridge interface, named br0.

Tip

This example uses preconfigured bridges on each host to expose nodes on the local network.

Network bridge example shows how to configure a bridge interface using Netplan.

Furthermore, we have configured a user named kubitect on each host, which can be accessed through SSH using the same key stored on our local machine, without the need for a password. The key is located at ~/.ssh/id_rsa_ha.

To deploy the Kubernetes cluster, each host's details must be specified in the Kubitect configuration file. In this case, the host configurations differ only in the host's name and IP address.

ha.yaml
hosts:
   - name: host1
     connection:
       type: remote
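
Beyond the connection type, a remote host also needs the user, IP address, and path to the SSH key. A sketch for host1 under the assumptions of this example (the IP address is illustrative) might look like this:

hosts:
  - name: host1
    connection:
      type: remote
      user: kubitect                 # user configured on the host
      ip: 10.10.0.5                  # illustrative address within 10.10.0.0/20
      ssh:
        keyfile: "~/.ssh/id_rsa_ha"  # passwordless SSH key from this example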
diff --git a/main/examples/multi-master-cluster/index.html b/main/examples/multi-master-cluster/index.html
index 948e92a0..65b5f5ec 100644
--- a/main/examples/multi-master-cluster/index.html
+++ b/main/examples/multi-master-cluster/index.html
@@ -1,4 +1,4 @@
- Multi-master cluster - Kubitect      

Multi-master cluster🔗︎

This example demonstrates how to use Kubitect to set up a Kubernetes cluster with 3 master and 3 worker nodes.

By configuring multiple master nodes, the control plane continues to operate normally even if some master nodes fail. Since Kubitect deploys clusters with a stacked control plane, redundancy is ensured as long as at least (n/2)+1 master nodes are available.

The final topology of the deployed Kubernetes cluster is depicted in the figure below.

Architecture of the cluster with 3 master and 3 worker nodes

Note

This example skips the explanation of some common configurations such as hosts, network, and node template, as they are already covered in detail in the Getting started (step-by-step) guide.

Preset available

To export the preset configuration, run: kubitect export preset example-multi-master

Step 1: Cluster configuration🔗︎

When deploying a multi-master Kubernetes cluster using Kubitect, it is necessary to configure at least one load balancer. The load balancer is responsible for distributing traffic evenly across the control plane nodes. In the event of a master node failure, the load balancer automatically detects the unhealthy node and routes traffic only to the remaining healthy nodes, ensuring the continuous availability of the Kubernetes cluster.

The figure below provides a visual representation of this approach.

Scheme of load balancing between control plane nodes

To create such a cluster, all we need to do is specify the desired node instances and configure one load balancer. The control plane will be accessible through the load balancer's IP address.

multi-master.yaml
cluster:
+ Multi-master cluster - Kubitect      

Multi-master cluster🔗︎

This example demonstrates how to use Kubitect to set up a Kubernetes cluster with 3 master and 3 worker nodes.

By configuring multiple master nodes, the control plane continues to operate normally even if some master nodes fail. Since Kubitect deploys clusters with a stacked control plane, redundancy is ensured as long as at least (n/2)+1 master nodes are available.

The final topology of the deployed Kubernetes cluster is depicted in the figure below.

Architecture of the cluster with 3 master and 3 worker nodes

Note

This example skips the explanation of some common configurations such as hosts, network, and node template, as they are already covered in detail in the Getting started (step-by-step) guide.

Preset available

To export the preset configuration, run: kubitect export preset example-multi-master

Step 1: Cluster configuration🔗︎

When deploying a multi-master Kubernetes cluster using Kubitect, it is necessary to configure at least one load balancer. The load balancer is responsible for distributing traffic evenly across the control plane nodes. In the event of a master node failure, the load balancer automatically detects the unhealthy node and routes traffic only to the remaining healthy nodes, ensuring the continuous availability of the Kubernetes cluster.

The figure below provides a visual representation of this approach.

Scheme of load balancing between control plane nodes

To create such a cluster, all we need to do is specify the desired node instances and configure one load balancer. The control plane will be accessible through the load balancer's IP address.

multi-master.yaml
cluster:
   ...
   nodes:
     loadBalancer:
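
A minimal sketch of such a topology, assuming illustrative node IDs and a virtual IP (vip) through which the control plane is reached, might look like this:

cluster:
  nodes:
    loadBalancer:
      vip: 10.10.13.200       # illustrative IP used to reach the control plane
      instances:
        - id: 1
    master:
      instances:
        - id: 1
        - id: 2
        - id: 3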
diff --git a/main/examples/multi-worker-cluster/index.html b/main/examples/multi-worker-cluster/index.html
index df666c42..db8a3a51 100644
--- a/main/examples/multi-worker-cluster/index.html
+++ b/main/examples/multi-worker-cluster/index.html
@@ -1,4 +1,4 @@
- Multi-worker cluster - Kubitect      

Multi-worker cluster🔗︎

This example demonstrates how to use Kubitect to set up a Kubernetes cluster consisting of one master and three worker nodes. The final topology of the deployed Kubernetes cluster is shown in the figure below.

Architecture of the cluster with 1 master and 3 worker nodes

Note

This example skips the explanation of some common configurations such as hosts, network, and node template, as they are already covered in detail in the Getting started (step-by-step) guide.

Preset available

To export the preset configuration, run: kubitect export preset example-multi-worker

Step 1: Cluster configuration🔗︎

You can easily create a cluster with multiple worker nodes by specifying them in the configuration file. For this example, we have included three worker nodes, but you can add as many as you like to suit your needs.

multi-worker.yaml
cluster:
+ Multi-worker cluster - Kubitect      

Multi-worker cluster🔗︎

This example demonstrates how to use Kubitect to set up a Kubernetes cluster consisting of one master and three worker nodes. The final topology of the deployed Kubernetes cluster is shown in the figure below.

Architecture of the cluster with 1 master and 3 worker nodes

Note

This example skips the explanation of some common configurations such as hosts, network, and node template, as they are already covered in detail in the Getting started (step-by-step) guide.

Preset available

To export the preset configuration, run: kubitect export preset example-multi-worker

Step 1: Cluster configuration🔗︎

You can easily create a cluster with multiple worker nodes by specifying them in the configuration file. For this example, we have included three worker nodes, but you can add as many as you like to suit your needs.

multi-worker.yaml
cluster:
   ...
   nodes:
     master:
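
A minimal sketch of one master and three worker instances (node IDs are illustrative) could look like this:

cluster:
  nodes:
    master:
      instances:
        - id: 1
    worker:
      instances:
        - id: 1
        - id: 2
        - id: 3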
diff --git a/main/examples/network-bridge/index.html b/main/examples/network-bridge/index.html
index 489bf7cb..e583594d 100644
--- a/main/examples/network-bridge/index.html
+++ b/main/examples/network-bridge/index.html
@@ -1,4 +1,4 @@
- Network bridge - Kubitect      

Network bridge🔗︎

Bridged networks allow virtual machines to connect directly to the LAN. To use Kubitect with bridged network mode, a bridge interface must be preconfigured on the host machine. This example shows how to configure a simple bridge interface using Netplan.

NAT vs bridge network scheme

Step 1 - (Pre)configure the bridge on the host🔗︎

Before the network bridge can be created, the name of the host's network interface is required. This interface will be used by the bridge.

To print the available network interfaces of the host, use the following command.

nmcli device | grep ethernet
+ Network bridge - Kubitect      

Network bridge🔗︎

Bridged networks allow virtual machines to connect directly to the LAN. To use Kubitect with bridged network mode, a bridge interface must be preconfigured on the host machine. This example shows how to configure a simple bridge interface using Netplan.

NAT vs bridge network scheme

Step 1 - (Pre)configure the bridge on the host🔗︎

Before the network bridge can be created, the name of the host's network interface is required. This interface will be used by the bridge.

To print the available network interfaces of the host, use the following command.

nmcli device | grep ethernet
 

As with the previous command, the network interfaces can also be listed using the ifconfig or ip commands. Note that these commands output all interfaces, including virtual ones.

ifconfig -a
 # or
 ip a
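
Once the name of the physical interface is known, a minimal Netplan bridge definition could look like the sketch below, assuming the interface is named eth0 and addresses are assigned via DHCP.

network:
  version: 2
  renderer: networkd
  ethernets:
    eth0: {}              # physical interface enslaved to the bridge
  bridges:
    br0:
      interfaces: [eth0]
      dhcp4: true         # or configure a static address instead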
diff --git a/main/examples/rook-cluster/index.html b/main/examples/rook-cluster/index.html
index 3c4ce1f3..631112c6 100644
--- a/main/examples/rook-cluster/index.html
+++ b/main/examples/rook-cluster/index.html
@@ -1,4 +1,4 @@
- Rook cluster - Kubitect      

Rook cluster🔗︎

This example demonstrates how to use Kubitect to set up distributed storage with Rook. To achieve distributed storage, an additional data disk is added to each virtual machine, as depicted in the figure below. This additional data disk is utilized by Rook to provide reliable and scalable distributed storage for the Kubernetes cluster.

Basic Rook cluster scheme

Basic setup🔗︎

Step 1: Define data resource pool🔗︎

To configure distributed storage with Rook, the data disks must be attached to the virtual machines. By default, each data disk is created in the main resource pool. However, it is also possible to configure additional resource pools and associate data disks with them later, depending on your requirements.

In this example, we define an additional resource pool named rook-pool.

rook-sample.yaml
hosts:
+ Rook cluster - Kubitect      

Rook cluster🔗︎

This example demonstrates how to use Kubitect to set up distributed storage with Rook. To achieve distributed storage, an additional data disk is added to each virtual machine, as depicted in the figure below. This additional data disk is utilized by Rook to provide reliable and scalable distributed storage for the Kubernetes cluster.

Basic Rook cluster scheme

Basic setup🔗︎

Step 1: Define data resource pool🔗︎

To configure distributed storage with Rook, the data disks must be attached to the virtual machines. By default, each data disk is created in the main resource pool. However, it is also possible to configure additional resource pools and associate data disks with them later, depending on your requirements.

In this example, we define an additional resource pool named rook-pool.

rook-sample.yaml
hosts:
   - name: localhost
     connection:
       type: local
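
A sketch of such a host definition with an additional resource pool might look as follows; the pool path is an assumption and can point to any location with sufficient disk space.

hosts:
  - name: localhost
    connection:
      type: local
    dataResourcePools:
      - name: rook-pool
        path: /mnt/libvirt/pools/   # illustrative location of the pool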
diff --git a/main/examples/single-node-cluster/index.html b/main/examples/single-node-cluster/index.html
index e360c7ee..0cab14ab 100644
--- a/main/examples/single-node-cluster/index.html
+++ b/main/examples/single-node-cluster/index.html
@@ -1,4 +1,4 @@
- Single node cluster - Kubitect      

Single node cluster🔗︎

This example demonstrates how to set up a single-node Kubernetes cluster using Kubitect. In a single-node cluster, only one master node needs to be configured. The topology of the Kubernetes cluster deployed in this guide is shown below.

Architecture of a single node cluster

Note

This example skips the explanation of some common configurations such as hosts, network, and node template, as they are already covered in detail in the Getting started (step-by-step) guide.

Preset available

To export the preset configuration, run: kubitect export preset example-single-node

Step 1: Create the configuration🔗︎

To initialize a single-node Kubernetes cluster, you need to specify a single master node in the cluster configuration file.

single-node.yaml
cluster:
+ Single node cluster - Kubitect      

Single node cluster🔗︎

This example demonstrates how to set up a single-node Kubernetes cluster using Kubitect. In a single-node cluster, only one master node needs to be configured. The topology of the Kubernetes cluster deployed in this guide is shown below.

Architecture of a single node cluster

Note

This example skips the explanation of some common configurations such as hosts, network, and node template, as they are already covered in detail in the Getting started (step-by-step) guide.

Preset available

To export the preset configuration, run: kubitect export preset example-single-node

Step 1: Create the configuration🔗︎

To initialize a single-node Kubernetes cluster, you need to specify a single master node in the cluster configuration file.

single-node.yaml
cluster:
   ...
   nodes:
     master:
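
A minimal sketch of such a configuration, with an illustrative node ID and IP address, could look like this:

cluster:
  nodes:
    master:
      instances:
        - id: 1
          ip: 192.168.113.10   # illustrative address within the cluster network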
diff --git a/main/getting-started/getting-started/index.html b/main/getting-started/getting-started/index.html
index fc8109f3..03c7ca99 100644
--- a/main/getting-started/getting-started/index.html
+++ b/main/getting-started/getting-started/index.html
@@ -1,4 +1,4 @@
- Getting started (step-by-step) - Kubitect      

Getting Started🔗︎

In the quick start guide, we learned how to create a Kubernetes cluster using a preset configuration. Now, we will explore how to create a customized cluster topology that meets your specific requirements.

This step-by-step guide will walk you through the process of creating a custom cluster configuration file from scratch and using it to create a functional Kubernetes cluster with one master and one worker node. By following the steps outlined in this guide, you will have a Kubernetes cluster up and running in no time.

Base scheme of the cluster with one master and one worker node

Step 1 - Ensure all requirements are met🔗︎

Before progressing with this guide, take a minute to ensure that all of the requirements are met. Afterwards, simply create a new YAML file and open it in a text editor of your choice.

Step 2 - Prepare hosts configuration🔗︎

In the cluster configuration file, the first step is to define hosts. Hosts represent target servers that can be either local or remote machines.

When setting up the cluster on your local host, where the command line tool is installed, be sure to specify a host with a connection type set to local.

kubitect.yaml
hosts:
+ Getting started (step-by-step) - Kubitect      

Getting Started🔗︎

In the quick start guide, we learned how to create a Kubernetes cluster using a preset configuration. Now, we will explore how to create a customized cluster topology that meets your specific requirements.

This step-by-step guide will walk you through the process of creating a custom cluster configuration file from scratch and using it to create a functional Kubernetes cluster with one master and one worker node. By following the steps outlined in this guide, you will have a Kubernetes cluster up and running in no time.

Base scheme of the cluster with one master and one worker node

Step 1 - Ensure all requirements are met🔗︎

Before progressing with this guide, take a minute to ensure that all of the requirements are met. Afterwards, simply create a new YAML file and open it in a text editor of your choice.

Step 2 - Prepare hosts configuration🔗︎

In the cluster configuration file, the first step is to define hosts. Hosts represent target servers that can be either local or remote machines.

When setting up the cluster on your local host, where the command line tool is installed, be sure to specify a host with a connection type set to local.

kubitect.yaml
hosts:
   - name: localhost # (1)!
     connection:
       type: local
@@ -115,4 +115,4 @@
 #   - my-cluster (active)
 

Step 5 - Test the cluster🔗︎

Finally, to confirm that the cluster is ready, you can list its nodes using the kubectl command:

kubectl --context k8s-cluster get nodes
 
Where do I find kubeconfig?

Once the Kubernetes cluster is deployed, the Kubeconfig file can be found in the cluster's directory.

You can easily export the Kubeconfig into a separate file using the following command, which creates a file named kubeconfig.yaml in your current directory.

kubitect export kubeconfig --cluster k8s-cluster > kubeconfig.yaml
-

The kubeconfig can also be automatically merged into the existing ~/.kube/config when a cluster is created, by setting the mergeKubeconfig property to true in the cluster's configuration file.

👏 Congratulations, you have completed the getting started guide.

\ No newline at end of file +

The kubeconfig can also be automatically merged into the existing ~/.kube/config when a cluster is created, by setting the mergeKubeconfig property to true in the cluster's configuration file.

👏 Congratulations, you have completed the getting started guide.

\ No newline at end of file diff --git a/main/getting-started/installation/index.html b/main/getting-started/installation/index.html index 4a4290a5..40b5db79 100644 --- a/main/getting-started/installation/index.html +++ b/main/getting-started/installation/index.html @@ -1,4 +1,4 @@ - Installation - Kubitect

Installation🔗︎

Install Kubitect CLI tool🔗︎

Download the Kubitect binary file from the release page.

curl -o kubitect.tar.gz -L https://dl.kubitect.io/linux/amd64/latest
+ Installation - Kubitect      

Installation🔗︎

Install Kubitect CLI tool🔗︎

Download the Kubitect binary file from the release page.

curl -o kubitect.tar.gz -L https://dl.kubitect.io/linux/amd64/latest
 

Unpack the tar.gz file.

tar -xzf kubitect.tar.gz
 

Install the Kubitect command line tool by placing the Kubitect binary file in the /usr/local/bin directory.

sudo mv kubitect /usr/local/bin/
 

Note

The download URL is a combination of the operating system type, system architecture and version of Kubitect (https://dl.kubitect.io/<os>/<arch>/<version>).

All releases can be found on GitHub release page.

As many clusters on as many hosts

With Kubitect, you can set up a variety of cluster topologies, from single-node clusters to complex clusters running on multiple hosts.

Open source forever

Kubitect is based exclusively on open source technologies and will therefore remain open source forever!

Reliable, scalable and reproducible

By using a single configuration file, Kubitect clusters can be easily replicated in different environments, while Kubespray ensures reliability and scalability.

From home-labs to companies

Bridging the gap between local and enterprise environments, Kubitect enables you to set up a cluster for your home lab or a business without the need for in-depth knowledge.

\ No newline at end of file + Kubitect - Kubitect

Introduction

What is Kubitect?

Kubitect is an open source project that aims to simplify the deployment and subsequent management of Kubernetes clusters. It provides a CLI tool written in Go that lets you set up, update, scale, and destroy Kubernetes clusters.

Under the hood, it uses Terraform along with terraform-libvirt-provider to deploy virtual machines on target hosts running libvirt. Kubernetes is configured on the deployed virtual machines using Kubespray, the popular open source project.

Core goals

As many clusters on as many hosts

With Kubitect, you can set up a variety of cluster topologies, from single-node clusters to complex clusters running on multiple hosts.

Open source forever

Kubitect is based exclusively on open source technologies and will therefore remain open source forever!

Reliable, scalable and reproducible

By using a single configuration file, Kubitect clusters can be easily replicated in different environments, while Kubespray ensures reliability and scalability.

From home-labs to companies

Bridging the gap between local and enterprise environments, Kubitect enables you to set up a cluster for your home lab or a business without the need for in-depth knowledge.

\ No newline at end of file diff --git a/main/getting-started/other/local-development/index.html b/main/getting-started/other/local-development/index.html index 0fa09b88..2eee4e84 100644 --- a/main/getting-started/other/local-development/index.html +++ b/main/getting-started/other/local-development/index.html @@ -1,4 +1,4 @@ - Local development - Kubitect

Local development🔗︎

This document shows how to build the Kubitect CLI tool manually and how to use the project without creating any files outside the project's directory.

Prerequisites🔗︎

Step 1: Clone the project🔗︎

First, clone the project.

git clone https://github.com/MusicDin/kubitect
+ Local development - Kubitect      

Local development🔗︎

This document shows how to build the Kubitect CLI tool manually and how to use the project without creating any files outside the project's directory.

Prerequisites🔗︎

Step 1: Clone the project🔗︎

First, clone the project.

git clone https://github.com/MusicDin/kubitect
 

Afterwards, move into the cloned project.

cd kubitect
 

Step 2: Build Kubitect CLI tool🔗︎

The Kubitect CLI tool can be manually built using Go. Running the following command will produce a kubitect binary file.

go build -o kubitect ./cmd
 

To make the binary file globally accessible, move it to the /usr/local/bin/ directory.

sudo mv kubitect /usr/local/bin/kubitect
diff --git a/main/getting-started/other/troubleshooting/index.html b/main/getting-started/other/troubleshooting/index.html
index 0158492c..4103efe4 100644
--- a/main/getting-started/other/troubleshooting/index.html
+++ b/main/getting-started/other/troubleshooting/index.html
@@ -1,4 +1,4 @@
- Troubleshooting - Kubitect      

Troubleshooting🔗︎

Is your issue not listed here?

If the troubleshooting page is missing an error you encountered, please report it on GitHub by opening an issue. By doing so, you will help improve the project and help others find the solution to the same problem faster.

General errors🔗︎

Virtualenv not found🔗︎

Error

Output: /bin/sh: 1: virtualenv: not found

/bin/sh: 2: ansible-playbook: not found

Explanation

The error indicates that the virtualenv is not installed.

Solution

There are many ways to install virtualenv. For all installation options, refer to the official documentation - Virtualenv installation.

For example, virtualenv can be installed using pip.

First install pip.

sudo apt install python3-pip
+ Troubleshooting - Kubitect      

Troubleshooting🔗︎

Is your issue not listed here?

If the troubleshooting page is missing an error you encountered, please report it on GitHub by opening an issue. By doing so, you will help improve the project and help others find the solution to the same problem faster.

General errors🔗︎

Virtualenv not found🔗︎

Error

Output: /bin/sh: 1: virtualenv: not found

/bin/sh: 2: ansible-playbook: not found

Explanation

The error indicates that the virtualenv is not installed.

Solution

There are many ways to install virtualenv. For all installation options, refer to the official documentation - Virtualenv installation.

For example, virtualenv can be installed using pip.

First install pip.

sudo apt install python3-pip
 

Then install virtualenv using pip3.

pip3 install virtualenv
 

KVM/Libvirt errors🔗︎

Failed to connect socket (No such file or directory)🔗︎

Error

Error: virError(Code=38, Domain=7, Message='Failed to connect socket to '/var/run/libvirt/libvirt-sock': No such file or directory')

Explanation

The problem may occur when libvirt is not started.

Solution

Make sure that the libvirt service is running:

sudo systemctl status libvirtd
 

If the libvirt service is not running, start it:

sudo systemctl start libvirtd
diff --git a/main/getting-started/quick-start/index.html b/main/getting-started/quick-start/index.html
index 421ff0bc..12f62a8d 100644
--- a/main/getting-started/quick-start/index.html
+++ b/main/getting-started/quick-start/index.html
@@ -1,4 +1,4 @@
- Quick start - Kubitect      

Quick start🔗︎

In this quick guide, we will show you how to use the Kubitect command line tool to quickly deploy a simple Kubernetes cluster.

To get started, you will need to apply a cluster configuration file to the Kubitect command line tool. You can either prepare this file manually, as explained in our Getting started guide, or use one of the available presets.

For the purposes of this quick start guide, we will be using a getting-started preset, which defines a cluster with one master and one worker node. The resulting infrastructure is shown in the image below.

Architecture of the cluster with one master and one worker node

Step 1 - Create a Kubernetes cluster🔗︎

Export the getting-started preset:

kubitect export preset --name getting-started > cluster.yaml
+ Quick start - Kubitect      

Quick start🔗︎

In this quick guide, we will show you how to use the Kubitect command line tool to quickly deploy a simple Kubernetes cluster.

To get started, you will need to apply a cluster configuration file to the Kubitect command line tool. You can either prepare this file manually, as explained in our Getting started guide, or use one of the available presets.

For the purposes of this quick start guide, we will be using a getting-started preset, which defines a cluster with one master and one worker node. The resulting infrastructure is shown in the image below.

Architecture of the cluster with one master and one worker node

Step 1 - Create a Kubernetes cluster🔗︎

Export the getting-started preset:

kubitect export preset --name getting-started > cluster.yaml
 

Then, apply the exported configuration file using Kubitect:

kubitect apply --config cluster.yaml
 

That's it! The cluster, named k8s-cluster, should be up and running in approximately 10 minutes.

Step 2 - Test the cluster🔗︎

To test that the cluster is up and running, display all cluster nodes using the exported Kubeconfig and the kubectl command:

kubectl --context k8s-cluster get nodes
-

👏 Congratulations, you have successfully deployed a Kubernetes cluster using Kubitect!

\ No newline at end of file +

👏 Congratulations, you have successfully deployed a Kubernetes cluster using Kubitect!

\ No newline at end of file diff --git a/main/getting-started/requirements/index.html b/main/getting-started/requirements/index.html index 1056037a..bdfd0de8 100644 --- a/main/getting-started/requirements/index.html +++ b/main/getting-started/requirements/index.html @@ -1,2 +1,2 @@ - Requirements - Kubitect

Requirements🔗︎

On the local host (where Kubitect command-line tool is installed), the following requirements must be met:

Git

Python >= 3.8

Python virtualenv

Password-less SSH key for each remote host


On hosts where a Kubernetes cluster will be deployed using Kubitect, the following requirements must be met:

A libvirt virtualization API

A running hypervisor that is supported by libvirt (e.g. KVM)

How to install KVM?

To install the KVM (Kernel-based Virtual Machine) hypervisor and libvirt, use apt or yum to install the following packages:

  • qemu-kvm
  • libvirt-clients
  • libvirt-daemon
  • libvirt-daemon-system

After the installation, add your user to the kvm group in order to access the kvm device:

sudo usermod -aG kvm $USER
+ Requirements - Kubitect      

Requirements🔗︎

On the local host (where Kubitect command-line tool is installed), the following requirements must be met:

Git

Python >= 3.8

Python virtualenv

Password-less SSH key for each remote host


On hosts where a Kubernetes cluster will be deployed using Kubitect, the following requirements must be met:

A libvirt virtualization API

A running hypervisor that is supported by libvirt (e.g. KVM)

How to install KVM?

To install the KVM (Kernel-based Virtual Machine) hypervisor and libvirt, use apt or yum to install the following packages:

  • qemu-kvm
  • libvirt-clients
  • libvirt-daemon
  • libvirt-daemon-system

After the installation, add your user to the kvm group in order to access the kvm device:

sudo usermod -aG kvm $USER
 
\ No newline at end of file diff --git a/main/index.html b/main/index.html index 13113560..8c124680 100644 --- a/main/index.html +++ b/main/index.html @@ -1 +1 @@ - Kubitect - Kubitect
Kubitect
A coherent CLI tool for deploying Kubernetes clusters.
Get started
Create your first cluster
\ No newline at end of file + Kubitect - Kubitect
Kubitect
A coherent CLI tool for deploying Kubernetes clusters.
Get started
Create your first cluster
\ No newline at end of file diff --git a/main/sitemap.xml.gz b/main/sitemap.xml.gz index 3584e9f73c429ac8b67a6e1be4c652e32529eb92..18c6e92170c831ca88964ae7b1d2d1c9d1421052 100644 GIT binary patch delta 13 Ucmb=gXP58h;Aog-G?Bdm031~W8~^|S delta 13 Ucmb=gXP58h;9$5SH<7&p02(I)!T Before you begin - Kubitect

Before you begin🔗︎

The user guide is divided into three subsections: Cluster Management, Configuration, and Reference. The Cluster Management subsection introduces the operations that can be performed on a cluster. The Configuration subsection explains the configurable Kubitect properties. Finally, the Reference subsection contains a configuration and CLI reference.

The following symbol conventions are used throughout the user guide:

  • - Indicates the Kubitect version in which the property was either added or last modified.
  • - Indicates that the property is required in every valid configuration.
  • - Indicates the default value of the property.
  • - Indicates that the feature or property is experimental (not yet stable). This means that its implementation may change drastically over time and that its activation may lead to unexpected behavior.
\ No newline at end of file + Before you begin - Kubitect

Before you begin🔗︎

The user guide is divided into three subsections: Cluster Management, Configuration, and Reference. The Cluster Management subsection introduces the operations that can be performed on a cluster. The Configuration subsection explains the configurable Kubitect properties. Finally, the Reference subsection contains a configuration and CLI reference.

The following symbol conventions are used throughout the user guide:

  • - Indicates the Kubitect version in which the property was either added or last modified.
  • - Indicates that the property is required in every valid configuration.
  • - Indicates the default value of the property.
  • - Indicates that the feature or property is experimental (not yet stable). This means that its implementation may change drastically over time and that its activation may lead to unexpected behavior.
\ No newline at end of file diff --git a/main/user-guide/configuration/addons/index.html b/main/user-guide/configuration/addons/index.html index cda0eceb..3b4cb989 100644 --- a/main/user-guide/configuration/addons/index.html +++ b/main/user-guide/configuration/addons/index.html @@ -1,4 +1,4 @@ - Addons - Kubitect

Addons🔗︎

Configuration🔗︎

Kubespray addons🔗︎

v2.1.0

Kubespray provides a variety of configurable addons to enhance the functionality of Kubernetes. Some popular addons include the Ingress-NGINX controller and MetalLB.

Kubespray addons can be configured under the addons.kubespray property. It's important to note that the Kubespray addons are configured in the same way as they would be for Kubespray itself, as Kubitect copies the provided configuration into Kubespray's group variables during cluster creation.

The full range of available addons can be explored in the Kubespray addons sample, which is available on GitHub. Most addons are also documented in the official Kubespray documentation.

addons:
+ Addons - Kubitect      

Addons🔗︎

Configuration🔗︎

Kubespray addons🔗︎

v2.1.0

Kubespray provides a variety of configurable addons to enhance the functionality of Kubernetes. Some popular addons include the Ingress-NGINX controller and MetalLB.

Kubespray addons can be configured under the addons.kubespray property. It's important to note that the Kubespray addons are configured in the same way as they would be for Kubespray itself, as Kubitect copies the provided configuration into Kubespray's group variables during cluster creation.

The full range of available addons can be explored in the Kubespray addons sample, which is available on GitHub. Most addons are also documented in the official Kubespray documentation.

addons:
   kubespray:
 
     # Nginx ingress controller deployment
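
For instance, enabling the Ingress-NGINX controller and MetalLB could look roughly like the sketch below. The variable names follow Kubespray's group variables and may differ between Kubespray versions.

addons:
  kubespray:
    # Nginx ingress controller deployment
    ingress_nginx_enabled: true
    # MetalLB deployment
    metallb_enabled: true
    metallb_speaker_enabled: true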
diff --git a/main/user-guide/configuration/cluster-name/index.html b/main/user-guide/configuration/cluster-name/index.html
index f37f8ce5..945120af 100644
--- a/main/user-guide/configuration/cluster-name/index.html
+++ b/main/user-guide/configuration/cluster-name/index.html
@@ -1,3 +1,3 @@
- Cluster name - Kubitect      

Cluster metadata🔗︎

Configuration🔗︎

Cluster name🔗︎

v2.0.0 Required

The cluster name must be defined in the Kubitect configuration, as it acts as a prefix for all cluster resources.

cluster:
+ Cluster name - Kubitect      

Cluster metadata🔗︎

Configuration🔗︎

Cluster name🔗︎

v2.0.0 Required

The cluster name must be defined in the Kubitect configuration, as it acts as a prefix for all cluster resources.

cluster:
   name: my-cluster
 

For instance, each virtual machine name is generated as <cluster.name>-<node.type>-<node.instance.id>. Therefore, the name of the virtual machine for the worker node with ID 1 would be my-cluster-worker-1.

Note

Cluster name cannot contain prefix local, as it is reserved for local clusters (created with --local flag).

\ No newline at end of file diff --git a/main/user-guide/configuration/cluster-network/index.html b/main/user-guide/configuration/cluster-network/index.html index f0b82401..393dbcde 100644 --- a/main/user-guide/configuration/cluster-network/index.html +++ b/main/user-guide/configuration/cluster-network/index.html @@ -1,4 +1,4 @@ - Cluster network - Kubitect

Cluster network🔗︎

The network section of the Kubitect configuration file defines the properties of the network to be created, or of the network to which the cluster nodes are to be assigned.

Configuration🔗︎

Network mode🔗︎

v2.0.0 Required

Kubitect supports two network modes: NAT and bridge.

cluster:
+ Cluster network - Kubitect      

Cluster network🔗︎

The network section of the Kubitect configuration file defines the properties of the network to be created, or of the network to which the cluster nodes are to be assigned.

Configuration🔗︎

Network mode🔗︎

v2.0.0 Required

Kubitect supports two network modes: NAT and bridge.

cluster:
   network:
     mode: nat
 

NAT mode🔗︎

In NAT (Network Address Translation) mode, the libvirt virtual network is created for the cluster, which reduces the need for manual configurations. However, it's limited to a single host, i.e., a single physical server.

Bridge mode🔗︎

In bridge mode, a real host network device is shared with the virtual machines, allowing each virtual machine to bind to any available IP address on the local network, just like a physical computer. This approach makes the virtual machine visible on the network, enabling the creation of clusters across multiple physical servers.

To use bridged networks, you need to preconfigure the bridge interface on each target host. This is necessary because each environment is unique. For instance, you might use link aggregation (also known as link bonding or teaming), which cannot be detected automatically and therefore requires manual configuration. The Network bridge example provides instructions on how to create a bridge interface with netplan and configure Kubitect to use it.

Network CIDR🔗︎

v2.0.0 Required

The network CIDR (Classless Inter-Domain Routing) represents the network in the form of <network_ip>/<network_prefix_bits>. All IP addresses specified in the cluster section of the configuration must be within this network range, including the network gateway, node instances, floating IP of the load balancer, and so on.

In NAT network mode, the network CIDR defines an unused private network that is created. In bridge mode, the network CIDR should specify the network to which the cluster belongs.

cluster:
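
For instance, in bridge mode the CIDR typically matches the local network to which the preconfigured bridge is attached (values below are illustrative):

cluster:
  network:
    mode: bridge
    cidr: 10.10.0.0/20
    bridge: br0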
diff --git a/main/user-guide/configuration/cluster-node-template/index.html b/main/user-guide/configuration/cluster-node-template/index.html
index d73012cd..8148d738 100644
--- a/main/user-guide/configuration/cluster-node-template/index.html
+++ b/main/user-guide/configuration/cluster-node-template/index.html
@@ -1,4 +1,4 @@
- Cluster node template - Kubitect      

Cluster node template🔗︎

The node template section of the cluster configuration defines the properties of all nodes in the cluster. This includes the properties of the operating system (OS), DNS, and the virtual machine user.

Configuration🔗︎

Virtual machine user🔗︎

v2.0.0 Default: k8s

The user property defines the name of the user created on each virtual machine. This user is used to access the virtual machines during cluster configuration. If you omit the user property, a user named k8s is created on all virtual machines. You can also use this user later to access each virtual machine via SSH.

cluster:
+ Cluster node template - Kubitect      

Cluster node template🔗︎

The node template section of the cluster configuration defines the properties of all nodes in the cluster. This includes the properties of the operating system (OS), DNS, and the virtual machine user.

Configuration🔗︎

Virtual machine user🔗︎

v2.0.0 Default: k8s

The user property defines the name of the user created on each virtual machine. This user is used to access the virtual machines during cluster configuration. If you omit the user property, a user named k8s is created on all virtual machines. You can also use this user later to access each virtual machine via SSH.

cluster:
   nodeTemplate:
     user: kubitect
 

Operating system (OS)🔗︎

OS distribution🔗︎

v2.1.0 Default: ubuntu22

The operating system for virtual machines can be specified in the node template. By default, the Ubuntu distribution is installed on all virtual machines.

You can select a desired distribution by setting the os.distro property.

cluster:
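
A sketch of selecting a different distribution is shown below; debian is used as an assumed example value, and the full set of supported distributions is listed in the configuration reference.

cluster:
  nodeTemplate:
    os:
      distro: debian   # assumed value; consult the configuration reference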
diff --git a/main/user-guide/configuration/cluster-nodes/index.html b/main/user-guide/configuration/cluster-nodes/index.html
index 9a7f3d02..7a52c269 100644
--- a/main/user-guide/configuration/cluster-nodes/index.html
+++ b/main/user-guide/configuration/cluster-nodes/index.html
@@ -1,4 +1,4 @@
- Cluster nodes - Kubitect      

Cluster nodes🔗︎

Background🔗︎

Kubitect allows configuration of three distinct node types: worker nodes, master nodes (control plane), and load balancers.

Worker nodes🔗︎

Worker nodes in a Kubernetes cluster are responsible for executing the application workloads of the system. Adding more worker nodes to the cluster enhances redundancy in case of worker node failure, while allocating more resources to each worker node reduces overhead and leaves more resources available for the actual applications.

Kubitect does not offer automatic scaling of worker nodes based on resource demand. However, you can easily add or remove worker nodes by applying a modified cluster configuration.

Master nodes🔗︎

The master node plays a vital role in a Kubernetes cluster as it manages the overall state of the system and coordinates the workloads running on the worker nodes. Therefore, it is essential to configure at least one master node for every cluster.

Please note that Kubitect currently supports only a stacked control plane where etcd key-value stores are deployed on control plane nodes. To ensure the best possible fault tolerance, it is important to configure an odd number of control plane nodes. For more information, please refer to the etcd FAQ.

Load balancer nodes🔗︎

In a Kubernetes cluster with multiple control plane nodes, it is necessary to configure at least one load balancer. A load balancer distributes incoming network traffic across multiple control plane nodes, ensuring the cluster operates normally even if any control plane node fails.

However, configuring only one load balancer represents a single point of failure for the cluster. If it fails, incoming traffic will not be distributed to the control plane nodes, potentially resulting in downtime. Therefore, configuring multiple load balancers is essential to ensure high availability for the cluster.

Nodes configuration structure🔗︎

The configuration structure for the nodes is as follows:

cluster:
+ Cluster nodes - Kubitect      

Cluster nodes🔗︎

Background🔗︎

Kubitect allows configuration of three distinct node types: worker nodes, master nodes (control plane), and load balancers.

Worker nodes🔗︎

Worker nodes in a Kubernetes cluster are responsible for executing the application workloads of the system. Adding more worker nodes to the cluster enhances redundancy in case of worker node failure, while allocating more resources to each worker node reduces overhead and leaves more resources available for the actual applications.

Kubitect does not offer automatic scaling of worker nodes based on resource demand. However, you can easily add or remove worker nodes by applying a modified cluster configuration.

Master nodes🔗︎

The master node plays a vital role in a Kubernetes cluster as it manages the overall state of the system and coordinates the workloads running on the worker nodes. Therefore, it is essential to configure at least one master node for every cluster.

Please note that Kubitect currently supports only a stacked control plane where etcd key-value stores are deployed on control plane nodes. To ensure the best possible fault tolerance, it is important to configure an odd number of control plane nodes. For more information, please refer to the etcd FAQ.

Load balancer nodes🔗︎

In a Kubernetes cluster with multiple control plane nodes, it is necessary to configure at least one load balancer. A load balancer distributes incoming network traffic across multiple control plane nodes, ensuring the cluster operates normally even if any control plane node fails.

However, configuring only one load balancer represents a single point of failure for the cluster. If it fails, incoming traffic will not be distributed to the control plane nodes, potentially resulting in downtime. Therefore, configuring multiple load balancers is essential to ensure high availability for the cluster.

Nodes configuration structure🔗︎

The configuration structure for the nodes is as follows:

cluster:
   nodes:
     masters:
       ...
diff --git a/main/user-guide/configuration/hosts/index.html b/main/user-guide/configuration/hosts/index.html
index efb2f225..fae3f9d5 100644
--- a/main/user-guide/configuration/hosts/index.html
+++ b/main/user-guide/configuration/hosts/index.html
@@ -1,4 +1,4 @@
- Hosts - Kubitect      

Hosts configuration🔗︎

Defining hosts is an essential step when deploying a Kubernetes cluster with Kubitect. Hosts represent the target servers where the cluster will be deployed.

Every valid configuration must contain at least one host, which can be either local or remote. However, you can add as many hosts as needed to support your cluster deployment.

Configuration🔗︎

Localhost🔗︎

v2.0.0

To configure a local host, you simply need to specify a host with the connection type set to local.

hosts:
+ Hosts - Kubitect      

Hosts configuration🔗︎

Defining hosts is an essential step when deploying a Kubernetes cluster with Kubitect. Hosts represent the target servers where the cluster will be deployed.

Every valid configuration must contain at least one host, which can be either local or remote. However, you can add as many hosts as needed to support your cluster deployment.

Configuration🔗︎

Localhost🔗︎

v2.0.0

To configure a local host, you simply need to specify a host with the connection type set to local.

hosts:
   - name: localhost # (1)!
     connection:
       type: local
diff --git a/main/user-guide/configuration/kubernetes/index.html b/main/user-guide/configuration/kubernetes/index.html
index 1459c25d..b35b6ebf 100644
--- a/main/user-guide/configuration/kubernetes/index.html
+++ b/main/user-guide/configuration/kubernetes/index.html
@@ -1,4 +1,4 @@
- Kubernetes - Kubitect      

Kubernetes configuration🔗︎

The Kubernetes section of the configuration file contains properties that are specific to Kubernetes, such as the Kubernetes version and network plugin.

Configuration🔗︎

Kubernetes manager🔗︎

v3.4.0 Default: kubespray

Specifies the manager used for deploying the Kubernetes cluster. Supported values are kubespray and k3s.

kubernetes:
+ Kubernetes - Kubitect      

Kubernetes configuration🔗︎

The Kubernetes section of the configuration file contains properties that are specific to Kubernetes, such as the Kubernetes version and network plugin.

Configuration🔗︎

Kubernetes manager🔗︎

v3.4.0 Default: kubespray

Specifies the manager used for deploying the Kubernetes cluster. Supported values are kubespray and k3s.

kubernetes:
   manager: k3s
 

Warning

Support for the K3s manager has been added recently and may therefore not be fully stable.

Kubernetes version🔗︎

v3.0.0 Default: v1.28.6

By default, the Kubernetes cluster will be deployed using version v1.28.6, but you can specify a different version if necessary.

kubernetes:
   version: v1.28.6
diff --git a/main/user-guide/management/destroying/index.html b/main/user-guide/management/destroying/index.html
index 2a9a7ad7..d42307fa 100644
--- a/main/user-guide/management/destroying/index.html
+++ b/main/user-guide/management/destroying/index.html
@@ -1,2 +1,2 @@
- Destroying the cluster - Kubitect      

Destroying the cluster🔗︎

Destroy the cluster🔗︎

Important

This action is irreversible and any data stored within the cluster will be lost.

To destroy a specific cluster, simply run the destroy command, specifying the name of the cluster to be destroyed.

kubitect destroy --cluster my-cluster
+ Destroying the cluster - Kubitect      

Destroying the cluster🔗︎

Destroy the cluster🔗︎

Important

This action is irreversible and any data stored within the cluster will be lost.

To destroy a specific cluster, simply run the destroy command, specifying the name of the cluster to be destroyed.

kubitect destroy --cluster my-cluster
 

Keep in mind that this action will permanently remove all resources associated with the cluster, including virtual machines, resource pools and configuration files.

\ No newline at end of file diff --git a/main/user-guide/management/scaling/index.html b/main/user-guide/management/scaling/index.html index 4d470f96..312c630f 100644 --- a/main/user-guide/management/scaling/index.html +++ b/main/user-guide/management/scaling/index.html @@ -1,4 +1,4 @@ - Scaling the cluster - Kubitect

Scaling the cluster🔗︎

Any cluster created with Kubitect can be subsequently scaled. To do so, simply change the configuration and reapply it using the scale action.

Info

Currently, only worker nodes and load balancers can be scaled.

Export the cluster configuration🔗︎

Exporting the current cluster configuration is optional, but strongly recommended to ensure that changes are made to the latest version of the configuration. The cluster configuration file can be exported using the export command.

kubitect export config --cluster my-cluster > cluster.yaml
+ Scaling the cluster - Kubitect      

Scaling the cluster🔗︎

Any cluster created with Kubitect can be subsequently scaled. To do so, simply change the configuration and reapply it using the scale action.

Info

Currently, only worker nodes and load balancers can be scaled.

Export the cluster configuration🔗︎

Exporting the current cluster configuration is optional, but strongly recommended to ensure that changes are made to the latest version of the configuration. The cluster configuration file can be exported using the export command.

kubitect export config --cluster my-cluster > cluster.yaml
 

Scale the cluster🔗︎

In the configuration file, add new or remove existing nodes.

cluster.yaml
cluster:
   ...
   nodes:
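
For example, scaling from one to two worker nodes might look roughly like this (node IDs are illustrative):

cluster:
  ...
  nodes:
    worker:
      instances:
        - id: 1
        - id: 2   # newly added worker node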
diff --git a/main/user-guide/management/upgrading/index.html b/main/user-guide/management/upgrading/index.html
index 013db82e..e80a37dc 100644
--- a/main/user-guide/management/upgrading/index.html
+++ b/main/user-guide/management/upgrading/index.html
@@ -1,4 +1,4 @@
- Upgrading the cluster - Kubitect      

+ Upgrading the cluster - Kubitect      

Upgrading the cluster🔗︎

A running Kubernetes cluster can be upgraded to a higher version by increasing the Kubernetes version in the cluster's configuration file and reapplying it using the upgrade action.

Export the cluster configuration🔗︎

Exporting the current cluster configuration is optional, but strongly recommended to ensure that changes are made to the latest version of the configuration. The cluster configuration file can be exported using the export command.

kubitect export config --cluster my-cluster > cluster.yaml
 

Upgrade the cluster🔗︎

In the cluster configuration file, change the Kubernetes version.

cluster.yaml
kubernetes:
   version: v1.24.5 # Old value: v1.23.6
   ...
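
To complete the upgrade, the configuration is then reapplied with the upgrade action; based on the CLI reference below, the command would look roughly like this:

kubitect apply --config cluster.yaml --action upgrade
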
diff --git a/main/user-guide/reference/cli/index.html b/main/user-guide/reference/cli/index.html
index c74a6fb5..99c5d9dd 100644
--- a/main/user-guide/reference/cli/index.html
+++ b/main/user-guide/reference/cli/index.html
@@ -1,4 +1,4 @@
- CLI tool reference - Kubitect      

+ CLI tool reference - Kubitect      

CLI reference🔗︎

This document contains a reference of the Kubitect CLI tool. It documents each command along with its flags.

Tip

All available commands can be displayed by running kubitect --help or simply kubitect -h.

To see the help for a particular command, run kubitect <command> -h.

Kubitect commands🔗︎

kubitect apply🔗︎

Apply the cluster configuration.

Usage

kubitect apply [flags]
 

Flags

  • -a, --action <string>
      cluster action: create | scale | upgrade (default: create)
  • --auto-approve
      automatically approve any user permission requests
  • -c, --config <string>
      path to the cluster config file
  • -l, --local
      use the current directory as the cluster path
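
For example, creating a cluster from a local configuration file could look like this (the file name is a placeholder):

kubitect apply --config cluster.yaml --action create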

kubitect destroy🔗︎

Destroy the cluster with a given name. Executing the following command will permanently delete all resources associated with the cluster, including virtual machines and configuration files.

Important

Please be aware that this action is irreversible and any data stored within the cluster will be lost.

Usage

kubitect destroy [flags]
 

Flags

  • --auto-approve
      automatically approve any user permission requests
  • --cluster <string>
      name of the cluster to be used (default: default)
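
For example, to destroy a cluster without interactive confirmation (the cluster name is a placeholder):

kubitect destroy --cluster my-cluster --auto-approve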

kubitect export config🔗︎

Print the cluster's configuration file to the standard output.

Usage

kubitect export config [flags]
 

Flags

  • --cluster <string>
      name of the cluster to be used (default: default)
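
For example, redirecting the output to a file, as shown in the management guides above:

kubitect export config --cluster my-cluster > cluster.yaml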

kubitect export kubeconfig🔗︎

Print the cluster's kubeconfig to the standard output.

Usage

kubitect export kubeconfig [flags]
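
The hunk ends here, so the flags are not shown; assuming the command accepts the same --cluster flag as kubitect export config, a typical invocation might be:

kubitect export kubeconfig --cluster my-cluster > kubeconfig.yaml
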
diff --git a/main/user-guide/reference/configuration/index.html b/main/user-guide/reference/configuration/index.html
index e166af9e..8b7d2811 100644
--- a/main/user-guide/reference/configuration/index.html
+++ b/main/user-guide/reference/configuration/index.html
@@ -1 +1 @@
- Configuration reference - Kubitect      

\ No newline at end of file
+ Configuration reference - Kubitect

Configuration reference🔗︎

This document contains a reference of the Kubitect configuration file and documents all possible configuration properties.

The configuration sections are as follows (a minimal skeleton is sketched after this list):

  • hosts - A list of physical hosts (local or remote).
  • cluster - Configuration of the cluster infrastructure. Virtual machine properties, node types to install, and the host on which to install the nodes.
  • kubernetes - Kubernetes configuration.
  • addons - Configurable addons and applications.
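
To show how these sections fit together, the following is a minimal, illustrative skeleton; all values are placeholders, and the YAML nesting is assumed from the dotted property paths documented below.

hosts:
   - name: localhost
     connection:
        type: local

cluster:
   name: my-cluster
   network:
      mode: nat
      cidr: 192.168.113.0/24
   nodes:
      master:
         instances:
            - id: 1

kubernetes:
   version: v1.28.6

addons:
   rook:
      enabled: false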

Each configuration property is documented with five columns: property name, type, default value, whether the property is required, and a description.

Note

[*] annotates an array.

Hosts section🔗︎

Name Type Default value Required? Description
hosts[*].connection.ip string Yes, if connection.type is set to remote IP address is used to SSH into the remote machine.
hosts[*].connection.ssh.keyfile string ~/.ssh/id_rsa Path to the keyfile that is used to SSH into the remote machine.
hosts[*].connection.ssh.port number 22 The port number used for SSH connections to the remote machine.
hosts[*].connection.ssh.verify boolean false If true, the SSH host is verified, which means that the host must be present in the known SSH hosts.
hosts[*].connection.type string Yes Possible values are:
  • local or localhost
  • remote
hosts[*].connection.user string Yes, if connection.type is set to remote Username is used to SSH into the remote machine.
hosts[*].dataResourcePools[*].name string Name of the data resource pool. Must be unique within the same host. It is used to link virtual machine volumes to the specific resource pool.
hosts[*].dataResourcePools[*].path string /var/lib/libvirt/images/ Host path to the location where data resource pool is created.
hosts[*].default boolean false Nodes for which no host is specified are installed on the default host. The first host in the list is used as the default host if none is marked as default.
hosts[*].name string Yes Custom server name used to link nodes with physical hosts.
hosts[*].mainResourcePoolPath string /var/lib/libvirt/images/ Path to the resource pool used for main virtual machine volumes.
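
For illustration, a hosts section combining a local and a remote host might look like the sketch below; the host names, IP address and username are placeholders, and the YAML nesting is assumed from the dotted property paths above.

hosts:
   - name: localhost
     default: true
     connection:
        type: local
   - name: remote-server
     connection:
        type: remote
        ip: 10.10.40.143             # placeholder IP of the remote machine
        user: myuser                 # placeholder SSH username
        ssh:
           keyfile: ~/.ssh/id_rsa    # documented default keyfile path
     dataResourcePools:
        - name: data-pool            # placeholder pool name
          path: /mnt/libvirt/pools/  # placeholder host path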

Cluster section🔗︎

Name Type Default value Required? Description
cluster.name string Yes Custom cluster name that is used as a prefix for various cluster components.
Note: cluster name cannot contain prefix local.
cluster.network.bridge string virbr0 By default, virbr0 is set as the name of the virtual bridge. If the network mode is set to bridge, the name of the preconfigured bridge needs to be set here.
cluster.network.cidr string Yes Network CIDR that contains the network IP address with the network mask bits (IPv4/mask_bits).
cluster.network.gateway string First client IP in network. By default, the first client IP is taken as the gateway. If the network CIDR is set to 10.0.0.0/24, the gateway would be 10.0.0.1. Set the gateway only if it differs from the default value.
cluster.network.mode string Yes Network mode. Possible values are:
  • nat - Creates virtual local network.
  • bridge - Uses preconfigured bridge interface on the machine (Only bridge mode supports multiple hosts).
  • route - Creates virtual local network, but does not apply NAT.
cluster.nodes.loadBalancer.default.cpu number 2 Default number of vCPU allocated to a load balancer instance.
cluster.nodes.loadBalancer.default.mainDiskSize number 32 Size of the main disk (in GiB) that is attached to a load balancer instance.
cluster.nodes.loadBalancer.default.ram number 4 Default amount of RAM (in GiB) allocated to a load balancer instance.
cluster.nodes.loadBalancer.forwardPorts[*].name string Yes, if port is configured Unique name of the forwarded port.
cluster.nodes.loadBalancer.forwardPorts[*].port number Yes, if port is configured Incoming port is the port on which a load balancer listens for the incoming traffic.
cluster.nodes.loadBalancer.forwardPorts[*].targetPort number Incoming port value Target port is the port to which a load balancer forwards the incoming traffic.
cluster.nodes.loadBalancer.forwardPorts[*].target string workers Target is the group of nodes to which a load balancer forwards traffic. Possible targets are:
  • masters
  • workers
  • all
cluster.nodes.loadBalancer.instances[*].cpu number Overrides a default value for that specific instance.
cluster.nodes.loadBalancer.instances[*].host string Name of the host on which the instance is deployed. If the name is not specified, the instance is deployed on the default host.
cluster.nodes.loadBalancer.instances[*].id string Yes Unique identifier of a load balancer instance.
cluster.nodes.loadBalancer.instances[*].ip string If an IP is set for an instance then the instance will use it as a static IP. Otherwise it will try to request an IP from a DHCP server.
cluster.nodes.loadBalancer.instances[*].mac string MAC used by the instance. If it is not set, it will be generated.
cluster.nodes.loadBalancer.instances[*].mainDiskSize number Overrides a default value for that specific instance.
cluster.nodes.loadBalancer.instances[*].priority number 10 Keepalived priority of the load balancer. A load balancer with the highest priority becomes the leader (active). The priority can be set to any number between 0 and 255.
cluster.nodes.loadBalancer.instances[*].ram number Overrides a default value for the RAM for that instance.
cluster.nodes.loadBalancer.vip string Yes, if more than one load balancer instance is specified. Virtual IP (floating IP) is the static IP used by the load balancers to provide fail-over. Each load balancer still has its own IP besides the shared one.
cluster.nodes.loadBalancer.virtualRouterId number 51 Virtual router ID identifies the group of VRRP routers. It can be any number between 0 and 255 and should be unique among different clusters.
cluster.nodes.master.default.cpu number 2 Default number of vCPU allocated to a master node.
cluster.nodes.master.default.labels dictionary Array of default node labels that are applied to all master nodes.
cluster.nodes.master.default.mainDiskSize number 32 Size of the main disk (in GiB) that is attached to a master node.
cluster.nodes.master.default.ram number 4 Default amount of RAM (in GiB) allocated to a master node.
cluster.nodes.master.default.taints list List of default node taints that are applied to all master nodes.
cluster.nodes.master.instances[*].cpu number Overrides a default value for that specific instance.
cluster.nodes.master.instances[*].dataDisks[*].name string Name of the additional data disk that is attached to the master node.
cluster.nodes.master.instances[*].dataDisks[*].pool string main Name of the data resource pool where the additional data disk is created. The referenced resource pool must be configured on the same host.
cluster.nodes.master.instances[*].dataDisks[*].size string Size of the additional data disk (in GiB) that is attached to the master node.
cluster.nodes.master.instances[*].host string Name of the host on which the instance is deployed. If the name is not specified, the instance is deployed on the default host.
cluster.nodes.master.instances[*].id string Yes Unique identifier of a master node.
cluster.nodes.master.instances[*].ip string If an IP is set for an instance then the instance will use it as a static IP. Otherwise it will try to request an IP from a DHCP server.
cluster.nodes.master.instances[*].labels dictionary Array of node labels that are applied to this specific master node.
cluster.nodes.master.instances[*].mac string MAC used by the instance. If it is not set, it will be generated.
cluster.nodes.master.instances[*].mainDiskSize number Overrides a default value for that specific instance.
cluster.nodes.master.instances[*].ram number Overrides a default value for the RAM for that instance.
cluster.nodes.master.instances[*].taints list List of node taints that are applied to this specific master node.
cluster.nodes.worker.default.cpu number 2 Default number of vCPU allocated to a worker node.
cluster.nodes.worker.default.labels dictionary Array of default node labels that are applied to all worker nodes.
cluster.nodes.worker.default.mainDiskSize number 32 Size of the main disk (in GiB) that is attached to a worker node.
cluster.nodes.worker.default.ram number 4 Default amount of RAM (in GiB) allocated to a worker node.
cluster.nodes.worker.default.taints list List of default node taints that are applied to all worker nodes.
cluster.nodes.worker.instances[*].cpu number Overrides a default value for that specific instance.
cluster.nodes.worker.instances[*].dataDisks[*].name string Name of the additional data disk that is attached to the worker node.
cluster.nodes.worker.instances[*].dataDisks[*].pool string main Name of the data resource pool where the additional data disk is created. The referenced resource pool must be configured on the same host.
cluster.nodes.worker.instances[*].dataDisks[*].size string Size of the additional data disk (in GiB) that is attached to the worker node.
cluster.nodes.worker.instances[*].host string Name of the host on which the instance is deployed. If the name is not specified, the instance is deployed on the default host.
cluster.nodes.worker.instances[*].id string Yes Unique identifier of a worker node.
cluster.nodes.worker.instances[*].ip string If an IP is set for an instance then the instance will use it as a static IP. Otherwise it will try to request an IP from a DHCP server.
cluster.nodes.worker.instances[*].labels dictionary Array of node labels that are applied to this specific worker node.
cluster.nodes.worker.instances[*].mac string MAC used by the instance. If it is not set, it will be generated.
cluster.nodes.worker.instances[*].mainDiskSize number Overrides a default value for that specific instance.
cluster.nodes.worker.instances[*].ram number Overrides a default value for the RAM for that instance.
cluster.nodes.worker.instances[*].taints list List of node taints that are applied to this specific worker node.
cluster.nodeTemplate.cpuMode string custom Guest virtual machine CPU mode.
cluster.nodeTemplate.dns list Value of network.gateway Custom DNS list used by all created virtual machines. If none is provided, network gateway is used.
cluster.nodeTemplate.os.distro string ubuntu22 Set OS distribution. Possible values are:
  • ubuntu20
  • ubuntu22
  • debian11
  • debian12
  • centos9
  • rocky9
cluster.nodeTemplate.os.networkInterface string Depends on os.distro Network interface used by virtual machines to connect to the network. The network interface is preconfigured for each OS image (usually ens3 or eth0). By default, the value from the distro preset (/terraform/defaults.yaml) is set, but it can be overwritten if needed.
cluster.nodeTemplate.os.source string Depends on os.distro Source of the OS image. It can be either a path on the local file system or a URL of the image. By default, the value from the distro preset (/terraform/defaults.yaml) is set, but it can be overwritten if needed.
cluster.nodeTemplate.ssh.addToKnownHosts boolean false If set to true, each virtual machine will be added to the known hosts on the machine where the project is being run. Note that all machines will also be removed from known hosts when destroying the cluster.
cluster.nodeTemplate.ssh.privateKeyPath string Path to the private key that is later used to SSH into each virtual machine. A corresponding public key must be present at the same path with a .pub suffix. If this value is not set, an SSH key is generated in the ./config/.ssh/ directory.
cluster.nodeTemplate.updateOnBoot boolean true If set to true, the operating system will be updated when it boots.
cluster.nodeTemplate.user string k8s User created on each virtual machine.
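
Pulling several of these properties together, a cluster section could be sketched as follows; the IDs, addresses and sizes are placeholders, and the YAML nesting is assumed from the dotted property paths above.

cluster:
   name: my-cluster
   network:
      mode: nat
      cidr: 192.168.113.0/24
   nodeTemplate:
      user: k8s
      updateOnBoot: true
      os:
         distro: ubuntu22
   nodes:
      master:
         default:
            cpu: 2
            ram: 4
            mainDiskSize: 32
         instances:
            - id: 1
      worker:
         default:
            cpu: 2
            ram: 4
         instances:
            - id: 1
            - id: 2
              cpu: 4   # overrides the default vCPU count for this instance only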

Kubernetes section🔗︎

Name Type Default value Required? Description
kubernetes.dnsMode string coredns DNS server used within a Kubernetes cluster. Possible values are:
  • coredns
kubernetes.manager string kubespray Manager that is used for deploying the Kubernetes cluster. Possible values are:
  • kubespray
  • k3s
kubernetes.networkPlugin string calico Network plugin used within a Kubernetes cluster. Possible values are:
  • calico
  • cilium
  • flannel
  • kube-router
Note: k3s manager currently supports only flannel.
kubernetes.other.autoRenewCertificates boolean false When this property is set to true, control plane certificates are renewed on the first Monday of each month.
kubernetes.other.mergeKubeconfig boolean false When this property is set to true, the kubeconfig of the new cluster is merged into the config at ~/.kube/config.
kubernetes.version string v1.28.6 Kubernetes version that will be installed.
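
As an illustration, a kubernetes section that keeps the documented defaults but merges the kubeconfig might look like this sketch (nesting assumed from the dotted paths above):

kubernetes:
   version: v1.28.6
   manager: kubespray
   networkPlugin: calico
   dnsMode: coredns
   other:
      autoRenewCertificates: false
      mergeKubeconfig: true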

Addons section🔗︎

Name Type Default value Required? Description
addons.kubespray dictionary Kubespray addons configuration.
addons.rook.enabled boolean false Enable Rook addon.
addons.rook.nodeSelector dictionary Dictionary containing node labels ("key: value"). Rook is deployed on the nodes that match all the given labels.
addons.rook.version string Rook version. By default, the latest release version is used.
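
For example, enabling the Rook addon only on labelled nodes might be sketched like this; the node label is a placeholder, and the nesting is assumed from the dotted paths above.

addons:
   rook:
      enabled: true
      nodeSelector:
         rook: "true"   # placeholder label; Rook is deployed only on nodes matching it
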
\ No newline at end of file