Rename master to control-plane in docs and titles without breaking changes #11452

Open · wants to merge 1 commit into base: master
2 changes: 1 addition & 1 deletion README.md
@@ -215,7 +215,7 @@ Note: Upstart/SysV init based OS types are not supported.
Hardware:
These limits are safeguarded by Kubespray. Actual requirements for your workload can differ. For a sizing guide go to the [Building Large Clusters](https://kubernetes.io/docs/setup/cluster-large/#size-of-master-and-master-components) guide.

-- Master
+- Control Plane
- Memory: 1500 MB
- Node
- Memory: 1024 MB

2 changes: 1 addition & 1 deletion Vagrantfile
@@ -64,7 +64,7 @@ $download_run_once ||= "True"
$download_force_cache ||= "False"
# The first three nodes are etcd servers
$etcd_instances ||= [$num_instances, 3].min
-# The first two nodes are kube masters
+# The first two nodes are kube control planes
$kube_master_instances ||= [$num_instances, 2].min
# All nodes are kube nodes
$kube_node_instances ||= $num_instances
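
A worked example of the defaults above (illustrative, assuming no overrides):

```ruby
# With $num_instances = 5 the defaults above give:
#   $etcd_instances        = [5, 3].min  # => 3, the first three nodes
#   $kube_master_instances = [5, 2].min  # => 2, the first two nodes
#   $kube_node_instances   = 5           # all nodes
```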

2 changes: 1 addition & 1 deletion contrib/azurerm/README.md
@@ -23,7 +23,7 @@ experience.
## Bastion host

You can enable the use of a Bastion Host by changing **use_bastion** in group_vars/all to **true**. The generated
-templates will then include an additional bastion VM which can then be used to connect to the masters and nodes. The option
+templates will then include an additional bastion VM which can then be used to connect to the control planes and nodes. The option
also removes all public IPs from all other VMs.

## Generating and applying

4 changes: 2 additions & 2 deletions contrib/azurerm/group_vars/all
@@ -3,8 +3,8 @@
# this name must be globally unique - it will be used as a prefix for azure components
cluster_name: example

-# Set this to true if you do not want to have public IPs for your masters and minions. This will provision a bastion
-# node that can be used to access the masters and minions
+# Set this to true if you do not want to have public IPs for your control planes and minions. This will provision a bastion
+# node that can be used to access the control planes and minions
use_bastion: false

# Set this to a preferred name that will be used as the first part of the dns name for your bastion host. For example: k8s-bastion.<azureregion>.cloudapp.azure.com.

12 changes: 6 additions & 6 deletions contrib/dind/README.md
@@ -104,12 +104,12 @@ CONTAINER ID IMAGE COMMAND CREATED STATUS
c581ef662ed2 debian:9.5 "sh -c 'apt-get -qy …" 44 minutes ago Up 44 minutes kube-node1

$ docker exec kube-node1 kubectl get node
-NAME         STATUS   ROLES         AGE   VERSION
-kube-node1   Ready    master,node   18m   v1.12.1
-kube-node2   Ready    master,node   17m   v1.12.1
-kube-node3   Ready    node          17m   v1.12.1
-kube-node4   Ready    node          17m   v1.12.1
-kube-node5   Ready    node          17m   v1.12.1
+NAME         STATUS   ROLES                AGE   VERSION
+kube-node1   Ready    control-plane,node   18m   v1.12.1
+kube-node2   Ready    control-plane,node   17m   v1.12.1
+kube-node3   Ready    node                 17m   v1.12.1
+kube-node4   Ready    node                 17m   v1.12.1
+kube-node5   Ready    node                 17m   v1.12.1

$ docker exec kube-node1 kubectl get pod --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE

8 changes: 4 additions & 4 deletions contrib/network-storage/glusterfs/README.md
@@ -4,7 +4,7 @@ You can either deploy using Ansible on its own by supplying your own inventory f

## Using an Ansible inventory

-In the same directory of this ReadMe file you should find a file named `inventory.example` which contains an example setup. Please note that, additionally to the Kubernetes nodes/masters, we define a set of machines for GlusterFS and we add them to the group `[gfs-cluster]`, which in turn is added to the larger `[network-storage]` group as a child group.
+In the same directory of this ReadMe file you should find a file named `inventory.example` which contains an example setup. Please note that, additionally to the Kubernetes nodes/control-planes, we define a set of machines for GlusterFS and we add them to the group `[gfs-cluster]`, which in turn is added to the larger `[network-storage]` group as a child group.

Change that file to reflect your local setup (adding more machines or removing them and setting the adequate ip numbers), and save it to `inventory/sample/k8s_gfs_inventory`. Make sure that the settings on `inventory/sample/group_vars/all.yml` make sense with your deployment. Then change to the kubespray root folder and execute (supposing that the machines are all using ubuntu):

@@ -21,9 +21,9 @@ ansible-playbook -b --become-user=root -i inventory/sample/k8s_gfs_inventory --u
If your machines are not using Ubuntu, you need to change the `--user=ubuntu` to the correct user. Alternatively, if your Kubernetes machines are using one OS and your GlusterFS a different one, you can instead specify the `ansible_ssh_user=<correct-user>` variable in the inventory file that you just created, for each machine/VM:

```shell
-k8s-master-1 ansible_ssh_host=192.168.0.147 ip=192.168.0.147 ansible_ssh_user=core
-k8s-master-node-1 ansible_ssh_host=192.168.0.148 ip=192.168.0.148 ansible_ssh_user=core
-k8s-master-node-2 ansible_ssh_host=192.168.0.146 ip=192.168.0.146 ansible_ssh_user=core
+k8s-control-plane-1 ansible_ssh_host=192.168.0.147 ip=192.168.0.147 ansible_ssh_user=core
+k8s-control-plane-node-1 ansible_ssh_host=192.168.0.148 ip=192.168.0.148 ansible_ssh_user=core
+k8s-control-plane-node-2 ansible_ssh_host=192.168.0.146 ip=192.168.0.146 ansible_ssh_user=core
```
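
For orientation, a minimal sketch of the grouping described above; host names and addresses are illustrative, and only the groups named in this ReadMe are shown:

```shell
gfs-node-1 ansible_ssh_host=192.168.0.150 ip=192.168.0.150 ansible_ssh_user=core
gfs-node-2 ansible_ssh_host=192.168.0.151 ip=192.168.0.151 ansible_ssh_user=core

[gfs-cluster]
gfs-node-1
gfs-node-2

# gfs-cluster joins the larger network-storage group as a child group
[network-storage:children]
gfs-cluster
```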

## Using Terraform and Ansible

2 changes: 1 addition & 1 deletion contrib/terraform/aws/create-infrastructure.tf
@@ -68,7 +68,7 @@ resource "aws_instance" "bastion-server" {
}

/*
-* Create K8s Master and worker nodes and etcd instances
+* Create K8s control plane and worker nodes and etcd instances
*
*/

6 changes: 3 additions & 3 deletions contrib/terraform/aws/variables.tf
@@ -86,15 +86,15 @@ variable "aws_bastion_num" {
}

variable "aws_kube_master_num" {
description = "Number of Kubernetes Master Nodes"
description = "Number of Kubernetes Control Plane Nodes"
}

variable "aws_kube_master_disk_size" {
description = "Disk size for Kubernetes Master Nodes (in GiB)"
description = "Disk size for Kubernetes Control Plane Nodes (in GiB)"
}

variable "aws_kube_master_size" {
description = "Instance size of Kube Master Nodes"
description = "Instance size of Kube Control Plane Nodes"
}

variable "aws_etcd_num" {

6 changes: 3 additions & 3 deletions contrib/terraform/equinix/README.md
@@ -20,15 +20,15 @@ to actually install Kubernetes with Kubespray.
You can create many different kubernetes topologies by setting the number of
different classes of hosts.

-- Master nodes with etcd: `number_of_k8s_masters` variable
-- Master nodes without etcd: `number_of_k8s_masters_no_etcd` variable
+- Control plane nodes with etcd: `number_of_k8s_masters` variable
+- Control plane nodes without etcd: `number_of_k8s_masters_no_etcd` variable
- Standalone etcd hosts: `number_of_etcd` variable
- Kubernetes worker nodes: `number_of_k8s_nodes` variable

Note that the Ansible script will report an invalid configuration if you wind up
with an *even number* of etcd instances since that is not a valid configuration. This
restriction includes standalone etcd nodes that are deployed in a cluster along with
-master nodes with etcd replicas. As an example, if you have three master nodes with
+master nodes with etcd replicas. As an example, if you have three control plane nodes with
etcd replicas and three standalone etcd nodes, the script will fail since there are
now six total etcd replicas.
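
A minimal sketch of a valid topology under this restriction, using the count variables listed above (counts are illustrative):

```hcl
# cluster.tfvars (illustrative counts)
number_of_k8s_masters         = 3  # control plane nodes that also run etcd -> 3 replicas, odd, valid
number_of_k8s_masters_no_etcd = 0
number_of_etcd                = 0  # three standalone etcd hosts here would give 6 replicas and fail
number_of_k8s_nodes           = 2
```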

2 changes: 1 addition & 1 deletion contrib/terraform/equinix/sample-inventory/cluster.tfvars
@@ -20,7 +20,7 @@ number_of_etcd = 0

plan_etcd = "t1.small.x86"

-# masters
+# control planes
number_of_k8s_masters = 1

number_of_k8s_masters_no_etcd = 0

2 changes: 1 addition & 1 deletion contrib/terraform/gcp/generate-inventory.sh
@@ -33,7 +33,7 @@ mapfile -t WORKER_NAMES < <(jq -r '.key' <(echo "${WORKERS}"))

API_LB=$(jq -r '.control_plane_lb_ip_address.value' <(echo "${TF_OUT}"))

-# Generate master hosts
+# Generate control plane hosts
i=1
for name in "${MASTER_NAMES[@]}"; do
private_ip=$(jq -r '. | select( .key=='"\"${name}\""' ) | .value.private_ip' <(echo "${MASTERS}"))

24 changes: 12 additions & 12 deletions contrib/terraform/openstack/README.md
@@ -60,15 +60,15 @@ You can create many different kubernetes topologies by setting the number of
different classes of hosts. For each class there are options for allocating
floating IP addresses or not.

-- Master nodes with etcd
-- Master nodes without etcd
+- Control Plane nodes with etcd
+- Control Plane nodes without etcd
- Standalone etcd hosts
- Kubernetes worker nodes

Note that the Ansible script will report an invalid configuration if you wind up
with an even number of etcd instances since that is not a valid configuration. This
restriction includes standalone etcd nodes that are deployed in a cluster along with
-master nodes with etcd replicas. As an example, if you have three master nodes with
+control plane nodes with etcd replicas. As an example, if you have three control plane nodes with
etcd replicas and three standalone etcd nodes, the script will fail since there are
now six total etcd replicas.

@@ -254,15 +254,15 @@ For your cluster, edit `inventory/$CLUSTER/cluster.tfvars`.
|`network_dns_domain` | (Optional) The dns_domain for the internal network that will be generated |
|`dns_nameservers`| An array of DNS name server names to be used by hosts in the internal subnet. |
|`floatingip_pool` | Name of the pool from which floating IPs will be allocated |
-|`k8s_master_fips` | A list of floating IPs that you have already pre-allocated; they will be attached to master nodes instead of creating new random floating IPs. |
+|`k8s_master_fips` | A list of floating IPs that you have already pre-allocated; they will be attached to control plane nodes instead of creating new random floating IPs. |
|`bastion_fips` | A list of floating IPs that you have already pre-allocated; they will be attached to bastion node instead of creating new random floating IPs. |
|`external_net` | UUID of the external network that will be routed to |
|`flavor_k8s_master`,`flavor_k8s_node`,`flavor_etcd`, `flavor_bastion`,`flavor_gfs_node` | Flavor depends on your openstack installation, you can get available flavor IDs through `openstack flavor list` |
|`image`,`image_gfs` | Name of the image to use in provisioning the compute resources. Should already be loaded into glance. |
|`ssh_user`,`ssh_user_gfs` | The username to ssh into the image with. This usually depends on the image you have selected |
|`public_key_path` | Path on your local workstation to the public key file you wish to use in creating the key pairs |
-|`number_of_k8s_masters`, `number_of_k8s_masters_no_floating_ip` | Number of nodes that serve as both master and etcd. These can be provisioned with or without floating IP addresses|
-|`number_of_k8s_masters_no_etcd`, `number_of_k8s_masters_no_floating_ip_no_etcd` | Number of nodes that serve as just master with no etcd. These can be provisioned with or without floating IP addresses |
+|`number_of_k8s_masters`, `number_of_k8s_masters_no_floating_ip` | Number of nodes that serve as both control plane and etcd. These can be provisioned with or without floating IP addresses|
+|`number_of_k8s_masters_no_etcd`, `number_of_k8s_masters_no_floating_ip_no_etcd` | Number of nodes that serve as just control plane with no etcd. These can be provisioned with or without floating IP addresses |
|`number_of_etcd` | Number of pure etcd nodes |
|`number_of_k8s_nodes`, `number_of_k8s_nodes_no_floating_ip` | Kubernetes worker nodes. These can be provisioned with or without floating ip addresses. |
|`number_of_bastions` | Number of bastion hosts to create. Scripts assume this is really just zero or one |
@@ -281,25 +281,25 @@ For your cluster, edit `inventory/$CLUSTER/cluster.tfvars`.
|`k8s_allowed_egress_ipv6_ips` | List of IPv6 CIDRs allowed for egress traffic, `["::/0"]` by default |
|`worker_allowed_ports` | List of ports to open on worker nodes, `[{ "protocol" = "tcp", "port_range_min" = 30000, "port_range_max" = 32767, "remote_ip_prefix" = "0.0.0.0/0"}]` by default |
|`worker_allowed_ports_ipv6` | List of ports to open on worker nodes for IPv6 CIDR blocks, `[{ "protocol" = "tcp", "port_range_min" = 30000, "port_range_max" = 32767, "remote_ip_prefix" = "::/0"}]` by default |
-|`master_allowed_ports` | List of ports to open on master nodes, expected format is `[{ "protocol" = "tcp", "port_range_min" = 443, "port_range_max" = 443, "remote_ip_prefix" = "0.0.0.0/0"}]`, empty by default |
-|`master_allowed_ports_ipv6` | List of ports to open on master nodes for IPv6 CIDR blocks, expected format is `[{ "protocol" = "tcp", "port_range_min" = 443, "port_range_max" = 443, "remote_ip_prefix" = "::/0"}]`, empty by default |
+|`master_allowed_ports` | List of ports to open on control plane nodes, expected format is `[{ "protocol" = "tcp", "port_range_min" = 443, "port_range_max" = 443, "remote_ip_prefix" = "0.0.0.0/0"}]`, empty by default |
+|`master_allowed_ports_ipv6` | List of ports to open on control plane nodes for IPv6 CIDR blocks, expected format is `[{ "protocol" = "tcp", "port_range_min" = 443, "port_range_max" = 443, "remote_ip_prefix" = "::/0"}]`, empty by default |
|`node_root_volume_size_in_gb` | Size of the root volume for nodes, 0 to use ephemeral storage |
|`master_root_volume_size_in_gb` | Size of the root volume for masters, 0 to use ephemeral storage |
|`master_volume_type` | Volume type of the root volume for control_plane, 'Default' by default |
|`node_volume_type` | Volume type of the root volume for nodes, 'Default' by default |
|`gfs_root_volume_size_in_gb` | Size of the root volume for gluster, 0 to use ephemeral storage |
|`etcd_root_volume_size_in_gb` | Size of the root volume for etcd nodes, 0 to use ephemeral storage |
|`bastion_root_volume_size_in_gb` | Size of the root volume for bastions, 0 to use ephemeral storage |
-|`master_server_group_policy` | Enable and use openstack nova servergroups for masters with set policy, default: "" (disabled) |
+|`master_server_group_policy` | Enable and use openstack nova servergroups for control planes with set policy, default: "" (disabled) |
|`node_server_group_policy` | Enable and use openstack nova servergroups for nodes with set policy, default: "" (disabled) |
|`etcd_server_group_policy` | Enable and use openstack nova servergroups for etcd with set policy, default: "" (disabled) |
|`additional_server_groups` | Extra server groups to create. Set "policy" to the policy for the group, expected format is `{"new-server-group" = {"policy" = "anti-affinity"}}`, default: {} (to not create any extra groups) |
|`use_access_ip` | If 1, nodes with floating IPs will transmit internal cluster traffic via floating IPs; if 0 private IPs will be used instead. Default value is 1. |
|`port_security_enabled` | Allow to disable port security by setting this to `false`. `true` by default |
|`force_null_port_security` | Set `null` instead of `true` or `false` for `port_security`. `false` by default |
|`k8s_nodes` | Map containing worker node definition, see explanation below |
-|`k8s_masters` | Map containing master node definition, see explanation for k8s_nodes and `sample-inventory/cluster.tfvars` |
-| `k8s_master_loadbalancer_enabled`| Enable and use an Octavia load balancer for the K8s master nodes |
+|`k8s_masters` | Map containing control plane node definition, see explanation for k8s_nodes and `sample-inventory/cluster.tfvars` |
+| `k8s_master_loadbalancer_enabled`| Enable and use an Octavia load balancer for the K8s control plane nodes |
| `k8s_master_loadbalancer_listener_port` | Define via which port the K8s Api should be exposed. `6443` by default |
| `k8s_master_loadbalancer_server_port` | Define via which port the K8S api is available on the master. `6443` by default |
| `k8s_master_loadbalancer_public_ip` | Specify if an existing floating IP should be used for the load balancer. A new floating IP is assigned by default |
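
As a hedged illustration, a few of the entries above combined in `inventory/$CLUSTER/cluster.tfvars` (values are examples, not defaults):

```hcl
# cluster.tfvars (illustrative values)
master_allowed_ports = [
  { "protocol" = "tcp", "port_range_min" = 443, "port_range_max" = 443, "remote_ip_prefix" = "0.0.0.0/0" },
]
master_server_group_policy            = "anti-affinity"
k8s_master_loadbalancer_enabled       = true
k8s_master_loadbalancer_listener_port = 6443
```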
@@ -656,7 +656,7 @@ This will take some time as there are many tasks to run.
### Set up kubectl

1. [Install kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) on your workstation
-2. Add a route to the internal IP of a master node (if needed):
+2. Add a route to the internal IP of a control plane node (if needed):

```ShellSession
sudo route add [master-internal-ip] gw [router-ip]
```
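
For example, with a control plane node on internal IP 10.0.10.3 behind a router at 192.168.1.1 (both addresses purely illustrative):

```ShellSession
sudo route add 10.0.10.3 gw 192.168.1.1
```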

4 changes: 2 additions & 2 deletions contrib/terraform/openstack/modules/compute/main.tf
@@ -44,7 +44,7 @@ resource "openstack_networking_secgroup_v2" "k8s_master" {
resource "openstack_networking_secgroup_v2" "k8s_master_extra" {
count = "%{if var.extra_sec_groups}1%{else}0%{endif}"
name = "${var.cluster_name}-k8s-master-${var.extra_sec_groups_name}"
description = "${var.cluster_name} - Kubernetes Master nodes - rules not managed by terraform"
description = "${var.cluster_name} - Kubernetes Control Plane nodes - rules not managed by terraform"
delete_default_rules = true
}

@@ -269,7 +269,7 @@ resource "openstack_compute_servergroup_v2" "k8s_node_additional" {
}

locals {
-  # master groups
+  # control plane groups
master_sec_groups = compact([
openstack_networking_secgroup_v2.k8s_master.id,
openstack_networking_secgroup_v2.k8s.id,

4 changes: 2 additions & 2 deletions contrib/terraform/openstack/sample-inventory/cluster.tfvars
@@ -7,7 +7,7 @@ cluster_name = "i-didnt-read-the-docs"
# SSH key to use for access to nodes
public_key_path = "~/.ssh/id_rsa.pub"

-# image to use for bastion, masters, standalone etcd instances, and nodes
+# image to use for bastion, control planes, standalone etcd instances, and nodes
image = "<image name>"

# user on the node (ex. core on Container Linux, ubuntu on Ubuntu, etc.)
@@ -21,7 +21,7 @@ number_of_bastions = 0
# standalone etcds
number_of_etcd = 0

-# masters
+# control planes
number_of_k8s_masters = 1

number_of_k8s_masters_no_etcd = 0

6 changes: 3 additions & 3 deletions contrib/terraform/openstack/variables.tf
@@ -3,7 +3,7 @@ variable "cluster_name" {
}

variable "az_list" {
description = "List of Availability Zones to use for masters in your OpenStack cluster"
description = "List of Availability Zones to use for control planes in your OpenStack cluster"
type = list(string)
default = ["nova"]
}
@@ -179,7 +179,7 @@ variable "dns_nameservers" {
}

variable "k8s_master_fips" {
description = "specific pre-existing floating IPs to use for master nodes"
description = "specific pre-existing floating IPs to use for control plane nodes"
type = list(string)
default = []
}
@@ -380,7 +380,7 @@ variable "image_master" {
}

variable "image_master_uuid" {
description = "uuid of image to be used on master nodes. If empty defaults to image_uuid"
description = "uuid of image to be used on control plane nodes. If empty defaults to image_uuid"
default = ""
}

4 changes: 2 additions & 2 deletions contrib/terraform/upcloud/modules/kubernetes-cluster/main.tf
@@ -173,7 +173,7 @@ resource "upcloud_firewall_rules" "master" {

content {
action = "accept"
comment = "Allow master API access from this network"
comment = "Allow control plane API access from this network"
destination_port_end = "6443"
destination_port_start = "6443"
direction = "in"
@@ -189,7 +189,7 @@

content {
action = "drop"
comment = "Deny master API access from other networks"
comment = "Deny control plane API access from other networks"
destination_port_end = "6443"
destination_port_start = "6443"
direction = "in"

6 changes: 3 additions & 3 deletions contrib/terraform/vsphere/README.md
@@ -116,9 +116,9 @@ ansible-playbook -i inventory.ini ../../cluster.yml -b -v
* `dns_secondary`: The IP address of secondary DNS server (default: `8.8.8.8`)
* `firmware`: Firmware to use (default: `bios`)
* `hardware_version`: The version of the hardware (default: `15`)
-* `master_cores`: The number of CPU cores for the master nodes (default: 4)
-* `master_memory`: The amount of RAM for the master nodes in MB (default: 4096)
-* `master_disk_size`: The amount of disk space for the master nodes in GB (default: 20)
+* `master_cores`: The number of CPU cores for the control plane nodes (default: 4)
+* `master_memory`: The amount of RAM for the control plane nodes in MB (default: 4096)
+* `master_disk_size`: The amount of disk space for the control plane nodes in GB (default: 20)
* `worker_cores`: The number of CPU cores for the worker nodes (default: 16)
* `worker_memory`: The amount of RAM for the worker nodes in MB (default: 8192)
* `worker_disk_size`: The amount of disk space for the worker nodes in GB (default: 100)
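
A hedged sketch of these sizing knobs in a `terraform.tfvars`; the values simply restate the documented defaults:

```hcl
# terraform.tfvars (values mirror the documented defaults)
master_cores     = 4     # CPU cores per control plane node
master_memory    = 4096  # RAM per control plane node, in MB
master_disk_size = 20    # disk per control plane node, in GB
worker_cores     = 16
worker_memory    = 8192
worker_disk_size = 100
```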

2 changes: 1 addition & 1 deletion contrib/terraform/vsphere/main.tf
@@ -45,7 +45,7 @@ module "kubernetes" {

machines = var.machines

-  ## Master ##
+  ## Control Plane ##
master_cores = var.master_cores
master_memory = var.master_memory
master_disk_size = var.master_disk_size