After installing {product-title}, you can further expand and customize your cluster to your requirements.
You complete most of the cluster configuration and customization after you deploy your {product-title} cluster. A number of configuration resources are available.
Note: If you install your cluster on {ibm-z-name}, not all features and functions are available.
You modify the configuration resources to configure the major features of the cluster, such as the image registry, networking configuration, image build behavior, and the identity provider.
For current documentation of the settings that you control by using these resources, use the oc explain command, for example oc explain builds --api-version=config.openshift.io/v1.
All cluster configuration resources are globally scoped (not namespaced) and named cluster.
Resource name | Description |
---|---|
apiserver.config.openshift.io | Provides API server configuration such as certificates and certificate authorities. |
authentication.config.openshift.io | Controls the identity provider and authentication configuration for the cluster. |
build.config.openshift.io | Controls default and enforced configuration for all builds on the cluster. |
console.config.openshift.io | Configures the behavior of the web console interface, including the logout behavior. |
featuregate.config.openshift.io | Enables FeatureGates so that you can use Tech Preview features. |
image.config.openshift.io | Configures how specific image registries should be treated (allowed, disallowed, insecure, CA details). |
ingress.config.openshift.io | Configuration details related to routing such as the default domain for routes. |
oauth.config.openshift.io | Configures identity providers and other behavior related to internal OAuth server flows. |
project.config.openshift.io | Configures how projects are created including the project template. |
proxy.config.openshift.io | Defines proxies to be used by components needing external network access. Note: not all components currently consume this value. |
scheduler.config.openshift.io | Configures scheduler behavior such as profiles and default node selectors. |
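For example, you can edit the image configuration resource to mark specific registries as insecure or blocked. The following is a minimal sketch; the registry hostnames are placeholders for illustration only.

```yaml
apiVersion: config.openshift.io/v1
kind: Image
metadata:
  name: cluster                        # all cluster configuration resources are named "cluster"
spec:
  registrySources:
    insecureRegistries:
    - insecure-registry.example.com    # placeholder: registry to allow over insecure connections
    blockedRegistries:
    - untrusted-registry.example.com   # placeholder: registry to block for image pulls and pushes
```

As with the other resources in this table, oc explain image --api-version=config.openshift.io/v1 shows the full set of supported fields.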
These configuration resources are cluster-scoped instances, named cluster, which control the behavior of a specific component that is owned by a particular Operator.
Resource name | Description |
---|---|
consoles.operator.openshift.io | Controls console appearance such as branding customizations. |
configs.imageregistry.operator.openshift.io | Configures {product-registry} settings such as public routing, log levels, proxy settings, resource constraints, replica counts, and storage type. |
configs.samples.operator.openshift.io | Configures the Samples Operator to control which example image streams and templates are installed on the cluster. |
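As an illustration of the registry settings listed above, the following is a minimal sketch of the registry Operator configuration with a public route, two replicas, and non-persistent storage. The values are examples only, and emptyDir storage is suitable only for testing.

```yaml
apiVersion: imageregistry.operator.openshift.io/v1
kind: Config
metadata:
  name: cluster
spec:
  managementState: Managed
  defaultRoute: true      # expose the registry through a public route
  replicas: 2             # example replica count
  logLevel: Normal
  storage:
    emptyDir: {}          # ephemeral storage; use persistent storage in production
```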
These configuration resources represent a single instance of a particular component. In some cases, you can request multiple instances by creating multiple instances of the resource. In other cases, the Operator can use only a specific resource instance name in a specific namespace. Reference the component-specific documentation for details on how and when you can create additional resource instances.
Resource name | Instance name | Namespace | Description |
---|---|---|---|
alertmanager.monitoring.coreos.com | main | openshift-monitoring | Controls the Alertmanager deployment parameters. |
ingresscontroller.operator.openshift.io | default | openshift-ingress-operator | Configures Ingress Operator behavior such as domain, number of replicas, certificates, and controller placement. |
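For example, the following minimal sketch of the default IngressController instance sets only the replica count; all other fields are omitted.

```yaml
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  replicas: 3   # example: run three router pods for the default Ingress Controller
```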
You use these resources to retrieve information about the cluster. Some configurations might require you to edit these resources directly.
Resource name | Instance name | Description |
---|---|---|
clusterversion.config.openshift.io | version | In {product-title} {product-version}, you must not customize the ClusterVersion resource. |
dns.config.openshift.io | cluster | You cannot modify the DNS settings for your cluster. You can check the DNS Operator status. |
infrastructure.config.openshift.io | cluster | Configuration details allowing the cluster to interact with its cloud provider. |
network.config.openshift.io | cluster | You cannot modify your cluster networking after installation. To customize your network, follow the process to customize networking during installation. |
After you deploy your {product-title} cluster, you can add worker nodes to scale cluster resources. There are different ways you can add worker nodes depending on the installation method and the environment of your cluster.
For on-premise clusters, you can add worker nodes by using the {product-title} CLI (oc) to generate an ISO image, which can then be used to boot one or more nodes in your target cluster. This process can be used regardless of how you installed your cluster.
You can add one or more nodes at a time while customizing each node with more complex configurations, such as static network configuration, or you can specify only the MAC address of each node. Any configurations that are not specified during ISO generation are retrieved from the target cluster and applied to the new nodes.
Preflight validation checks are also performed when booting the ISO image to inform you of failure-causing issues before you attempt to boot each node.
For installer-provisioned infrastructure clusters, you can manually or automatically scale the MachineSet object to match the number of available bare-metal hosts.
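A manual scale amounts to changing the replicas field of the compute machine set. The following fragment is a sketch only; the machine set name is hypothetical, and the selector and template fields that a complete MachineSet requires are omitted.

```yaml
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: example-cluster-worker-0   # hypothetical machine set name
  namespace: openshift-machine-api
spec:
  replicas: 3   # set to match the number of available bare-metal hosts
```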
To add a bare-metal host, you must configure all network prerequisites, configure an associated BareMetalHost object, and then provision the worker node to the cluster. You can add a bare-metal host manually or by using the web console.
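The following is a minimal sketch of a BareMetalHost object. The host name, MAC address, BMC address, and Secret name are placeholders, and the referenced Secret must contain the BMC credentials.

```yaml
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: worker-2                          # placeholder host name
  namespace: openshift-machine-api
spec:
  online: true
  bootMACAddress: "00:11:22:33:44:55"     # placeholder MAC address of the provisioning interface
  bmc:
    address: ipmi://192.0.2.10            # placeholder BMC address
    credentialsName: worker-2-bmc-secret  # placeholder Secret containing the BMC username and password
```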
For user-provisioned infrastructure clusters, you can add worker nodes by using a {op-system-base} or {op-system} ISO image and connecting it to your cluster by using cluster Ignition config files. For {op-system-base} worker nodes, you can use Ansible playbooks to add the worker nodes to the cluster. For {op-system} worker nodes, you can use an ISO image and network booting to add the worker nodes to the cluster.
For clusters managed by the Assisted Installer, you can add worker nodes by using the {cluster-manager-first} console or the Assisted Installer REST API, or you can manually add worker nodes by using an ISO image and cluster Ignition config files.
For clusters managed by the multicluster engine for Kubernetes, you can add worker nodes by using the dedicated multicluster engine console.
If you incorrectly sized the worker nodes during deployment, adjust them by creating one or more new compute machine sets, scaling them up, and then scaling the original compute machine set down before removing it.
Control plane machine sets provide management capabilities for control plane machines that are similar to what compute machine sets provide for compute machines. The availability and initial status of control plane machine sets on your cluster depend on your cloud provider and the version of {product-title} that you installed. For more information, see Getting started with control plane machine sets.
You can create a compute machine set to create machines that host only infrastructure components, such as the default router, the integrated container image registry, and components for cluster metrics and monitoring. These infrastructure machines are not counted toward the total number of subscriptions that are required to run the environment.
In a production deployment, it is recommended that you deploy at least three compute machine sets to hold infrastructure components. Both OpenShift Logging and {SMProductName} deploy Elasticsearch, which requires three instances to be installed on different nodes. Each of these nodes can be deployed to different availability zones for high availability. A configuration like this requires three different compute machine sets, one for each availability zone. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability.
For information on infrastructure nodes and which components can run on infrastructure nodes, see Creating infrastructure machine sets.
To create an infrastructure node, you can use a machine set, assign a label to the nodes, or use a machine config pool.
For sample machine sets that you can use with these procedures, see Creating machine sets for different clouds.
Applying a specific node selector to all infrastructure components causes {product-title} to schedule those workloads on nodes with that label.
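For example, to place the default router onto nodes that carry the infra label, you can set a node selector in the default IngressController instance, as in the following sketch:

```yaml
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  nodePlacement:
    nodeSelector:
      matchLabels:
        node-role.kubernetes.io/infra: ""   # schedule router pods onto nodes labeled as infra
```

Other infrastructure components, such as the registry and monitoring, expose similar node selector settings in their own configuration resources.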
- For information on how to configure project node selectors to avoid cluster-wide node selector key conflicts, see Project node selectors.
- See Node configuration management with machine config pools for more information on grouping infra machines in a custom pool.
After creating an infrastructure machine set, the worker and infra roles are applied to new infra nodes. Nodes with the infra role are not counted toward the total number of subscriptions that are required to run the environment, even when the worker role is also applied.
However, when an infra node is assigned the worker role, user workloads can inadvertently be scheduled onto it. To avoid this, you can apply a taint to the infra node and add tolerations to the pods that you want to control.
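The following sketch shows the pattern: a NoSchedule taint on the infra node and a matching toleration on a pod that is allowed to run there. The node name, pod name, and image are placeholders.

```yaml
# Taint on the infra node (fragment of the Node object):
apiVersion: v1
kind: Node
metadata:
  name: infra-node-1                      # placeholder node name
spec:
  taints:
  - key: node-role.kubernetes.io/infra
    effect: NoSchedule
---
# Matching toleration on a pod that is allowed to run on the tainted node:
apiVersion: v1
kind: Pod
metadata:
  name: infra-workload                    # placeholder pod name
spec:
  tolerations:
  - key: node-role.kubernetes.io/infra
    operator: Exists
    effect: NoSchedule
  containers:
  - name: example
    image: registry.example.com/example:latest   # placeholder image
```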
- See Controlling pod placement using the scheduler for general information on scheduling a pod to a node.
Some of the infrastructure resources are deployed in your cluster by default. You can move them to the infrastructure machine sets that you created.
For information about moving {logging} resources, see the {logging} documentation.
You can use the cluster autoscaler to automatically adjust the size of the cluster to meet deployment needs, and the machine autoscaler to automatically adjust the number of machines in the compute machine sets that you deploy in the cluster. You configure these features by creating ClusterAutoscaler and MachineAutoscaler resources.
You can turn on a subset of the current Technology Preview features for all nodes in the cluster by editing the FeatureGate custom resource (CR).
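For example, enabling the TechPreviewNoUpgrade feature set looks like the following sketch. Note that enabling this feature set cannot be undone and prevents minor version updates, so it is not suitable for production clusters.

```yaml
apiVersion: config.openshift.io/v1
kind: FeatureGate
metadata:
  name: cluster
spec:
  featureSet: TechPreviewNoUpgrade   # enables the current Technology Preview feature set; cannot be reverted
```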
Back up etcd, enable or disable etcd encryption, or defragment etcd data.
Note: If you deployed a bare-metal cluster, you can scale the cluster up to 5 nodes as part of your post-installation tasks. For more information, see Node scaling for etcd.
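For example, etcd encryption is enabled by setting an encryption type on the APIServer resource, as in the following sketch; the supported type values depend on your {product-title} version.

```yaml
apiVersion: config.openshift.io/v1
kind: APIServer
metadata:
  name: cluster
spec:
  encryption:
    type: aescbc   # example encryption type; set to identity to disable encryption
```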
Understand and configure pod disruption budgets.
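The following is a minimal sketch of a pod disruption budget; the name, namespace, and label selector are placeholders.

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: example-pdb            # placeholder name
  namespace: example-project   # placeholder namespace
spec:
  minAvailable: 2              # keep at least two matching pods available during voluntary disruptions
  selector:
    matchLabels:
      app: example             # placeholder label selector
  unhealthyPodEvictionPolicy: AlwaysAllow   # optional: allow eviction of pods that are not yet healthy
```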
- Unhealthy Pod Eviction Policy in the Kubernetes documentation