Postinstallation cluster tasks

After installing {product-title}, you can further expand and customize your cluster to meet your requirements.

Available cluster customizations

You complete most of the cluster configuration and customization after you deploy your {product-title} cluster. A number of configuration resources are available.

Note

If you install your cluster on {ibm-z-name}, not all features and functions are available.

You modify the configuration resources to configure the major features of the cluster, such as the image registry, networking configuration, image build behavior, and the identity provider.

For current documentation of the settings that you control by using these resources, use the oc explain command, for example, oc explain builds --api-version=config.openshift.io/v1.
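
For instance, the following commands show how you might inspect the build configuration schema; drilling into builds.spec is only an illustration of viewing a nested field:

[source,terminal]
----
# Show the documented schema for the cluster-wide build configuration resource
$ oc explain builds --api-version=config.openshift.io/v1

# Illustrative only: drill into a nested field of the same resource
$ oc explain builds.spec --api-version=config.openshift.io/v1
----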

Cluster configuration resources

All cluster configuration resources are globally scoped (not namespaced) and named cluster.

apiserver.config.openshift.io: Provides API server configuration such as certificates and certificate authorities.
authentication.config.openshift.io: Controls the identity provider and authentication configuration for the cluster.
build.config.openshift.io: Controls default and enforced configuration for all builds on the cluster.
console.config.openshift.io: Configures the behavior of the web console interface, including the logout behavior.
featuregate.config.openshift.io: Enables FeatureGates so that you can use Tech Preview features.
image.config.openshift.io: Configures how specific image registries should be treated (allowed, disallowed, insecure, CA details).
ingress.config.openshift.io: Configuration details related to routing, such as the default domain for routes.
oauth.config.openshift.io: Configures identity providers and other behavior related to internal OAuth server flows.
project.config.openshift.io: Configures how projects are created, including the project template.
proxy.config.openshift.io: Defines proxies to be used by components needing external network access. Note: not all components currently consume this value.
scheduler.config.openshift.io: Configures scheduler behavior such as profiles and default node selectors.
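
Because each of these resources is cluster-scoped and named cluster, you can inspect or change any of them with standard oc commands; a minimal sketch, using image.config.openshift.io as an arbitrary example:

[source,terminal]
----
# View the current cluster-wide image configuration
$ oc get image.config.openshift.io cluster -o yaml

# Open the same resource in your default editor to modify it
$ oc edit image.config.openshift.io cluster
----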

Operator configuration resources

These configuration resources are cluster-scoped instances, named cluster, which control the behavior of a specific component as owned by a particular Operator.

consoles.operator.openshift.io: Controls console appearance such as branding customizations.
config.imageregistry.operator.openshift.io: Configures {product-registry} settings such as public routing, log levels, proxy settings, resource constraints, replica counts, and storage type.
config.samples.operator.openshift.io: Configures the Samples Operator to control which example image streams and templates are installed on the cluster.
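
As a hedged example, you could change one of these settings with a merge patch; on the command line the plural resource name configs.imageregistry.operator.openshift.io is used, and the logLevel value shown is only an illustration:

[source,terminal]
----
# Example only: raise the log level of the image registry Operator configuration
$ oc patch configs.imageregistry.operator.openshift.io/cluster \
    --type merge --patch '{"spec":{"logLevel":"Debug"}}'
----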

Additional configuration resources

These configuration resources represent a single instance of a particular component. In some cases, you can request multiple instances by creating multiple instances of the resource. In other cases, the Operator can use only a specific resource instance name in a specific namespace. Reference the component-specific documentation for details on how and when you can create additional resource instances.

alertmanager.monitoring.coreos.com (instance name: main, namespace: openshift-monitoring): Controls the Alertmanager deployment parameters.
ingresscontroller.operator.openshift.io (instance name: default, namespace: openshift-ingress-operator): Configures Ingress Operator behavior such as domain, number of replicas, certificates, and controller placement.
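
For example, you can inspect the default IngressController or adjust its replica count; the replica value shown is illustrative:

[source,terminal]
----
# Inspect the default IngressController instance
$ oc get ingresscontroller default -n openshift-ingress-operator -o yaml

# Example only: run the default IngressController with three replicas
$ oc patch ingresscontroller default -n openshift-ingress-operator \
    --type merge --patch '{"spec":{"replicas":3}}'
----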

Informational resources

You use these resources to retrieve information about the cluster. Some configurations might require you to edit these resources directly.

clusterversion.config.openshift.io (instance name: version): In {product-title} {product-version}, you must not customize the ClusterVersion resource for production clusters. Instead, follow the process to update a cluster.
dns.config.openshift.io (instance name: cluster): You cannot modify the DNS settings for your cluster. You can check the DNS Operator status.
infrastructure.config.openshift.io (instance name: cluster): Configuration details allowing the cluster to interact with its cloud provider.
network.config.openshift.io (instance name: cluster): You cannot modify your cluster networking after installation. To customize your network, follow the process to customize networking during installation.
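
You can read these resources in the same way as other configuration resources; for example:

[source,terminal]
----
# View the cluster version resource and its update history
$ oc get clusterversion version -o yaml

# View the cluster DNS configuration (read-only after installation)
$ oc get dns.config.openshift.io cluster -o yaml
----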

Adding worker nodes

After you deploy your {product-title} cluster, you can add worker nodes to scale cluster resources. There are different ways you can add worker nodes depending on the installation method and the environment of your cluster.

Adding worker nodes to an on-premise cluster

For on-premise clusters, you can add worker nodes by using the {product-title} CLI (oc) to generate an ISO image, which can then be used to boot one or more nodes in your target cluster. This process can be used regardless of how you installed your cluster.

You can add one or more nodes at a time while customizing each node with more complex configurations, such as static network configuration, or you can specify only the MAC address of each node. Any configurations that are not specified during ISO generation are retrieved from the target cluster and applied to the new nodes.

Preflight validation checks are also performed when booting the ISO image to inform you of failure-causing issues before you attempt to boot each node.
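
A minimal sketch of this flow, assuming the oc adm node-image create command available in recent {product-title} releases; the MAC address is a placeholder, and the available flags vary by version:

[source,terminal]
----
# Hedged sketch: generate a node ISO for one additional node identified only by its MAC address
$ oc adm node-image create --mac-address=00:11:22:33:44:55

# Boot the generated ISO on the new host, then watch for the node to register with the cluster
$ oc get nodes --watch
----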

Adding worker nodes to installer-provisioned infrastructure clusters

For installer-provisioned infrastructure clusters, you can manually or automatically scale the MachineSet object to match the number of available bare-metal hosts.

To add a bare-metal host, you must configure all network prerequisites, configure an associated BareMetalHost object, and then provision the worker node to the cluster. You can add a bare-metal host manually or by using the web console.
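
For instance, after the BareMetalHost resources are available, scaling a worker machine set provisions nodes onto them; the machine set name and replica count are placeholders:

[source,terminal]
----
# List the bare-metal hosts known to the cluster
$ oc get baremetalhosts -n openshift-machine-api

# Scale a worker machine set to match the number of available hosts
$ oc scale machineset <machineset_name> -n openshift-machine-api --replicas=2
----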

Adding worker nodes to user-provisioned infrastructure clusters

For user-provisioned infrastructure clusters, you can add worker nodes by using a {op-system-base} or {op-system} ISO image and connecting it to your cluster by using cluster Ignition config files. For {op-system-base} worker nodes, you typically run Ansible playbooks to add the nodes to the cluster. For {op-system} worker nodes, you typically use an ISO image and network booting to add the nodes to the cluster.
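
Whichever image type you use, new user-provisioned workers typically appear as pending certificate signing requests that you must approve before the nodes become Ready; for example:

[source,terminal]
----
# List pending certificate signing requests from the new nodes
$ oc get csr

# Approve a request (placeholder name); repeat for the subsequent serving-certificate request
$ oc adm certificate approve <csr_name>
----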

Adding worker nodes to clusters managed by the Assisted Installer

For clusters managed by the Assisted Installer, you can add worker nodes by using the {cluster-manager-first} console or the Assisted Installer REST API, or you can manually add worker nodes by using an ISO image and cluster Ignition config files.

Adding worker nodes to clusters managed by the multicluster engine for Kubernetes

For clusters managed by the multicluster engine for Kubernetes, you can add worker nodes by using the dedicated multicluster engine console.

Adjust worker nodes

If you incorrectly sized the worker nodes during deployment, adjust them by creating one or more new compute machine sets, scaling the new machine sets up, and then scaling the original compute machine set down before removing it.
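
A sketch of that sequence, with placeholder machine set names and replica counts:

[source,terminal]
----
# Scale up the new, correctly sized compute machine set
$ oc scale machineset <new_machineset> -n openshift-machine-api --replicas=3

# After the new machines join as nodes, scale the original machine set to zero
$ oc scale machineset <old_machineset> -n openshift-machine-api --replicas=0

# Remove the original machine set once its machines are gone
$ oc delete machineset <old_machineset> -n openshift-machine-api
----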

Managing control plane machines

Control plane machine sets provide management capabilities for control plane machines that are similar to what compute machine sets provide for compute machines. The availability and initial status of control plane machine sets on your cluster depend on your cloud provider and the version of {product-title} that you installed. For more information, see Getting started with control plane machine sets.
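
To check whether a control plane machine set exists on your cluster and what state it is in, you can query the standard machine API namespace:

[source,terminal]
----
# Check for the control plane machine set and its current state
$ oc get controlplanemachineset -n openshift-machine-api
----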

Creating infrastructure machine sets for production environments

You can create a compute machine set to create machines that host only infrastructure components, such as the default router, the integrated container image registry, and components for cluster metrics and monitoring. These infrastructure machines are not counted toward the total number of subscriptions that are required to run the environment.

In a production deployment, it is recommended that you deploy at least three compute machine sets to hold infrastructure components. Both OpenShift Logging and {SMProductName} deploy Elasticsearch, which requires three instances to be installed on different nodes. Each of these nodes can be deployed to different availability zones for high availability. A configuration like this requires three different compute machine sets, one for each availability zone. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability.

For information on infrastructure nodes and which components can run on infrastructure nodes, see Creating infrastructure machine sets.

To create an infrastructure node, you can use a machine set, assign a label to the nodes, or use a machine config pool.

For sample machine sets that you can use with these procedures, see Creating machine sets for different clouds.

Applying a specific node selector to all infrastructure components causes {product-title} to schedule those workloads on nodes with that label.
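
For example, labeling existing nodes is the simplest of the three approaches; the node name is a placeholder:

[source,terminal]
----
# Add the infra role label to a node
$ oc label node <node_name> node-role.kubernetes.io/infra=""

# Confirm which nodes now carry the infra role
$ oc get nodes -l node-role.kubernetes.io/infra
----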

Additional resources
  • For information on how to configure project node selectors to avoid cluster-wide node selector key conflicts, see Project node selectors.


Assigning machine set resources to infrastructure nodes

After you create an infrastructure machine set, the worker and infra roles are applied to new infra nodes. Nodes with the infra role are not counted toward the total number of subscriptions that are required to run the environment, even when the worker role is also applied.

However, when an infra node is assigned the worker role, there is a chance that user workloads can get assigned inadvertently to the infra node. To avoid this, you can apply a taint to the infra node and add tolerations to the pods that you want to control.
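
A hedged example of that pattern: the taint key and reserved value follow a common convention for infra nodes, and the deployment name and namespace are placeholders:

[source,terminal]
----
# Taint the infra node so that workloads without a matching toleration are not scheduled onto it
$ oc adm taint nodes <node_name> node-role.kubernetes.io/infra=reserved:NoSchedule

# Add a matching toleration to a workload that is allowed to run on the infra node
$ oc patch deployment <deployment_name> -n <namespace> --type merge --patch \
    '{"spec":{"template":{"spec":{"tolerations":[{"key":"node-role.kubernetes.io/infra","value":"reserved","effect":"NoSchedule"}]}}}}'
----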


Moving resources to infrastructure machine sets

Some of the infrastructure resources are deployed in your cluster by default. You can move them to the infrastructure machine sets that you created.
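
For example, the default router can be moved by giving its IngressController a node placement that targets infra nodes; a minimal sketch:

[source,terminal]
----
# Pin the default IngressController to nodes that carry the infra role
$ oc patch ingresscontroller default -n openshift-ingress-operator --type merge --patch \
    '{"spec":{"nodePlacement":{"nodeSelector":{"matchLabels":{"node-role.kubernetes.io/infra":""}}}}}'
----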

Enabling Technology Preview features using FeatureGates

You can turn on a subset of the current Technology Preview features for all nodes in the cluster by editing the FeatureGate custom resource (CR).
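
A minimal sketch of enabling the Technology Preview feature set. Note that enabling TechPreviewNoUpgrade cannot be undone and prevents minor version updates, so use it only on clusters you can discard:

[source,terminal]
----
# Enable the Technology Preview feature set by editing the FeatureGate CR
$ oc edit featuregate cluster

# Or apply the same change with a patch
$ oc patch featuregate cluster --type merge --patch '{"spec":{"featureSet":"TechPreviewNoUpgrade"}}'
----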