diff --git a/docs/EN_US/ContainerizedHPCC/ContainerizedMods/ConfigureValues.xml b/docs/EN_US/ContainerizedHPCC/ContainerizedMods/ConfigureValues.xml
index 514b694dbfa..ba8f4445047 100644
--- a/docs/EN_US/ContainerizedHPCC/ContainerizedMods/ConfigureValues.xml
+++ b/docs/EN_US/ContainerizedHPCC/ContainerizedMods/ConfigureValues.xml
@@ -1214,8 +1214,8 @@ thor:
needs.
You can deploy these values either using the values.yaml file or you
- can place into an file and have Kubernetes instead read the values from
- the supplied file. See the above section Customization
+      can place them into a file and have Kubernetes read the values from
+      the supplied file instead. See the above section Customization
Techniques for more information about customizing your
deployment.
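+      For example, a separate values file is typically supplied on the
+      helm command line with the -f (or --values) option. A minimal
+      sketch, assuming a release named mycluster, an override file named
+      myvalues.yaml (both names are illustrative), and that the HPCC
+      Systems helm repository has already been added as hpcc:
+helm install mycluster hpcc/hpcc -f myvalues.yaml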
@@ -1260,45 +1260,63 @@ thor:
The pods: [list] item can contain one of the following:
-
-
- Type: <component> Covers all pods/jobs under this
- type of component. This is commonly used for HPCC Systems
- components. For example, the type:thor
- which will apply to any of the Thor type components; thoragent,
- thormanager, thoragent and thorworker, etc.
-
-
-
- Target: <name> The "name" field of each component,
- typical usage for HPCC Systems components referrs to the cluster
- name. For example Roxie: -name: roxie which
- will be the "Roxie" target (cluster). You can also define
- multiple targets with each having a unique name such as "roxie",
- "roxie2", "roxie-web" etc
-
-
-
- Pod: This is the "Deployment" metadata name from the name
- of the array item of a type. For example, "eclwatch-",
- "mydali-", "thor-thoragent" This can be a regular expression
- since Kubernetes will use the metadata name as a prefix and
- dynamically generate the pod name such as,
- eclwatch-7f4dd4dd44cb-c0w3x.
-
-
-
- Job name: The job name is typically a regular expression
- as well, since the job name is generated dynamically. For
- example, a compile job compile-54eB67e567e, could use "compile-"
- or "compile-*" or the exact match "^compile-.$"
-
-
-
- All: applies for all HPCC Systems components. The default
- placements for pods delivered is [all]
-
-
+
+
+
+
+
+
+
+
+ Type: <component>
+
+                    Covers all pods/jobs under this type of component. This
+                    is commonly used for HPCC Systems components. For
+                    example, type:thor applies to any of the
+                    Thor type components: thoragent, thormanager,
+                    thorworker, etc.
+
+
+
+ Target: <name>
+
+                    The "name" field of each component. For HPCC Systems
+                    components this typically refers to the cluster name.
+                    For example, Roxie: -name: roxie defines
+                    the "roxie" target (cluster). You can also define
+                    multiple targets, each with a unique name such as
+                    "roxie", "roxie2", "roxie-web", etc.
+
+
+
+ Pod: <name>
+
+                    This is the "Deployment" metadata name, taken from the
+                    name of the array item of a type. For example,
+                    "eclwatch-", "mydali-", or "thor-thoragent". This can
+                    be a regular expression, since Kubernetes uses the
+                    metadata name as a prefix and dynamically generates
+                    the pod name, such as eclwatch-7f4dd4dd44cb-c0w3x.
+
+
+
+ Job name:
+
+                    The job name is typically a regular expression as well,
+                    since the job name is generated dynamically. For
+                    example, a compile job named compile-54eB67e567e could
+                    use "compile-" or "compile-*" or the anchored match
+                    "^compile-.*$".
+
+
+
+ All:
+
+                    Applies to all HPCC Systems components. The default
+                    placement delivered for pods is [all]. (A combined
+                    example follows this list.)
+
+
+
+
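+
+          As a rough illustration of these selector forms, a placements
+          entry in the values file might look like the following sketch
+          (the node label group: "hpcc" and the pod/job names used here
+          are illustrative only):
+placements:
+  # every Thor pod/job and the "roxie" target (cluster)
+  - pods: ["type:thor", "target:roxie"]
+    placement:
+      nodeSelector:
+        group: "hpcc"
+  # ECL Watch pods and dynamically named compile jobs
+  - pods: ["eclwatch-", "^compile-.*$"]
+    placement:
+      nodeSelector:
+        group: "hpcc"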
Regardless of the order the placements appear in the
configuration, they will be processed in the following order: "all",
@@ -1321,7 +1339,7 @@ thor:
Node Selection
- In a Kubernetes container environment there are several ways to
+ In a Kubernetes container environment, there are several ways to
schedule your nodes. The recommended approaches all use label selectors
to facilitate the selection. Generally, you may not need to set such
constraints; as the scheduler usually does reasonably acceptable
@@ -1393,8 +1411,8 @@ thor:
nodeSelector:
group: "hpcc"
- Note:the label: group:hpcc matches the node pool label:
- "hpcc".
+              Note: The label group:hpcc
+              matches the node pool label hpcc.
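+              For the selector to take effect, the nodes themselves must
+              carry the matching label. As a sketch (the node name is a
+              placeholder), a label can be applied to an existing node
+              with:
+kubectl label nodes <node-name> group=hpcc
+              More commonly, the label is supplied when the node pool
+              itself is created.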
This next example shows how to configure a node pool to prevent
scheduling a Dali component onto this node pool labelled with the key
@@ -1418,9 +1436,9 @@ thor:
Taints and Tolerations
Taints and Tolerations are types of Kubernetes node constraints
- also referred to by Node Affinity. Only one "affinity" can be applied
- to a pod. If a pod matches multiple placement 'pods' lists, then only
- the last "affinity" definition will apply.
+          also referred to as node affinity. Only one affinity can be
+          applied to a pod. If a pod matches multiple placement 'pods'
+          lists, then only the last affinity definition will apply.
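+          As a sketch only (the taint key gpu and its value are
+          assumptions, not taken from this document), a toleration
+          attached through a placements entry looks roughly like this;
+          the paragraphs below describe how taints and tolerations
+          interact:
+placements:
+  - pods: ["all"]
+    placement:
+      tolerations:
+      - key: "gpu"
+        operator: "Equal"
+        value: "true"
+        effect: "NoSchedule"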
Taints and tolerations work together to ensure that pods are not
scheduled onto inappropriate nodes. Tolerations are applied to pods,
@@ -1524,7 +1542,8 @@ thor:
respectively. The Roxie pods will be evenly scheduled on the two node
pools.
- After deployment you can verify by issuing the following:
+          After deployment, you can verify this by issuing the following
+          command:
kubectl get pod -o wide | grep roxie
@@ -1570,7 +1589,7 @@ thor:
There is no schema check for the content of affinity. Only one
affinity can be applied to a pod or job. If a pod/job matches
- multiple placement 'pods' lists, then only the last affinity
+ multiple placement pods lists, then only the last affinity
definition applies.
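+              For example, in a sketch like the following (the label keys
+              and values are illustrative), a Thor pod or job matches both
+              entries, so only the second affinity definition takes
+              effect:
+placements:
+  - pods: ["all"]
+    placement:
+      affinity:
+        nodeAffinity:
+          requiredDuringSchedulingIgnoredDuringExecution:
+            nodeSelectorTerms:
+            - matchExpressions:
+              - key: group
+                operator: In
+                values: ["hpcc"]
+  - pods: ["type:thor"]
+    placement:
+      affinity:
+        nodeAffinity:
+          requiredDuringSchedulingIgnoredDuringExecution:
+            nodeSelectorTerms:
+            - matchExpressions:
+              - key: group
+                operator: In
+                values: ["thorpool"]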
For more information, see Only one "schedulerName" can be applied to any pod/job.
- A SchedulerName example:
+ A schedulerName example:
- pods: ["target:roxie"]
placement: