Commit 1fd2773: Address comments

pipo02mix committed Dec 13, 2024
2 parents ef1e42a + 6e121a3

Showing 2 changed files with 14 additions and 20 deletions.
To extend the configuration, you need to override these defaults using a `ConfigMap`.

The following examples assume the cluster you are trying to configure has an id of `myclustername`.

You will find the `ConfigMap` named `myclustername-cluster-autoscaler-user-values` in the organization namespace of your cluster:

```text
$ kubectl -n org-company get cm myclustername-cluster-autoscaler-user-values
NAME                                           DATA   AGE
myclustername-cluster-autoscaler-user-values   0      11m
```

On cluster creation, the user values `ConfigMap` is empty (or might not exist yet), and the following defaults are applied to the final `cluster-autoscaler` deployment. To customize any of the configuration options, add the respective lines to the `data` field of the user `ConfigMap`.

## How to set configuration options using the user values ConfigMap

On the platform API, create or edit a `ConfigMap` named `myclustername-cluster-autoscaler-user-values` in the workload cluster namespace:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app: cluster-autoscaler
  name: myclustername-cluster-autoscaler-user-values
  namespace: myorg
data:
  values: |
    configmap:
      scaleDownUtilizationThreshold: 0.30
```
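Assuming the manifest above is saved locally as `user-values.yaml` (an illustrative filename), it can be applied through the platform API like any other resource:

```text
$ kubectl -n myorg apply -f user-values.yaml
configmap/myclustername-cluster-autoscaler-user-values created
```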
## Configuration reference

The following sections explain some of the configuration options and what their defaults are. For brevity, only the `data` field of the `ConfigMap` is shown.

The most recent source of truth for these values can be found in the [values.yaml](https://github.com/giantswarm/cluster-autoscaler-app/blob/v1.30.3-gs1/helm/cluster-autoscaler-app/values.yaml) file of the `cluster-autoscaler-app`.

### Scale down utilization threshold

The `scaleDownUtilizationThreshold` defines the proportion between requested resources and capacity. Once utilization drops below this value, the cluster autoscaler considers the node removable.

Our default value is `0.70` (70%), which means that to become a scale-down candidate, a node has to have lower utilization (CPU/memory) than this threshold. You can adjust this value to your needs as shown below:

```yaml
data:
  values: |
    configmap:
      scaleDownUtilizationThreshold: 0.65
```
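For intuition, the upstream autoscaler FAQ defines node utilization as the larger of the CPU and memory ratios of summed pod requests to node allocatable. A minimal sketch of that comparison (function names are illustrative, not Giant Swarm code):

```python
def node_utilization(cpu_requested, cpu_allocatable, mem_requested, mem_allocatable):
    # Utilization is the max of the CPU and memory request ratios.
    return max(cpu_requested / cpu_allocatable, mem_requested / mem_allocatable)

def is_scale_down_candidate(utilization, threshold=0.65):
    # A node is only considered for removal below the threshold.
    return utilization < threshold

# A node with 2 of 8 CPUs and 12 of 16 GiB of memory requested is 75%
# utilized, so it is not a removal candidate at a 0.65 threshold.
u = node_utilization(2, 8, 12, 16)
print(u, is_scale_down_candidate(u))  # → 0.75 False
```

Note that utilization counts pod *requests*, not actual usage, so over-requesting workloads can keep nodes from ever falling below the threshold.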

### Scan interval

Defines the interval at which the cluster state is re-evaluated to decide whether to scale up or down. Our default value is 10 seconds.

```yaml
data:
  values: |
    configmap:
      scanInterval: "60s" # example: check less often than the 10s default
```

### Skip system pods

By default, the cluster autoscaler will never delete nodes which run pods of the `kube-system` namespace (except `DaemonSet` pods). This rule can be disabled by setting the following property to `"false"`.

```yaml
data:
  values: |
    configmap:
      skipNodesWithSystemPods: "false"
```

### Skip pods with local storage

By default, the cluster autoscaler deletes nodes with pods using local storage (`hostPath` or `emptyDir`). If you want to protect these nodes from removal, set the following property to `"true"`.

```yaml
data:
  values: |
    configmap:
      skipNodesWithLocalStorage: "true"
```
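For illustration, a pod like the following hypothetical manifest counts as using local storage because of its `emptyDir` volume; with `skipNodesWithLocalStorage: "true"`, its node would be protected from scale-down:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cache-worker # hypothetical example pod
spec:
  containers:
    - name: worker
      image: alpine:3.20
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: scratch
          mountPath: /scratch
  volumes:
    - name: scratch
      emptyDir: {} # local storage: blocks scale-down when the flag is "true"
```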

### Balance similar node groups

By default, the cluster autoscaler doesn't try to keep similar node groups balanced in size when scaling. To enable this behavior, set the following property to `"true"`.

```yaml
data:
  values: |
    configmap:
      balanceSimilarNodeGroups: "true"
```

Read [the Kubernetes autoscaler FAQ](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md) to learn more about the cluster autoscaler and its configuration options.
description: Recommendations and best practices around cluster and node sizing a
weight: 50
menu:
  main:
    parent: getting-started
owner:
  - https://github.com/orgs/giantswarm/teams/team-teddyfriends
user_questions:
