remove references to workers.celery.instances
Signed-off-by: Mathew Wicks <[email protected]>
thesuperzapper committed May 12, 2021
1 parent a10ee66 commit 68c17db
Showing 2 changed files with 4 additions and 9 deletions.
6 changes: 4 additions & 2 deletions charts/airflow/README.md
@@ -624,6 +624,10 @@ For a worker pod you can calculate it: `WORKER_CONCURRENCY * 200Mi`, so for `10
In the following config, if a worker consumes `80%` of `2Gi` (which will happen if it runs 9-10 tasks at the same time), an autoscaling event will be triggered and a new worker will be added.
If you have many tasks in a queue, Kubernetes will keep adding workers until maxReplicas is reached, in this case `16`.
```yaml
airflow:
  config:
    AIRFLOW__CELERY__WORKER_CONCURRENCY: 10

workers:
  # the initial/minimum number of workers
  replicas: 2
@@ -649,8 +653,6 @@ workers:
            averageUtilization: 80

  celery:
    instances: 10

    ## wait at most 9min for running tasks to complete before SIGTERM
    ## WARNING:
    ## - some cloud cluster-autoscaler configs will not respect graceful termination
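In the README example above, the chart-specific `workers.celery.instances` value is replaced by Airflow's own `AIRFLOW__CELERY__WORKER_CONCURRENCY` setting. A consolidated sketch of how the full example might read after this commit is shown below; the `resources` block and the exact `autoscaling.metrics` layout are assumptions based on the surrounding prose and the standard Kubernetes autoscaling/v2 metric format, not copied verbatim from the chart:

```yaml
airflow:
  config:
    # replaces the removed workers.celery.instances value
    AIRFLOW__CELERY__WORKER_CONCURRENCY: 10

workers:
  # the initial/minimum number of workers
  replicas: 2

  # each worker needs roughly WORKER_CONCURRENCY * 200Mi of memory
  resources:
    limits:
      memory: 2Gi

  autoscaling:
    enabled: true
    # Kubernetes keeps adding workers until this ceiling is reached
    maxReplicas: 16
    metrics:
      - type: Resource
        resource:
          name: memory
          target:
            type: Utilization
            # scale up once a worker uses ~80% of its memory limit
            averageUtilization: 80
```

With a concurrency of `10` and a `2Gi` limit, each task still budgets roughly `200Mi`, so the `80%` memory target fires once a worker is running 9-10 tasks, matching the behaviour described in the prose.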
7 changes: 0 additions & 7 deletions charts/airflow/examples/google-gke/custom-values.yaml
@@ -228,13 +228,6 @@ workers:
  ## configs for the celery worker Pods
  ##
  celery:
    ## the number of tasks each celery worker can run at a time
    ##
    ## NOTE:
    ## - sets AIRFLOW__CELERY__WORKER_CONCURRENCY
    ##
    instances: 10

    ## if we should wait for tasks to finish before SIGTERM of the celery worker
    ##
    gracefullTermination: true
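With `instances` removed from this example values file, per-worker concurrency would instead be set through the Airflow config block, mirroring the README change above. A minimal sketch of the equivalent `custom-values.yaml` fragment, assuming the `airflow.config` placement shown in the README hunk:

```yaml
airflow:
  config:
    # replaces the removed workers.celery.instances setting
    AIRFLOW__CELERY__WORKER_CONCURRENCY: 10

workers:
  ## configs for the celery worker Pods
  ##
  celery:
    ## if we should wait for tasks to finish before SIGTERM of the celery worker
    ##
    gracefullTermination: true
```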
