Automatically rescale to recover from pod deletion #717

Closed
Wants to merge 12 commits.
7 changes: 7 additions & 0 deletions dask_kubernetes/operator/controller/controller.py
@@ -504,7 +504,14 @@ async def daskcluster_default_worker_group_replica_update(
)


def resource_is_deleted(event, **_):
return event["type"] == "DELETED"


@kopf.on.field("daskworkergroup.kubernetes.dask.org", field="spec.worker.replicas")
@kopf.on.event(
kind="pod", when=resource_is_deleted, labels={"dask.org/component": "worker"}
)
A maintainer commented on lines +512 to +514:

I think with this decorator the spec that will be passed to daskworkergroup_replica_update will be the spec for the Pod, not the DaskWorkerGroup.

I think we probably need to put this on a separate function that gets the spec for the worker group and then calls daskworkergroup_replica_update.
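A minimal sketch of that suggested split, for illustration only: the handler name worker_pod_deleted, the dask.org/workergroup-name label used to locate the owning worker group, and the kubernetes_asyncio lookup are all assumptions rather than part of this diff.

import kopf
from kubernetes_asyncio import client


def resource_is_deleted(event, **_):
    return event["type"] == "DELETED"


@kopf.on.event(
    kind="pod", when=resource_is_deleted, labels={"dask.org/component": "worker"}
)
async def worker_pod_deleted(meta, namespace, logger, **kwargs):
    # Assumed label: worker Pods are expected to carry the name of the
    # DaskWorkerGroup that created them.
    group_name = meta["labels"]["dask.org/workergroup-name"]

    # Fetch the owning DaskWorkerGroup so we can forward *its* spec,
    # not the Pod spec that kopf hands to this handler.
    async with client.ApiClient() as api_client:
        custom_api = client.CustomObjectsApi(api_client)
        worker_group = await custom_api.get_namespaced_custom_object(
            group="kubernetes.dask.org",
            version="v1",
            namespace=namespace,
            plural="daskworkergroups",
            name=group_name,
        )

    # Re-use the existing scaling logic with the worker group's own spec.
    await daskworkergroup_replica_update(
        name=group_name,
        namespace=namespace,
        meta=worker_group["metadata"],
        spec=worker_group["spec"],
        new=worker_group["spec"]["worker"]["replicas"],
        body=worker_group,
        logger=logger,
    )
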

async def daskworkergroup_replica_update(
name, namespace, meta, spec, new, body, logger, **kwargs
):
@@ -189,6 +189,14 @@ async def test_scalesimplecluster_from_cluster_spec(
# argument to wait when removing workers once distributed
# PR github.com/dask/distributed/pull/6377 is merged.
await client.wait_for_workers(3)
k8s_cluster.kubectl(
"delete",
"pod",
"-l",
"dask.org/component=worker",
)
assert worker_pod_name not in k8s_cluster.kubectl("get", "pods")
await client.wait_for_workers(3) # recovery


@pytest.mark.timeout(180)
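Since the new test exercises recovery with plain kubectl, the same behaviour can be sanity-checked by hand against a running cluster. This sketch only reuses the label selector from the diff above and assumes the worker Pods live in the current context's namespace.

# Delete all worker Pods, then watch the operator recreate them.
kubectl delete pod -l dask.org/component=worker
kubectl get pods -l dask.org/component=worker --watch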