Adaptive schedule strategy for UnitedDeployment #1720

Open: AiRanthem wants to merge 2 commits into base master from feature/ud-adaptive-240827

Conversation

@AiRanthem (Contributor) commented Sep 2, 2024

Ⅰ. Describe what this PR does

This PR adds an adaptive scheduling strategy to UnitedDeployment. During scale-up, if some Pods in a subset cannot be scheduled for whatever reason, those unschedulable Pods are rescheduled to other subsets. During scale-down, if elastic allocation is used (i.e., the subsets are configured with min/max replicas), each subset retains as many ready Pods as possible without exceeding its maximum capacity, rather than strictly scaling down in reverse order of the subset list.
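
Based on the YAML example in section Ⅲ, the new API surface looks roughly like the sketch below. The type and field names here are inferred from the manifest and may differ from the actual definitions in uniteddeployment_types.go.

```go
// Sketch of the new scheduling-strategy API fields, inferred from the YAML
// example in section Ⅲ; names are assumptions, not the exact definitions
// added in uniteddeployment_types.go.
package v1alpha1

// UnitedDeploymentScheduleStrategyType names the scheduling strategy.
type UnitedDeploymentScheduleStrategyType string

const (
	// FixedSchedule (assumed name): Pods stay in the subset they were assigned to.
	FixedSchedule UnitedDeploymentScheduleStrategyType = "Fixed"
	// AdaptiveSchedule: unschedulable Pods may be moved to other subsets.
	AdaptiveSchedule UnitedDeploymentScheduleStrategyType = "Adaptive"
)

// UnitedDeploymentScheduleStrategy is referenced from topology.scheduleStrategy.
type UnitedDeploymentScheduleStrategy struct {
	Type     UnitedDeploymentScheduleStrategyType `json:"type,omitempty"`
	Adaptive *AdaptiveUnitedDeploymentStrategy    `json:"adaptive,omitempty"`
}

// AdaptiveUnitedDeploymentStrategy carries the two timeouts shown in the example.
type AdaptiveUnitedDeploymentStrategy struct {
	// Seconds a Pod may stay pending before it is treated as unschedulable
	// and rescheduled to another subset.
	RescheduleCriticalSeconds *int32 `json:"rescheduleCriticalSeconds,omitempty"`
	// Seconds a subset is kept in the unschedulable state before it is
	// considered recovered and receives new Pods again.
	UnschedulableLastSeconds *int32 `json:"unschedulableLastSeconds,omitempty"`
}
```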

Ⅱ. Does this pull request fix one issue?

fixes #1673

Ⅲ. Describe how to verify it

Use the YAML below to create a UnitedDeployment whose subset-b is unschedulable.

apiVersion: apps.kruise.io/v1alpha1
kind: UnitedDeployment
metadata:
  name: sample-ud
spec:
  replicas: 5
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: sample
  template:
    deploymentTemplate:
      metadata:
        labels:
          app: sample
      spec:
        selector:
          matchLabels:
            app: sample
        template:
          metadata:
            labels:
              app: sample
          spec:
            terminationGracePeriodSeconds: 0
            containers:
              - name: nginx
                image: curlimages/curl:8.8.0
                command: ["/bin/sleep", "infinity"]
  topology:
    scheduleStrategy:
      type: Adaptive
      adaptive:
        rescheduleCriticalSeconds: 10
        unschedulableLastSeconds: 20

    subsets:
      - name: subset-a
        maxReplicas: 2
        nodeSelectorTerm:
          matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - ci-testing-worker
      - name: subset-b
        maxReplicas: 2
        nodeSelectorTerm:
          matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - not-exist
      - name: subset-c
        nodeSelectorTerm:
          matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - ci-testing-worker3
1. When created, the two Pods in subset-b stay Pending.
2. After 10 seconds (rescheduleCriticalSeconds), the two Pending Pods are rescheduled to subset-c.
3. Scale up immediately: the new Pods are created in subset-c instead of subset-b, even though subset-b is not full.
4. Wait 20 seconds (unschedulableLastSeconds) for subset-b to be considered recovered, then scale up again: 2 Pods are scheduled into subset-b again (and still stay Pending).
5. Whenever you scale down, Pods are removed in the order subset-c -> subset-b -> subset-a.

Ⅳ. Special notes for reviews

1. adapter.go: GetReplicaDetails now also returns the Pods in the subset
2. xxx_adapter.go: per-workload adapter implementations of the Pod-returning change above
3. allocator.go: changes related to safeReplica
4. pod_condition_utils.go: extracts the PodUnscheduledTimeout function from workloadspread (a rough sketch of this kind of check follows this list)
5. reschedule.go: uses the extracted PodUnscheduledTimeout function
6. subset.go: adds some fields to the Subset object to carry related information
7. subset_control.go: stores the subset's Pods in the Subset object
8. uniteddeployment_controller.go:
   1. adds a requeue mechanism to re-check failed Pods
   2. manages the subsets' unschedulable status
9. uniteddeployment_types.go: API changes
10. uniteddeployment_update.go: syncs the unschedulable status to the CR
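
For orientation, the check extracted in items 4 and 5 presumably decides whether a Pod has been unschedulable for longer than a given timeout by inspecting its PodScheduled condition. The sketch below shows that general shape; it is an assumption about the helper, not the code from this PR.

```go
// Minimal sketch of an "unschedulable timeout" check of the kind items 4 and 5
// describe; an assumption about the shape of PodUnscheduledTimeout, not the
// extracted function itself.
package example

import (
	"time"

	corev1 "k8s.io/api/core/v1"
)

// podUnscheduledLongerThan reports whether pod has been unschedulable for at
// least timeout, based on its PodScheduled condition.
func podUnscheduledLongerThan(pod *corev1.Pod, timeout time.Duration, now time.Time) bool {
	for i := range pod.Status.Conditions {
		cond := pod.Status.Conditions[i]
		if cond.Type != corev1.PodScheduled {
			continue
		}
		if cond.Status == corev1.ConditionFalse && cond.Reason == corev1.PodReasonUnschedulable {
			return now.Sub(cond.LastTransitionTime.Time) >= timeout
		}
	}
	return false
}
```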

@kruise-bot

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign fei-guo for approval by writing /assign @fei-guo in a comment. For more information see: The Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment


codecov bot commented Sep 2, 2024

Codecov Report

Attention: Patch coverage is 49.30876% with 110 lines in your changes missing coverage. Please review.

Project coverage is 49.39%. Comparing base (0d0031a) to head (378185c).
Report is 94 commits behind head on master.

Files with missing lines | Patch % | Lines
--- | --- | ---
.../util/expectations/resource_version_expectation.go | 0.00% | 23 Missing ⚠️
...er/uniteddeployment/uniteddeployment_controller.go | 80.00% | 10 Missing and 5 partials ⚠️
...deployment/adapter/advanced_statefulset_adapter.go | 0.00% | 13 Missing ⚠️
...oller/uniteddeployment/adapter/cloneset_adapter.go | 0.00% | 13 Missing ⚠️
...ler/uniteddeployment/adapter/deployment_adapter.go | 0.00% | 13 Missing ⚠️
...er/uniteddeployment/adapter/statefulset_adapter.go | 0.00% | 13 Missing ⚠️
...roller/uniteddeployment/uniteddeployment_update.go | 0.00% | 8 Missing and 1 partial ⚠️
pkg/controller/uniteddeployment/allocator.go | 80.64% | 4 Missing and 2 partials ⚠️
pkg/controller/uniteddeployment/subset_control.go | 76.92% | 2 Missing and 1 partial ⚠️
pkg/controller/workloadspread/reschedule.go | 50.00% | 1 Missing and 1 partial ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master    #1720      +/-   ##
==========================================
+ Coverage   47.91%   49.39%   +1.48%     
==========================================
  Files         162      191      +29     
  Lines       23491    19728    -3763     
==========================================
- Hits        11256     9745    -1511     
+ Misses      11014     8719    -2295     
- Partials     1221     1264      +43     
Flag | Coverage Δ
--- | ---
unittests | 49.39% <49.30%> (+1.48%) ⬆️

Flags with carried forward coverage won't be shown.

@AiRanthem AiRanthem force-pushed the feature/ud-adaptive-240827 branch 2 times, most recently from e58f279 to 1cc7c87 Compare September 2, 2024 06:45
@zmberg zmberg added this to the 1.8 milestone Sep 3, 2024
@kruise-bot kruise-bot added the size/XL (500-999) label and removed the size/XXL label Sep 3, 2024
@kruise-bot kruise-bot added the size/XXL label and removed the size/XL (500-999) label Sep 4, 2024
@AiRanthem AiRanthem force-pushed the feature/ud-adaptive-240827 branch 2 times, most recently from cba277a to a50e39e Compare September 9, 2024 11:25
Three review threads on apis/apps/v1alpha1/uniteddeployment_types.go were marked outdated and resolved.
@AiRanthem AiRanthem force-pushed the feature/ud-adaptive-240827 branch 2 times, most recently from b37c453 to 300e67c Compare September 23, 2024 03:59
numSubset := len(ac.Spec.Topology.Subsets)
minReplicasMap := make(map[string]int32, numSubset)
maxReplicasMap := make(map[string]int32, numSubset)
notPendingReplicasMap := getSubsetRunningReplicas(nameToSubset)
Member commented:

Please change the local variable names and log keys, e.g. notPendingReplicasMap and notPendingReplicas, to runningReplicas...


if controllerutil.AddFinalizer(instance, UnitedDeploymentFinalizer) {
klog.InfoS("adding UnitedDeploymentFinalizer")
Member commented:

Add the UnitedDeployment name to this log entry.
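
For illustration, the log call could carry the object reference like this (a sketch; the exact log key is up to the author):

```go
// Sketch of the suggested log line; klog.KObj renders the namespace/name of
// the object, making the entry attributable to a specific UnitedDeployment.
klog.InfoS("adding UnitedDeploymentFinalizer", "unitedDeployment", klog.KObj(instance))
```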

// to avoid memory leak
klog.InfoS("cleaning up UnitedDeployment", "unitedDeployment", request)
ResourceVersionExpectation.Delete(instance)
if err = r.updateUnitedDeploymentInstance(instance); err != nil {
Member commented:

If the only clean-up work is to delete the expectation, it is not necessary to add a finalizer; we can just delete the expectation in the event handler. That saves the work of adding and removing the finalizer.

}

// make sure latest version is observed
ResourceVersionExpectation.Observe(instance)
Member commented:

We can observe the instance in the update handler of the event handler instead.
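
For context, ResourceVersionExpectation (see resource_version_expectation.go in the coverage report) apparently tracks the last written resourceVersion per object so the controller can tell whether its informer cache has caught up. The sketch below only illustrates that pattern, with Observe/Delete methods matching the calls above; the real implementation in this PR may differ.

```go
// Rough sketch of a resourceVersion-expectation store; illustrative only.
// Note: resourceVersion is treated as numeric here purely for illustration.
package example

import (
	"strconv"
	"sync"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

type resourceVersionExpectation struct {
	mu       sync.Mutex
	expected map[string]uint64 // key -> minimal resourceVersion we expect to observe
}

func newResourceVersionExpectation() *resourceVersionExpectation {
	return &resourceVersionExpectation{expected: map[string]uint64{}}
}

func key(obj metav1.Object) string {
	return obj.GetNamespace() + "/" + obj.GetName()
}

// Expect records that we have written obj and expect the informer cache to
// catch up to at least this resourceVersion before we trust it.
func (e *resourceVersionExpectation) Expect(obj metav1.Object) {
	rv, _ := strconv.ParseUint(obj.GetResourceVersion(), 10, 64)
	e.mu.Lock()
	defer e.mu.Unlock()
	if rv > e.expected[key(obj)] {
		e.expected[key(obj)] = rv
	}
}

// Observe clears the expectation once the cached object is at least as new
// as the recorded resourceVersion.
func (e *resourceVersionExpectation) Observe(obj metav1.Object) {
	rv, _ := strconv.ParseUint(obj.GetResourceVersion(), 10, 64)
	e.mu.Lock()
	defer e.mu.Unlock()
	if rv >= e.expected[key(obj)] {
		delete(e.expected, key(obj))
	}
}

// Delete drops any expectation for obj, e.g. when the object is deleted.
func (e *resourceVersionExpectation) Delete(obj metav1.Object) {
	e.mu.Lock()
	defer e.mu.Unlock()
	delete(e.expected, key(obj))
}
```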

@@ -124,3 +125,95 @@ func TestReconcile(t *testing.T) {
defer c.Delete(context.TODO(), instance)
g.Eventually(requests, timeout).Should(gomega.Receive(gomega.Equal(expectedRequest)))
}

func TestUnschedulableStatusManagement(t *testing.T) {
Member commented:

Rewrite the unit test using subtests, without gomega.
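
One possible shape for such a rewrite, using only the standard library; the case fields and the stand-in assertion are illustrative, not the real logic under test:

```go
package uniteddeployment_test

import (
	"testing"
	"time"
)

// Illustrative table-driven test with subtests and no gomega. The case fields
// and the stand-in check are placeholders for the real status-management call.
func TestUnschedulableStatusManagement(t *testing.T) {
	cases := []struct {
		name         string
		pendingFor   time.Duration
		timeout      time.Duration
		wantTimedOut bool
	}{
		{name: "pending shorter than timeout", pendingFor: 5 * time.Second, timeout: 10 * time.Second, wantTimedOut: false},
		{name: "pending longer than timeout", pendingFor: 15 * time.Second, timeout: 10 * time.Second, wantTimedOut: true},
	}
	for _, tc := range cases {
		tc := tc
		t.Run(tc.name, func(t *testing.T) {
			// Stand-in for invoking the real unschedulable-status management logic.
			got := tc.pendingFor >= tc.timeout
			if got != tc.wantTimedOut {
				t.Errorf("got %v, want %v", got, tc.wantTimedOut)
			}
		})
	}
}
```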

if err != nil {
r.recorder.Event(instance.DeepCopy(), corev1.EventTypeWarning, fmt.Sprintf("Failed%s", eventTypeDupSubsetsDelete), err.Error())
return nil, fmt.Errorf("fail to manage duplicate Subset of UnitedDeployment %s/%s: %s", instance.Namespace, instance.Name, err)
}

// If the Fixing scheduling strategy is used, the unschedulable state for all subsets remains false and
// the UnschedulableStatus of Subsets are not managed.
if instance.Spec.Topology.ScheduleStrategy.IsAdaptive() {
Member commented:

Consider moving this code block to Reconcile. getNameToSubset appears to be a simple function that returns the Subset struct for a given name; it should not contain complex subset-management logic.

subset.Status.UnschedulableStatus.PendingPods++
}
if checkAfter > 0 {
durationStore.Push(unitedDeploymentKey, checkAfter)
Member commented:

Would it be better to enqueue only the Pod that will time out earliest?
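
A sketch of that idea: track the minimum remaining time across pending Pods and push a single requeue. pendingPods, rescheduleTimeout, and pendingSince are hypothetical placeholders; durationStore and unitedDeploymentKey come from the surrounding code.

```go
// Sketch: push one requeue for the Pod that will time out first, instead of
// pushing once per pending Pod. pendingPods, rescheduleTimeout and
// pendingSince() are hypothetical placeholders for values the controller
// already tracks.
var minRemaining time.Duration = -1
for _, pod := range pendingPods {
	remaining := rescheduleTimeout - time.Since(pendingSince(pod))
	if remaining > 0 && (minRemaining < 0 || remaining < minRemaining) {
		minRemaining = remaining
	}
}
if minRemaining > 0 {
	durationStore.Push(unitedDeploymentKey, minRemaining)
}
```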

@@ -79,6 +80,17 @@ func (r *ReconcileUnitedDeployment) manageSubsets(ud *appsv1alpha1.UnitedDeploym
if updateErr == nil {
SetUnitedDeploymentCondition(newStatus, NewUnitedDeploymentCondition(appsv1alpha1.SubsetUpdated, corev1.ConditionTrue, "", ""))
} else {
// If using an Adaptive scheduling strategy, when the subset is scaled out leading to the creation of new Pods,
// future potential scheduling failures need to be checked for rescheduling.
var newPodCreated = false
Member commented:

Is it really necessary to trigger the requeue here, given that manageUnschedulableStatusForExistingSubset has already pushed a requeue duration?

@@ -132,6 +144,11 @@ func (r *ReconcileUnitedDeployment) manageSubsetProvision(ud *appsv1alpha1.Unite
return nil
})
if createdErr == nil {
// When a new subset is created, regardless of whether it contains newly created Pods,
Member commented:

Is it really necessary to trigger the requeue here, given that manageUnschedulableStatusForExistingSubset has already pushed a requeue duration?

@@ -348,10 +449,14 @@ func (r *ReconcileUnitedDeployment) classifySubsetBySubsetName(ud *appsv1alpha1.
return mapping
}

func (r *ReconcileUnitedDeployment) updateStatus(instance *appsv1alpha1.UnitedDeployment, newStatus, oldStatus *appsv1alpha1.UnitedDeploymentStatus, nameToSubset *map[string]*Subset, nextReplicas, nextPartition *map[string]int32, currentRevision, updatedRevision *appsv1.ControllerRevision, collisionCount int32, control ControlInterface) (reconcile.Result, error) {
func (r *ReconcileUnitedDeployment) updateStatus(instance *appsv1alpha1.UnitedDeployment, newStatus, oldStatus *appsv1alpha1.UnitedDeploymentStatus, nameToSubset *map[string]*Subset, nextReplicas, nextPartition *map[string]int32, currentRevision, updatedRevision *appsv1.ControllerRevision, collisionCount int32, control ControlInterface) error {
Member commented:

There are too many function parameters in updateStatus. Consider moving calculateStatus to Reconcile and making updateStatus accept only the instance plus the new and old status.
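
The proposed signature would then shrink to something like the sketch below (not the final code):

```go
// Sketch of the proposed signature after moving calculateStatus into Reconcile.
func (r *ReconcileUnitedDeployment) updateStatus(instance *appsv1alpha1.UnitedDeployment,
	newStatus, oldStatus *appsv1alpha1.UnitedDeploymentStatus) error {
	// Persist newStatus only if it differs from oldStatus.
	return nil
}
```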

Successfully merging this pull request may close these issues.

[feature request] UnitedDeployment support reschedule pod to other subset if current subset lacks resources