How can deserved resource be lent to jobs in another queue? #3677
Comments
It can be lent to other queues automatically when the queue's guarantee is less than its deserved value.
There is no overused check in the capacity plugin, only a preemptive one, so other queues can get resources allocated when there are idle resources. That is to say, a queue can allocate resources while the cluster has idle resources, and they will be reclaimed when another queue has jobs submitted, until that queue's deserved value is met.
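For reference, under the capacity plugin a Queue declares the amount it can lend in `spec.deserved`. This is a minimal sketch, assuming Volcano v1.9+ where the capacity plugin reads this field; the resource figures and queue name are illustrative, not from the thread:

```yaml
apiVersion: scheduling.volcano.sh/v1beta1
kind: Queue
metadata:
  name: flight-low
spec:
  reclaimable: true
  # Resources between `guarantee` and `deserved` can be lent to other
  # queues while idle, and reclaimed back once this queue needs them.
  deserved:
    cpu: "8"
    memory: 16Gi
  guarantee:
    resource:
      cpu: "2"
      memory: 4Gi
```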
What troubles me is the difference between capacity and proportion. In my test case (scheduler-conf): when I create a SparkApplication and the flight-low queue's resources are not enough, my expectation is that the SparkApplication's pods will stay pending unless I use the capacity plugin. But right now they also run when I don't use the capacity plugin. Or did I miscalculate the cluster capacity? I used --node-selector to limit the scheduler. How does the scheduler determine cluster resources in this case: only the nodes selected by node-selector, or all nodes?
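For context on switching between the two plugins: capacity and proportion are alternatives in the scheduler ConfigMap, not meant to run together. A hedged sketch of a volcano-scheduler.conf enabling capacity follows; the exact tier layout is illustrative, and `reclaim` must be in the action list for deserved-based reclaiming to run:

```yaml
actions: "enqueue, allocate, backfill, reclaim"
tiers:
- plugins:
  - name: priority
  - name: gang
  - name: conformance
- plugins:
  - name: drf
  - name: predicates
  # Use `capacity` here instead of `proportion`; each computes queue
  # shares differently, so enable only one of them at a time.
  - name: capacity
  - name: nodeorder
  - name: binpack
```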
Only nodes matching the specified --node-selector are counted. Can you provide more info about the cluster resources and the job & queue YAML?
My cluster resources are as follows:
[root@k8s-177-012-005 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
ylds-160-90-100.linux.17usoft.com Ready <none> 28d v1.28.8
ylds-160-90-71.linux.17usoft.com Ready,SchedulingDisabled control-plane 199d v1.28.8
ylds-160-90-72.linux.17usoft.com Ready,SchedulingDisabled control-plane 199d v1.28.8
ylds-160-90-73.linux.17usoft.com Ready,SchedulingDisabled control-plane 199d v1.28.8
ylds-160-90-82.linux.17usoft.com Ready <none> 24d v1.28.8
ylds-160-90-83.linux.17usoft.com Ready,SchedulingDisabled <none> 198d v1.28.8
ylds-160-90-84.linux.17usoft.com Ready,SchedulingDisabled <none> 198d v1.28.8
ylds-160-90-85.linux.17usoft.com Ready <none> 198d v1.28.8
ylds-160-90-95.linux.17usoft.com Ready,SchedulingDisabled <none> 198d v1.28.8
ylds-160-90-96.linux.17usoft.com Ready <none> 198d v1.28.8
ylds-160-90-97.linux.17usoft.com Ready <none> 47h v1.28.8
[root@k8s-177-012-005 ~]# kubectl get node -l queue-test=true
NAME STATUS ROLES AGE VERSION
ylds-160-90-82.linux.17usoft.com Ready <none> 24d v1.28.8
[root@k8s-177-012-005 ~]# kubectl describe node ylds-160-90-82.linux.17usoft.com
Name: ylds-160-90-82.linux.17usoft.com
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
env=qa
idcNameAbbr=YLDS
kubernetes.io/arch=amd64
kubernetes.io/hostname=ylds-160-90-82.linux.17usoft.com
kubernetes.io/os=linux
logicIdcArea=cn_north
logicIdcUk=officeidc_hb1
node=real
node_classify=work
nodelocal=true
owner=inf
queue-test=true
rack=leaf64906
spark=test
Annotations: CpuOversoldFactor: 10
csi.volume.kubernetes.io/nodeid: {"csi.juicefs.com":"ylds-160-90-82.linux.17usoft.com"}
kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
projectcalico.org/ASNumber: 64906
projectcalico.org/IPv4Address: 10.160.90.82/26
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 22 Jul 2024 20:13:46 +0800
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: ylds-160-90-82.linux.17usoft.com
AcquireTime: <unset>
RenewTime: Fri, 16 Aug 2024 16:44:42 +0800
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
NetworkUnavailable False Wed, 31 Jul 2024 19:55:50 +0800 Wed, 31 Jul 2024 19:55:50 +0800 CalicoIsUp Calico is running on this node
MemoryPressure False Fri, 16 Aug 2024 16:44:37 +0800 Wed, 31 Jul 2024 15:13:22 +0800 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Fri, 16 Aug 2024 16:44:37 +0800 Wed, 31 Jul 2024 15:13:22 +0800 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Fri, 16 Aug 2024 16:44:37 +0800 Wed, 31 Jul 2024 15:13:22 +0800 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Fri, 16 Aug 2024 16:44:37 +0800 Wed, 31 Jul 2024 15:13:22 +0800 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 10.160.90.82
Hostname: ylds-160-90-82.linux.17usoft.com
Capacity:
cpu: 28
ephemeral-storage: 4573960Mi
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 263213692Ki
pods: 110
tche.aos.io/batch-cpu: 0
tche.aos.io/batch-memory: 114751593909
tche.aos.io/data-storage: 3124111108Ki
Allocatable:
cpu: 280
ephemeral-storage: 4197704750260
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 229556860Ki
pods: 110
tche.aos.io/batch-cpu: 0
tche.aos.io/batch-memory: 114751593909
tche.aos.io/data-storage: 3124111108Ki
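One possible reading of the dump above (my own interpretation, not something confirmed in the thread): Allocatable CPU is 280 while physical Capacity is 28, which matches the `CpuOversoldFactor: 10` annotation, so the scheduler sees far more CPU on this node than the physical count suggests:

```python
# Figures taken from the `kubectl describe node` output above.
capacity_cpu = 28      # physical CPU count in Capacity
oversold_factor = 10   # CpuOversoldFactor annotation (assumed to apply here)
allocatable_cpu = capacity_cpu * oversold_factor  # 280, as in Allocatable

# The SparkApplication below requests driver 10 cores + 1 executor x 10 cores.
job_request_cpu = 10 + 1 * 10

print(allocatable_cpu)                      # 280
print(job_request_cpu <= allocatable_cpu)   # True: the job fits on this node
```

If the oversold allocatable is what the scheduler counts, that alone could explain why the job runs despite the queue looking too small on paper.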
My job is as follows; it relies on spark-operator:
apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
metadata:
name: spark-pi-d
namespace: spark-operator
spec:
type: Scala
mode: cluster
image: "docker.io/library/spark:3.4.2"
imagePullPolicy: Always
mainClass: org.apache.spark.examples.SparkPi
nodeSelector:
kubernetes.io/hostname: "ylds-160-90-82.linux.17usoft.com"
arguments:
- "214400"
mainApplicationFile: "local:///opt/spark/examples/jars/spark-examples_2.12-3.4.2.jar"
sparkVersion: "3.4.2"
batchScheduler: "volcano-spark" #Note: the batch scheduler name must be specified with `volcano`
restartPolicy:
type: Never
volumes:
- name: "test-volume"
hostPath:
path: "/tmp"
type: Directory
batchSchedulerOptions:
queue: "flight-low"
driver:
cores: 10
memory: "5120m"
labels:
version: 3.4.2
serviceAccount: spark-service-account
volumeMounts:
- name: "test-volume"
mountPath: "/tmp"
executor:
cores: 10
instances: 1
memory: "51200m"
labels:
version: 3.4.2
volumeMounts:
- name: "test-volume"
mountPath: "/tmp"
All my queues are like this:
apiVersion: v1
items:
- apiVersion: scheduling.volcano.sh/v1beta1
kind: Queue
metadata:
creationTimestamp: "2024-07-24T09:13:53Z"
generation: 1
managedFields:
- apiVersion: scheduling.volcano.sh/v1beta1
fieldsType: FieldsV1
fieldsV1:
f:spec:
.: {}
f:guarantee: {}
f:reclaimable: {}
f:weight: {}
manager: vc-scheduler
operation: Update
time: "2024-07-24T09:13:53Z"
- apiVersion: scheduling.volcano.sh/v1beta1
fieldsType: FieldsV1
fieldsV1:
f:status:
f:allocated:
f:attachable-volumes-csi-csi.juicefs.com: {}
f:cpu: {}
f:memory: {}
f:pods: {}
manager: vc-scheduler
operation: Update
subresource: status
time: "2024-08-16T07:34:22Z"
- apiVersion: scheduling.volcano.sh/v1beta1
fieldsType: FieldsV1
fieldsV1:
f:status:
.: {}
f:allocated: {}
f:inqueue: {}
f:reservation: {}
f:running: {}
f:state: {}
manager: vc-controller-manager
operation: Update
subresource: status
time: "2024-08-16T07:34:23Z"
name: default
resourceVersion: "124274808"
uid: a4438998-0988-461a-be5e-edde21c8c3d1
spec:
guarantee: {}
reclaimable: true
weight: 1
status:
allocated:
attachable-volumes-csi-csi.juicefs.com: 1m
cpu: 7022m
memory: 59806Mi
pods: "26"
inqueue: 6
reservation: {}
running: 23
state: Open
- apiVersion: scheduling.volcano.sh/v1beta1
kind: Queue
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"scheduling.volcano.sh/v1beta1","kind":"Queue","metadata":{"annotations":{},"name":"demo-queue"},"spec":{"capability":{"cpu":"16","memory":"8Gi"},"reclaimable":true,"weight":1},"status":{"state":"Open"}}
creationTimestamp: "2024-07-26T03:04:38Z"
generation: 1
managedFields:
- apiVersion: scheduling.volcano.sh/v1beta1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.: {}
f:kubectl.kubernetes.io/last-applied-configuration: {}
f:spec:
.: {}
f:capability:
.: {}
f:cpu: {}
f:memory: {}
f:reclaimable: {}
f:weight: {}
manager: kubectl-client-side-apply
operation: Update
time: "2024-07-26T03:04:38Z"
- apiVersion: scheduling.volcano.sh/v1beta1
fieldsType: FieldsV1
fieldsV1:
f:status:
.: {}
f:allocated: {}
f:reservation: {}
f:state: {}
manager: vc-controller-manager
operation: Update
subresource: status
time: "2024-07-26T03:04:38Z"
- apiVersion: scheduling.volcano.sh/v1beta1
fieldsType: FieldsV1
fieldsV1:
f:status:
f:allocated:
f:cpu: {}
f:memory: {}
manager: vc-scheduler
operation: Update
subresource: status
time: "2024-07-26T03:04:39Z"
name: demo-queue
resourceVersion: "101062235"
uid: abb5dd90-03f9-4cd7-bcd3-e7d44cbd224b
spec:
capability:
cpu: "16"
memory: 8Gi
reclaimable: true
weight: 1
status:
allocated:
cpu: "0"
memory: "0"
reservation: {}
state: Open
- apiVersion: scheduling.volcano.sh/v1beta1
kind: Queue
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"scheduling.volcano.sh/v1beta1","kind":"Queue","metadata":{"annotations":{},"name":"flight-high"},"spec":{"reclaimable":false,"weight":9}}
creationTimestamp: "2024-08-13T09:44:11Z"
generation: 3
managedFields:
- apiVersion: scheduling.volcano.sh/v1beta1
fieldsType: FieldsV1
fieldsV1:
f:status:
f:allocated:
f:cpu: {}
f:memory: {}
manager: vc-scheduler
operation: Update
subresource: status
time: "2024-08-15T08:36:46Z"
- apiVersion: scheduling.volcano.sh/v1beta1
fieldsType: FieldsV1
fieldsV1:
f:status:
.: {}
f:allocated: {}
f:inqueue: {}
f:reservation: {}
f:state: {}
manager: vc-controller-manager
operation: Update
subresource: status
time: "2024-08-15T12:09:22Z"
- apiVersion: scheduling.volcano.sh/v1beta1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.: {}
f:kubectl.kubernetes.io/last-applied-configuration: {}
f:spec:
.: {}
f:reclaimable: {}
f:weight: {}
manager: kubectl
operation: Update
time: "2024-08-16T07:20:59Z"
name: flight-high
resourceVersion: "124264146"
uid: 6e52bd17-ef04-477d-99cc-0b22b1872a67
spec:
reclaimable: false
weight: 9
status:
allocated:
cpu: "0"
memory: "0"
inqueue: 1
reservation: {}
state: Open
- apiVersion: scheduling.volcano.sh/v1beta1
kind: Queue
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"scheduling.volcano.sh/v1beta1","kind":"Queue","metadata":{"annotations":{},"name":"flight-low"},"spec":{"reclaimable":false,"weight":1}}
creationTimestamp: "2024-08-13T09:44:04Z"
generation: 4
managedFields:
- apiVersion: scheduling.volcano.sh/v1beta1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.: {}
f:kubectl.kubernetes.io/last-applied-configuration: {}
f:spec:
.: {}
f:reclaimable: {}
f:weight: {}
manager: kubectl
operation: Update
time: "2024-08-16T07:20:52Z"
- apiVersion: scheduling.volcano.sh/v1beta1
fieldsType: FieldsV1
fieldsV1:
f:status:
f:allocated:
f:cpu: {}
f:memory: {}
manager: vc-scheduler
operation: Update
subresource: status
time: "2024-08-16T08:47:21Z"
- apiVersion: scheduling.volcano.sh/v1beta1
fieldsType: FieldsV1
fieldsV1:
f:status:
.: {}
f:allocated: {}
f:reservation: {}
f:running: {}
f:state: {}
manager: vc-controller-manager
operation: Update
subresource: status
time: "2024-08-16T08:57:10Z"
name: flight-low
resourceVersion: "124335430"
uid: 004e9b89-912f-47e8-a55d-8f07c93cf9c1
spec:
reclaimable: false
weight: 1
status:
allocated:
cpu: "0"
memory: "0"
reservation: {}
running: 1
state: Open
kind: List
metadata:
resourceVersion: ""
selfLink: ""
Could you answer the question?
Please describe your problem in detail
https://github.com/volcano-sh/volcano/blob/master/docs/design/capacity-scheduling.md
In Story 2, the article says: "Administrator can create two queues with deserved capacity configured and the deserved resource can be lent to jobs in another queue." However, I didn't find the actual code logic in the Volcano project.
Could you point me to the specific code for this feature?
Any other relevant information
No response