@hiwangzhihui To recursively check the parent tree, please set `enableCheckParentQuota` to `true` in the `pluginArgs`.
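A minimal sketch of where that flag would live in the koord-scheduler configuration. The surrounding layout follows the standard `KubeSchedulerConfiguration` `pluginConfig` shape; the `ElasticQuotaArgs` kind name and API group are assumptions and may differ by koordinator version:

```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta2
kind: KubeSchedulerConfiguration
profiles:
  - schedulerName: koord-scheduler
    pluginConfig:
      - name: ElasticQuota
        args:
          # kind/apiVersion here are assumed; check your koordinator release
          apiVersion: kubescheduler.config.k8s.io/v1beta2
          kind: ElasticQuotaArgs
          enableCheckParentQuota: true
```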
What happened:
The parent quota's used resources exceed its max.
The child quota's lent resources cannot be reclaimed.
What you expected to happen:
How to reproduce it (as minimally and precisely as possible):
create ns

```shell
kubectl create ns namespace1
kubectl create ns namespace2
```

create queue
```yaml
apiVersion: scheduling.sigs.k8s.io/v1alpha1
kind: ElasticQuota
metadata:
  name: root
  labels:
    quota.scheduling.koordinator.sh/is-parent: "true"
    quota.scheduling.koordinator.sh/allow-lent-resource: "false"
spec:
  max:
    cpu: 2
    memory: 2Gi
  min:
    cpu: 2
    memory: 2Gi
---
apiVersion: scheduling.sigs.k8s.io/v1alpha1
kind: ElasticQuota
metadata:
  name: a
  namespace: namespace1
  labels:
    quota.scheduling.koordinator.sh/parent: "root"
    quota.scheduling.koordinator.sh/is-parent: "false"
    quota.scheduling.koordinator.sh/allow-lent-resource: "true"
  annotations:
    quota.scheduling.koordinator.sh/shared-weight: '{"cpu":"1","memory":"1Gi"}'
spec:
  max:
    cpu: 2
    memory: 2Gi
  min:
    cpu: 1
    memory: 1Gi
---
apiVersion: scheduling.sigs.k8s.io/v1alpha1
kind: ElasticQuota
metadata:
  name: b
  namespace: namespace2
  labels:
    quota.scheduling.koordinator.sh/parent: "root"
    quota.scheduling.koordinator.sh/is-parent: "false"
    quota.scheduling.koordinator.sh/allow-lent-resource: "true"
  annotations:
    quota.scheduling.koordinator.sh/shared-weight: '{"cpu":"1","memory":"1Gi"}'
spec:
  max:
    cpu: 2
    memory: 2Gi
  min:
    cpu: 1
    memory: 1Gi
```
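For context on the reported overflow, a quick sketch of the arithmetic in the quotas above (plain Python, not koordinator code): each child's max equals the root's max, so the two children together can claim twice what root allows.

```python
# Sanity-check the quota tree above: each child individually fits within
# its own max, yet the children's combined max exceeds the parent's max.
parent_max = {"cpu": 2, "memory": 2}            # root (memory in Gi)
child_max = {
    "a": {"cpu": 2, "memory": 2},               # quota a
    "b": {"cpu": 2, "memory": 2},               # quota b
}

# Sum each resource over the children and compare against the parent.
total = {res: sum(c[res] for c in child_max.values()) for res in parent_max}
overcommitted = {res: total[res] > parent_max[res] for res in parent_max}

print(total)          # {'cpu': 4, 'memory': 4}
print(overcommitted)  # {'cpu': True, 'memory': True}
```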
create pod

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-a-1
  namespace: namespace1
  labels:
    quota.scheduling.koordinator.sh/name: "a"
    koordinator.sh/qosClass: BE
spec:
  schedulerName: koord-scheduler
  priorityClassName: koord-batch
  containers:
    - command:
        - sleep
        - 365d
      image: nginx
      imagePullPolicy: IfNotPresent
      name: curlimage
      resources:
        limits:
          cpu: 1
          memory: 1Gi
        requests:
          cpu: 1
          memory: 1Gi
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
  restartPolicy: Always
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-a-2
  namespace: namespace1
  labels:
    quota.scheduling.koordinator.sh/name: "a"
    koordinator.sh/qosClass: BE
spec:
  schedulerName: koord-scheduler
  priorityClassName: koord-batch
  containers:
    - command:
        - sleep
        - 365d
      image: nginx
      imagePullPolicy: IfNotPresent
      name: curlimage
      resources:
        limits:
          cpu: 1
          memory: 1Gi
        requests:
          cpu: 1
          memory: 1Gi
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
  restartPolicy: Always
```
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-b-1
  namespace: namespace2
  labels:
    quota.scheduling.koordinator.sh/name: "b"
    koordinator.sh/qosClass: LS
spec:
  priorityClassName: koord-prod
  schedulerName: koord-scheduler
  containers:
    - command:
        - sleep
        - 365d
      image: nginx
      imagePullPolicy: IfNotPresent
      name: curlimage
      resources:
        limits:
          cpu: 1
          memory: 1Gi
        requests:
          cpu: 1
          memory: 1Gi
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
  restartPolicy: Always
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-b-2
  namespace: namespace2
  labels:
    quota.scheduling.koordinator.sh/name: "b"
    koordinator.sh/qosClass: LS
spec:
  priorityClassName: koord-prod
  schedulerName: koord-scheduler
  containers:
    - command:
        - sleep
        - 365d
      image: nginx
      imagePullPolicy: IfNotPresent
      name: curlimage
      resources:
        limits:
          cpu: 1
          memory: 1Gi
        requests:
          cpu: 1
          memory: 1Gi
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
  restartPolicy: Always
```
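The four pods above each request 1 CPU, so quotas `a` and `b` each stay within their own max of 2 CPU while root's usage reaches 4 CPU against a max of 2. A minimal sketch (plain Python, illustrative only, not koordinator's implementation) of the hierarchical admission check that `enableCheckParentQuota` is meant to enforce:

```python
# Illustrative hierarchical quota admission: a pod is admitted only if
# every quota on the path up to the root still has headroom (CPU only).
quotas = {
    "root": {"max": 2, "parent": None,   "used": 0},
    "a":    {"max": 2, "parent": "root", "used": 0},
    "b":    {"max": 2, "parent": "root", "used": 0},
}

def try_admit(quota_name, request):
    """Admit `request` CPUs into `quota_name` only if it fits at every level."""
    # First pass: verify headroom at this quota and every ancestor.
    name = quota_name
    while name is not None:
        q = quotas[name]
        if q["used"] + request > q["max"]:
            return False
        name = q["parent"]
    # Second pass: charge the usage at every level.
    name = quota_name
    while name is not None:
        quotas[name]["used"] += request
        name = quotas[name]["parent"]
    return True

print(try_admit("a", 1))  # True
print(try_admit("a", 1))  # True  (quota a is now full, and so is root)
print(try_admit("b", 1))  # False (b has room, but the parent root does not)
```

Without the walk up to the parent, the third request would be admitted because quota `b` alone still has headroom, which matches the overflow reported above.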
Anything else we need to know?:
Environment:
- Kubernetes version (use `kubectl version`): v1.21.0