What happened:
In the namespace huang, I set a ResourceQuota with both requests and limits at 1 CPU and 1Gi. I then created 3 Reservation objects, each with requests and limits of 500m CPU and 800Mi memory. All 3 Reservation objects came up as RUNNING. Is this result expected?
Question 1: When a Reservation object becomes Available, does that mean it is actually holding the resources?
Question 2: Is there any relationship between Reservation objects and the namespace quota?
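For reference, a quick sanity check of the arithmetic (a sketch only; the numbers are taken from the manifests below, and the conclusion assumes ResourceQuota only meters Pod objects, not Reservation objects):

```python
# Quota in the huang namespace (from quota.yaml): 1 CPU / 1Gi for both requests and limits.
QUOTA_CPU_MILLI = 1000         # "1" CPU = 1000m
QUOTA_MEM_BYTES = 1 * 1024**3  # "1Gi"

# Each Reservation's pod template asks for 500m CPU / 800Mi memory.
RES_CPU_MILLI = 500
RES_MEM_BYTES = 800 * 1024**2  # 800Mi
COUNT = 3

total_cpu = COUNT * RES_CPU_MILLI  # 1500m
total_mem = COUNT * RES_MEM_BYTES  # 2400Mi

# If ResourceQuota counted Reservations the way it counts Pods, the second
# reservation would already exceed the memory quota (1600Mi > 1024Mi) and the
# third would exceed the CPU quota (1500m > 1000m).
print(total_cpu > QUOTA_CPU_MILLI)  # True
print(total_mem > QUOTA_MEM_BYTES)  # True
```

So if all three Reservations ran, it suggests the quota never saw them; the real accounting presumably happens only when owner Pods are created.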
------------------------------------quota.yaml--------------------------------------------------------------
apiVersion: v1
kind: ResourceQuota
metadata:
  name: huang-quota
  namespace: huang
spec:
  hard:
    requests.cpu: "1"
    requests.memory: "1Gi"
    limits.cpu: "1"
    limits.memory: "1Gi"
--------------------------------------------reservation-demo.yaml----------------------------
apiVersion: scheduling.koordinator.sh/v1alpha1
kind: Reservation
metadata:
  name: reservation-huang
spec:
  allocateOnce: false
  template: # set resource requirements
    namespace: huang
    spec:
      containers:
      - name: stress
        image: polinux/stress
        imagePullPolicy: IfNotPresent
        resources: # reserve 500m cpu and 800Mi memory
          requests:
            cpu: 500m
            memory: 800Mi
          limits:
            cpu: 500m
            memory: 800Mi
      schedulerName: koord-scheduler # use koord-scheduler
  owners: # set the owner specifications
  - object: # owner pod huang/pod-huang-0
      name: pod-huang-0
      namespace: huang
  - object: # owner pod huang/pod-huang-1
      name: pod-huang-1
      namespace: huang
  ttl: 1h # set the TTL; the reservation expires 1 hour later
--------------------------------------------reservation-demo2.yaml----------------------------------------
apiVersion: scheduling.koordinator.sh/v1alpha1
kind: Reservation
metadata:
  name: reservation-huang2
spec:
  allocateOnce: false
  template: # set resource requirements
    namespace: huang
    spec:
      containers:
      - name: stress
        image: polinux/stress
        imagePullPolicy: IfNotPresent
        resources: # reserve 500m cpu and 800Mi memory
          requests:
            cpu: 500m
            memory: 800Mi
          limits:
            cpu: 500m
            memory: 800Mi
      schedulerName: koord-scheduler # use koord-scheduler
  owners: # set the owner specifications
  - object: # owner pod huang/pod-huang2-0
      name: pod-huang2-0
      namespace: huang
  - object: # owner pod huang/pod-huang2-1
      name: pod-huang2-1
      namespace: huang
  ttl: 1h # set the TTL; the reservation expires 1 hour later
-------------------------------------------------reservation-demo3.yaml----------------------------------------------------
apiVersion: scheduling.koordinator.sh/v1alpha1
kind: Reservation
metadata:
  name: reservation-huang3
spec:
  allocateOnce: false
  template: # set resource requirements
    namespace: huang
    spec:
      containers:
      - name: stress
        image: polinux/stress
        imagePullPolicy: IfNotPresent
        resources: # reserve 500m cpu and 800Mi memory
          requests:
            cpu: 500m
            memory: 800Mi
          limits:
            cpu: 500m
            memory: 800Mi
      schedulerName: koord-scheduler # use koord-scheduler
  owners: # set the owner specifications
  - object: # owner pod huang/pod-huang3-0
      name: pod-huang3-0
      namespace: huang
  - object: # owner pod huang/pod-huang3-1
      name: pod-huang3-1
      namespace: huang
  ttl: 1h # set the TTL; the reservation expires 1 hour later
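To reproduce, something like the following sequence should work (a sketch, assuming a cluster with koord-scheduler installed and the file names from the listing above):

```shell
# Apply the namespace quota and the three reservations.
kubectl apply -f quota.yaml
kubectl apply -f reservation-demo.yaml -f reservation-demo2.yaml -f reservation-demo3.yaml

# Check the reservation phases.
kubectl get reservations

# Check what the quota actually accounts for in the huang namespace.
kubectl describe resourcequota huang-quota -n huang
```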
What you expected to happen:
Environment:
Anything else we need to know: