[question] Reservation objects and namespace quotas #2108

Closed
zj619 opened this issue Jun 18, 2024 · 3 comments
Labels
kind/question (Support request or question relating to Koordinator), lifecycle/stale

Comments


zj619 commented Jun 18, 2024

What happened:
In namespace huang I set a ResourceQuota capping both requests and limits at 1 CPU and 1Gi of memory, then created 3 Reservation objects, each with requests and limits of 500m CPU and 800Mi memory. All 3 Reservations came up as RUNNING. Is this result expected?
Question 1: Once a Reservation becomes Available, does it actually hold the reserved resources?
Question 2: Is there any relationship between Reservation objects and the namespace quota?


------------------------------------ quota.yaml ------------------------------------
apiVersion: v1
kind: ResourceQuota
metadata:
  name: huang-quota
  namespace: huang
spec:
  hard:
    requests.cpu: "1"
    requests.memory: "1Gi"
    limits.cpu: "1"
    limits.memory: "1Gi"

------------------------------------ reservation-demo.yaml ------------------------------------
apiVersion: scheduling.koordinator.sh/v1alpha1
kind: Reservation
metadata:
  name: reservation-huang
spec:
  allocateOnce: false
  template: # set resource requirements
    namespace: huang # NOTE: misplaced; namespace belongs under template.metadata (see the reply below)
    spec:
      containers:
        - name: stress
          image: polinux/stress
          imagePullPolicy: IfNotPresent
          resources: # reserve 500m cpu and 800Mi memory
            requests:
              cpu: 500m
              memory: 800Mi
            limits:
              cpu: 500m
              memory: 800Mi
      schedulerName: koord-scheduler # use koord-scheduler
  owners: # set the owner specifications
    - object: # owner pod whose name is huang/pod-huang-0
        name: pod-huang-0
        namespace: huang
    - object: # owner pod whose name is huang/pod-huang-1
        name: pod-huang-1
        namespace: huang
  ttl: 1h # set the TTL; the reservation expires 1 hour later

------------------------------------ reservation-demo2.yaml ------------------------------------
apiVersion: scheduling.koordinator.sh/v1alpha1
kind: Reservation
metadata:
  name: reservation-huang2
spec:
  allocateOnce: false
  template: # set resource requirements
    namespace: huang # same misplaced namespace as above
    spec:
      containers:
        - name: stress
          image: polinux/stress
          imagePullPolicy: IfNotPresent
          resources: # reserve 500m cpu and 800Mi memory
            requests:
              cpu: 500m
              memory: 800Mi
            limits:
              cpu: 500m
              memory: 800Mi
      schedulerName: koord-scheduler # use koord-scheduler
  owners: # set the owner specifications
    - object: # owner pod whose name is huang/pod-huang2-0
        name: pod-huang2-0
        namespace: huang
    - object: # owner pod whose name is huang/pod-huang2-1
        name: pod-huang2-1
        namespace: huang
  ttl: 1h # set the TTL; the reservation expires 1 hour later

------------------------------------ reservation-demo3.yaml ------------------------------------
apiVersion: scheduling.koordinator.sh/v1alpha1
kind: Reservation
metadata:
  name: reservation-huang3
spec:
  allocateOnce: false
  template: # set resource requirements
    namespace: huang # same misplaced namespace as above
    spec:
      containers:
        - name: stress
          image: polinux/stress
          imagePullPolicy: IfNotPresent
          resources: # reserve 500m cpu and 800Mi memory
            requests:
              cpu: 500m
              memory: 800Mi
            limits:
              cpu: 500m
              memory: 800Mi
      schedulerName: koord-scheduler # use koord-scheduler
  owners: # set the owner specifications
    - object: # owner pod whose name is huang/pod-huang3-0
        name: pod-huang3-0
        namespace: huang
    - object: # owner pod whose name is huang/pod-huang3-1
        name: pod-huang3-1
        namespace: huang
  ttl: 1h # set the TTL; the reservation expires 1 hour later

What you expected to happen:

Environment:

  • Koordinator version: v1.4.0
  • Kubernetes version (use kubectl version): v1.22.3
  • docker/containerd version: containerd 1.5.0
  • OS (e.g: cat /etc/os-release): Ubuntu 20.04.4 LTS
  • Kernel (e.g. uname -a): Linux 5.10.112-11.al8.x86_64 #1 SMP Tue May 24 16:05:50 CST 2022 x86_64 x86_64 x86_64 GNU/Linux

Anything else we need to know:

zj619 added the kind/question label on Jun 18, 2024
zj619 changed the title from "[question]" to "[question] Reservation objects and namespace quotas" on Jun 18, 2024
saintube (Member) commented:

@zj619 The Reservation object is non-namespaced. However, the fake pod the scheduler builds for it is generated from the template, which can carry a namespace, so some namespace-scoped scheduling features can still take effect. Regarding your questions, there are two points to note:

  1. The namespace in reservation.spec.template is misplaced: the namespace field must sit under the template's metadata field (see the corrected sketch after this list). Because the Reservation CRD does not strictly validate these fields, the misplaced namespace is still accepted on submission, but the fake pod that is actually generated does not take on the specified namespace.
  2. A Reservation reserves resources within the scheduling flow by generating a fake pod, but that fake pod is invisible to external components, so flows such as webhooks do not process Reservations by default.
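For reference, here is a minimal sketch of the corrected manifest (reusing the names from reservation-demo.yaml above); the only change is that namespace moves under template.metadata, so the fake pod generated by the scheduler actually inherits it:

apiVersion: scheduling.koordinator.sh/v1alpha1
kind: Reservation
metadata:
  name: reservation-huang
spec:
  allocateOnce: false
  template:
    metadata:
      namespace: huang # correct placement: template.metadata.namespace
    spec:
      containers:
        - name: stress
          image: polinux/stress
          imagePullPolicy: IfNotPresent
          resources:
            requests:
              cpu: 500m
              memory: 800Mi
            limits:
              cpu: 500m
              memory: 800Mi
      schedulerName: koord-scheduler
  owners:
    - object:
        name: pod-huang-0
        namespace: huang
    - object:
        name: pod-huang-1
        namespace: huang
  ttl: 1h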

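On question 2: since the Reservation is cluster-scoped and its fake pod never reaches the API server, the huang ResourceQuota does not count the Reservations themselves; quota is only charged when a real owner pod is created in the namespace. Below is a minimal, hypothetical sketch of such an owner pod, matching the owner spec in reservation-demo.yaml; its requests and limits are what huang-quota admission actually counts:

apiVersion: v1
kind: Pod
metadata:
  name: pod-huang-0 # matches an owner object of reservation-huang
  namespace: huang # a real namespaced pod, so huang-quota applies to it
spec:
  schedulerName: koord-scheduler # let koord-scheduler match it against the reservation
  containers:
    - name: stress
      image: polinux/stress
      resources:
        requests:
          cpu: 500m
          memory: 800Mi
        limits: # the quota also caps limits, so they must be set
          cpu: 500m
          memory: 800Mi

Note that with huang-quota set to 1 CPU / 1Gi, only one such pod can be admitted (two would need 1600Mi of memory), even though three Reservations are Available.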
stale bot commented Sep 16, 2024

This issue has been automatically marked as stale because it has not had recent activity.
This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, the issue is closed

You can:
  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Close this issue or PR with /close

Thank you for your contributions.

stale bot commented Oct 16, 2024

This issue has been automatically closed because it has not had recent activity.
This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, the issue is closed

You can:
  • Reopen this PR with /reopen

Thank you for your contributions.

stale bot closed this as completed on Oct 16, 2024