cpu resources of gpu-large and gpu-small presets #322
Comments
https://docs.python.org/3/library/os.html#os.cpu_count
8 and 32 are the host CPU counts. Our configuration depends on the cloud provider's instance configuration.
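A minimal sketch (assuming a Linux container, as on the platform's nodes) of why `cpu_count()` reports the host's CPU count rather than the preset's limit: the preset's CPU allowance is typically enforced through a cgroup CPU quota and/or CPU affinity, which `os.cpu_count()` does not consult. The cgroup v1 paths below are an assumption and may differ on the actual nodes (cgroup v2 exposes `/sys/fs/cgroup/cpu.max` instead).

```python
import multiprocessing
import os

# Host-level logical CPU count: reports all CPUs of the node,
# not the CPUs granted to this job's preset.
print("cpu_count:", os.cpu_count(), multiprocessing.cpu_count())

# CPUs this process is actually allowed to run on (reflects pinning,
# e.g. exclusive cores, but not a fractional CPU quota).
print("affinity:", len(os.sched_getaffinity(0)))

def cgroup_cpu_limit():
    """Effective CPU limit derived from the CFS quota (cgroup v1 paths)."""
    try:
        with open("/sys/fs/cgroup/cpu/cpu.cfs_quota_us") as f:
            quota = int(f.read())
        with open("/sys/fs/cgroup/cpu/cpu.cfs_period_us") as f:
            period = int(f.read())
        if quota > 0:
            return quota / period
    except OSError:
        pass
    return None  # no quota set or paths not available on this system

print("cgroup CPU limit:", cgroup_cpu_limit())
```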
@atselousov, you are right. @mariyadavydova, do we need to clarify this somewhere in the documentation? @dalazx, we could use integers for CPU requests/limits. In that case exclusive cores are assigned to the task instead of shared ones: https://kubernetes.io/docs/tasks/administer-cluster/cpu-management-policies/ . This could be useful, e.g., for some linear computation (cpu=1). Do we need this functionality?
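For reference, the Kubernetes mechanism behind that link works roughly as follows: with the kubelet running the static CPU manager policy, a pod in the Guaranteed QoS class (requests equal to limits) that requests an integer number of CPUs is granted exclusive cores. A hypothetical sketch of such a pod spec; the name, image, and memory values are placeholders, not the platform's actual job template:

```yaml
# Requires the kubelet to be started with --cpu-manager-policy=static.
apiVersion: v1
kind: Pod
metadata:
  name: exclusive-cpu-job        # hypothetical name
spec:
  containers:
  - name: worker
    image: python:3.9            # placeholder image
    resources:
      requests:
        cpu: "1"                 # integer CPU -> eligible for an exclusive core
        memory: "4Gi"
      limits:
        cpu: "1"                 # must equal the request for Guaranteed QoS
        memory: "4Gi"
```

With a fractional CPU request (e.g. `cpu: 1500m`), the container stays in the shared pool even under the static policy.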
@shagren we could potentially expose such an option, but I am not sure it is actually needed.
Does a job with the `gpu-large` preset have fewer `cpu` resources than a job with the `gpu-small` preset? It looks so. The output of `multiprocessing.cpu_count()` shows only 8 available `cpu` in the system for the `gpu-large` preset, but for `gpu-small` it shows 32 available `cpu`.

There are screenshots: one for the `gpu-large` preset and one for the `gpu-small` preset (not reproduced here).

Why are the outputs different for these presets? What does the `#CPU` column mean in the output of the command `neuro config show`? I thought that column shows how many `cpu` resources were set (e.g. like here: `neuro submit --cpu 2 ...`), but as I can see it does not.