
Is there a way to reduce the HAMI-Core verbosity level for workloads? #544

Open · 4gt-104 opened this issue Oct 10, 2024 · 6 comments · May be fixed by #563

@4gt-104 (Contributor) commented Oct 10, 2024

Please provide an in-depth description of the question you have:

I reviewed HAMI-Core and confirmed that the verbosity level can be reduced by setting the LIBCUDA_LOG_LEVEL environment variable. However, configuring this for every GPU pod is tedious.
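
For context, this is roughly what the manual per-pod approach looks like (a minimal sketch; the image and the nvidia.com/gpu resource name are assumptions about the cluster setup):

```yaml
# Minimal sketch of setting the HAMi-Core log level by hand on a single pod.
# The image and the nvidia.com/gpu resource name are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-workload
spec:
  containers:
    - name: cuda
      image: nvidia/cuda:12.2.0-base-ubuntu22.04
      env:
        - name: LIBCUDA_LOG_LEVEL
          value: "0"          # errors only
      resources:
        limits:
          nvidia.com/gpu: 1
```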

Is there a way to set the verbosity level through HAMI’s Helm chart or scheduler configuration instead?

What do you think about this question?:
I believe users should have an easy way to configure this parameter, and it could be integrated with the existing admission webhook. Additionally, I recommend setting the default HAMI-Core verbosity level to 0, for behavior consistent with NVIDIA's device plugin.

Environment:

  • HAMi version: 2.4.0
  • Kubernetes version: 1.26.5
  • Others: -
@wawa0210 (Member)

There is no good solution at the moment.

Perhaps HAMi could read global configuration through the webhook and set this parameter there. I'm not sure whether that is feasible; it needs to be tried.

@archlitchi (Collaborator)

You can modify the mutatingWebhookConfiguration in HAMi to add the env LIBCUDA_LOG_LEVEL=0 to GPU pods. By the way, do you have a WeChat or LinkedIn account?
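
A rough sketch of what that mutation could emit, expressed as a JSONPatch (the container index and the absence of an existing env array are assumptions):

```yaml
# Hypothetical JSONPatch (shown as YAML) that the admission webhook could emit
# for a GPU pod; assumes the first container has no env array yet.
- op: add
  path: /spec/containers/0/env
  value:
    - name: LIBCUDA_LOG_LEVEL
      value: "0"
```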

@4gt-104 (Contributor, Author) commented Oct 11, 2024

@archlitchi thanks for the reply, I will try to implement setting LIBCUDA_LOG_LEVEL during admission.
Unfortunately, I don't have WeChat, but I do have a LinkedIn account.

@4gt-104 (Contributor, Author) commented Oct 12, 2024

I have reviewed the code and believe it can be easily implemented, but I have a concern regarding ArgoCD and GitOps. Overriding the pod spec, whether it's to modify the environment variable for visible CUDA devices or any other environment variable, would likely trigger an out-of-sync state.

@archlitchi what do you think?

@4gt-104 (Contributor, Author) commented Oct 13, 2024

I tested various scenarios, and the out-of-sync state is triggered only when bare pod manifests are applied via ArgoCD with environment variables already set that the admission webhook then modifies. Given this, I think the best approach is to add a note about this in the documentation and proceed with the environment-variable mutation.
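
For anyone who does run into drift, one possible workaround on the Argo CD side (just a sketch, not something HAMi would need to ship) is to tell Argo CD to ignore the path the webhook mutates:

```yaml
# Fragment of an Argo CD Application spec, shown as a possible user-side
# workaround (an assumption, not part of HAMi). The container index is a placeholder.
spec:
  ignoreDifferences:
    - kind: Pod            # core API group
      jsonPointers:
        - /spec/containers/0/env
```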

@archlitchi (Collaborator)

> I tested various scenarios, and the out-of-sync state is triggered only when bare pod manifests are applied via ArgoCD with environment variables already set that the admission webhook then modifies. Given this, I think the best approach is to add a note about this in the documentation and proceed with the environment-variable mutation.

I haven't tried submitting tasks with ArgoCD. I think we can add a field in values.yaml for the log level: 2 is the default (errors, warns, and msgs), 0 is errors only, 3 adds infos, and 4 adds debug output as well. We would only patch the LIBCUDA_LOG_LEVEL env into a container when the configured level is not 2.
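
Something like this in values.yaml, for example (the key name and location are only a suggestion, not the final API):

```yaml
# Hypothetical values.yaml entry; naming is a placeholder, not the final API.
# 0 = errors only, 2 = default (errors, warns, msgs), 3 = + infos, 4 = + debug.
devicePlugin:
  libcudaLogLevel: 0   # webhook patches LIBCUDA_LOG_LEVEL only when this is not 2
```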

4gt-104 added a commit to 4gt-104/HAMi that referenced this issue Oct 19, 2024
@4gt-104 4gt-104 linked a pull request Oct 19, 2024 that will close this issue
@wawa0210 wawa0210 added this to the v2.5 milestone Nov 15, 2024