Running the latest Docker container with the NVIDIA container runtime, `nvidia-smi` returns successfully and shows the graphics card as available and ready.
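For reference, the GPU check that succeeds looks roughly like this; the image tag `rhasspy/larynx` is my assumption, so substitute whatever tag you pulled:

```sh
# Assumed image tag; substitute whatever you actually pulled.
# --gpus all exposes the GPU via the NVIDIA container runtime, and
# --entrypoint overrides the image's default entrypoint so nvidia-smi runs directly.
docker run --rm --gpus all --entrypoint nvidia-smi rhasspy/larynx
```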
You can run larynx from the command line inside of the container without error.
But as soon as you pass the `--cuda` flag, it fails:
```
(.venv) root@larynx-dd4858485-t9dj2:/home/larynx/app/larynx# python -m larynx --cuda
Traceback (most recent call last):
  File "/usr/lib/python3.9/runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.9/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/larynx/app/larynx/__main__.py", line 750, in <module>
    main()
  File "/home/larynx/app/larynx/__main__.py", line 66, in main
    import torch
ModuleNotFoundError: No module named 'torch'
```
Similar errors occur if you attempt to start the container with the `--cuda` flag as an additional argument.
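For example, something like this fails with the same traceback (again, the image tag is my assumption):

```sh
# --cuda is passed through as an argument to the container's larynx entrypoint.
docker run --rm --gpus all rhasspy/larynx --cuda
```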
By exec-ing into the container and using the venv that already exists, I was able to install torch and then run the command successfully.
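For anyone hitting the same thing, the workaround looked roughly like this; the venv path is inferred from the `(.venv)` prompt above, so treat it as an assumption:

```sh
# Exec into the running container (a Kubernetes pod in my case; plain
# `docker exec -it <container> /bin/bash` works the same way).
kubectl exec -it larynx-dd4858485-t9dj2 -- /bin/bash

# Inside the container: activate the venv that already exists.
# The path is an assumption based on the (.venv) prompt and working directory.
source /home/larynx/app/.venv/bin/activate

# Install torch manually; after this, the --cuda flag works.
pip install torch
python -m larynx --cuda
```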
I believe the Docker build has an issue here: https://github.com/rhasspy/larynx/blob/master/Dockerfile#L42. My knowledge of Python is limited, but the intent appears to be to install a precompiled version of torch that you are providing; it does not appear to actually make it into the container, though.
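I haven't dug into the build itself, but the pattern that line seems to be going for is something like the sketch below; the wheel filename and paths are placeholders, not the repo's actual values:

```dockerfile
# Sketch only: install a prebuilt torch wheel into the image's venv at build
# time. The COPY source and venv path are placeholders, not the repo's actual values.
COPY download/torch-*.whl /tmp/
RUN /home/larynx/app/.venv/bin/pip install --no-cache-dir /tmp/torch-*.whl \
    && rm /tmp/torch-*.whl
```

If that wheel never ends up installed into the final image's venv (for example, because the install happens in a different build stage or a different environment), the venv is left without torch, which would match the ModuleNotFoundError above.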