Doesn't detect/use GPU #50
When you start `./api_inference_server.py`:

```python
device = torch.device('cpu')  # default to cpu
use_gpu = torch.cuda.is_available()
print("Detecting GPU...")
if use_gpu:
    print("GPU detected!")
    device = torch.device('cuda')
    print("Using GPU? (Y/N)")
    if input().lower() == 'y':
        print("Using GPU...")
    else:
        print("Using CPU...")
        use_gpu = False
        device = torch.device('cpu')
```

But from what I'm seeing, I think the GPU didn't get detected at all. This can happen when there is a driver problem. Run:

```
nvidia-smi
nvcc --version
```

Both of these should output something that doesn't look like an error. |
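If you'd rather check for those two CLIs from Python instead of a shell, here's a minimal sketch (the function name `driver_tools_present` is just for illustration; it only checks that the tools are on PATH, not that they run cleanly):

```python
import shutil

def driver_tools_present():
    """Report whether the NVIDIA driver/toolkit CLIs are on PATH."""
    return {tool: shutil.which(tool) is not None
            for tool in ("nvidia-smi", "nvcc")}

# On a machine with a healthy driver + CUDA toolkit install,
# both values should be True.
print(driver_tools_present())
```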
It's legit that you have a driver, but please check whether your driver is the correct version, and also check your PyTorch installation to confirm you installed it with GPU support. |
The newest Nvidia drivers (536.67) probably don't support the CUDA version I have. I'll try to see if rolling back the drivers fixes it. |
It still doesn't detect the GPU after rolling back to the previous drivers and installing PyTorch with CUDA 11.8. |
This may be a little bit troublesome, but run this in a Python shell:

```python
# you are in a python shell
import torch
print(torch.cuda.is_available())
```

Does it return True? |
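Going one step beyond `is_available()`, a sketch of a slightly fuller diagnostic that also shows whether the installed wheel was built with CUDA at all (the import is guarded so the snippet also runs in an environment without PyTorch):

```python
# Diagnostic sketch: torch may or may not be installed here.
try:
    import torch
    print("torch version:", torch.__version__)
    # None means a CPU-only wheel; a string like "11.8" means a CUDA build.
    print("built with CUDA:", torch.version.cuda)
    print("cuda available:", torch.cuda.is_available())
except ImportError:
    print("torch is not installed in this environment")
```

If `built with CUDA` prints `None`, the wheel itself is CPU-only and `is_available()` will always be False, regardless of drivers.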
```
(venv) PS E:\etc\AIwaifu> python
Warning: Type "help", "copyright", "credits" or "license" for more information.
```

I also double checked that torch is installed:

```
(venv) PS E:\etc\AIwaifu> pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
```
|
Try using the Poetry installation method provided in the README instead. |
Most of the package installations worked fine, but pyopenjtalk caused errors:

```
(E:\etc\AIwaifu\envs) PS E:\etc\AIwaifu> poetry install
Package operations: 1 install, 0 updates, 0 removals

  • Installing pyopenjtalk (0.3.0)

  ChefBuildError

  Backend subprocess exited when trying to invoke get_requires_for_build_wheel

  Traceback (most recent call last):
    at envs\lib\site-packages\poetry\installation\chef.py:147 in _prepare

Note: This error originates from the build backend, and is likely not a problem
with poetry but with pyopenjtalk (0.3.0) not supporting PEP 517 builds. You can
verify this by running 'pip wheel --use-pep517 "pyopenjtalk (==0.3.0)"'.

(E:\etc\AIwaifu\envs) PS E:\etc\AIwaifu>
```
|
Try `poetry run pip install --no-use-pep517 pyopenjtalk==0.3.0` |
That doesn't work either:

```
× python setup.py egg_info did not run successfully.
note: This error originates from a subprocess, and is likely not a problem with pip.
× Encountered error while generating package metadata.
note: This is an issue with the package mentioned above, not pip.
```
|
I'll try to solve this on my end; please bear with it until the next release. |
I have this GPU, so I'll take a look and see if I can reproduce the issue. |
Thanks! Please notify me on Discord if you've found the problem but don't have enough time to fix it! |
I've got my 4080 16GB delivered now. Everything works fine except it's using my CPU instead of the GPU.