
Using the GPU in Docker #28

Open
sooperset opened this issue May 23, 2019 · 2 comments

Comments

@sooperset

Hello,
I used nvidia-docker and attached the GPU devices with
--device /dev/nvidia0:/dev/nvidia0 --device /dev/nvidiactl:/dev/nvidiactl --device /dev/nvidia-uvm:/dev/nvidia-uvm
then started the container, but when I run the command

python -c "from tensorflow.python.client import device_lib; local_device_protos = device_lib.list_local_devices(); print([x.name for x in local_device_protos if x.device_type == 'GPU'])"
I get the following output:

2019-05-23 23:26:34.907318: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-05-23 23:26:34.937638: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2399875000 Hz
2019-05-23 23:26:34.943818: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x55c7c3ae6970 executing computations on platform Host. Devices:
2019-05-23 23:26:34.943838: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): ,

It doesn't find the GPU and only shows the CPU, so the example code also trains on the CPU.

When I used the tensorflow/tensorflow:latest-gpu-py3 Docker image, the output included

name: GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.582
pciBusID: 0000:02:00.0
totalMemory: 10.92GiB freeMemory: 10.76GiB
2019-05-23 14:25:30.391164: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 1 with properties:
name: GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.582
pciBusID: 0000:03:00.0
totalMemory: 10.92GiB freeMemory: 10.76GiB

and I could see the GPUs being detected and working normally.

Is there any other way to use the GPU in Docker?
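For reference, a minimal sketch of the usual ways to expose the GPU without binding device nodes by hand, assuming the NVIDIA Container Toolkit (or the older nvidia-docker2 runtime) is installed on the host; the nvidia/cuda:10.0-base image tag is only illustrative:

# Docker 19.03+ with the --gpus flag (requires nvidia-container-toolkit on the host):
docker run --rm --gpus all nvidia/cuda:10.0-base nvidia-smi

# Older nvidia-docker2 setups, selecting the nvidia runtime explicitly:
docker run --rm --runtime=nvidia nvidia/cuda:10.0-base nvidia-smi

If nvidia-smi lists the GPUs here, launching the TensorFlow container with the same flags should make them visible inside it as well.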

@sooperset
Author

import torch

torch.backends.cudnn.enabled returns True,

but torch.cuda.is_available() returns False.
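A quick way to narrow this down (a sketch only; the pytorch/pytorch:latest image tag is illustrative) is to run the same probe in a container that was started with GPU access:

# If this prints True plus a CUDA version, the container can see the GPU and
# the problem is how the original container was launched.
docker run --rm --gpus all pytorch/pytorch:latest \
  python -c "import torch; print(torch.cuda.is_available(), torch.version.cuda)"

If it still prints False here, the host-side driver or NVIDIA container runtime setup is the more likely culprit.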

@HERIUN

HERIUN commented Oct 8, 2020

I'm not sure either, but as far as I know, if the OS on the PC you're using is Windows, it isn't possible to use the GPU from Docker. I think it works if you're on Ubuntu.
