
Uses "--no-cuda" and implements device-agnostic code #81

Open · wants to merge 6 commits into master

Conversation


@nkcr commented Nov 12, 2018

This pull request adds two contributions:

  1. Changes the input argument `--cuda` to `--no-cuda`, with consistent behavior.
  2. Follows PyTorch best practice for device-agnostic code, which makes the code cleaner and makes runs compatible between CPU and GPU (e.g., loading a model trained on a GPU onto a CPU, and vice versa).

In summary, I added a new utility function `init_device()` in `utils.py`, which adds a `device` property to `args`. This property is a `torch.device` object that indicates where tensors and models should reside. We can then write code without knowing whether the computation runs on the CPU or the GPU. For example, with a tensor: `data = torch.rand(10, device=args.device)`, or a model: `model = model.to(args.device)`. This completely removes the need to write conditional blocks.

If `--no-cuda` is passed (or no CUDA device is available), the device is set to the CPU.
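
For illustration, here is a minimal sketch of what such a utility could look like (the actual `init_device()` in this PR may differ in its details; this simply follows the behavior described above):

```python
import torch


def init_device(args):
    """Attach a torch.device to args: CUDA if available and not
    disabled via --no-cuda, otherwise CPU."""
    use_cuda = not args.no_cuda and torch.cuda.is_available()
    args.device = torch.device("cuda" if use_cuda else "cpu")
```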

This version was checked against the experiments, with consistent results.
Related to #23 and #6

This utility method sets the `args.device` parameter, which indicates whether the computation will run on the GPU using CUDA or on the CPU. To choose between the two, the method checks host CUDA availability and the `args.no_cuda` parameter.
With `args.device`, one can initialize tensors or models without knowing whether they should use the CPU or the GPU. For example, initializing a tensor: `torch.rand(10, device=args.device)`, or a model: `model = model.to(args.device)`.

If CUDA is used and `args.seed` exists, the CUDA seed is manually set.
Using `args.device` removes the need for `if` blocks, making the code lighter.
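
A sketch of how this might look in practice (assuming the `args.device` and `args.seed` attributes described above; `torch.load`'s `map_location` argument handles the CPU/GPU checkpoint portability mentioned in the PR description, and `checkpoint.pt` is a placeholder path):

```python
import argparse

import torch

# Hypothetical args, as init_device() would produce them.
args = argparse.Namespace(device=torch.device("cpu"), seed=1)

# Device-agnostic initialization: no CPU/GPU conditional blocks.
data = torch.rand(10, device=args.device)
model = torch.nn.Linear(10, 2).to(args.device)

# Mirror the commit's seed handling: set the CUDA seed only when CUDA is in use.
if args.device.type == "cuda" and args.seed is not None:
    torch.cuda.manual_seed(args.seed)

# Load a checkpoint trained on either CPU or GPU onto the current device.
state_dict = torch.load("checkpoint.pt", map_location=args.device)
model.load_state_dict(state_dict)
```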