CTGAN - cuda=TRUE; multiple GPU training #1500
Labels
question
General question about the software
resolution:duplicate
This issue or pull request already exists
I am wondering whether the CTGAN model allows for training/sampling using multiple GPUs on a node.
I am training a CTGAN model (doing hyper-parameter optimization) on the following HPC/GPU cluster.
https://docs.scinet.utoronto.ca/index.php/Mist
I have access to up to 4 GPUs per node. If possible, I would like to exploit the multiple available GPUs in CTGAN.
I am using PyTorch backend.
I have already moved from CPU training (cuda=False) to single-GPU training (cuda=True).
With cuda=True, will sdv/CTGAN automatically recognize the multiple available GPUs if I request them on the cluster?
Or is there another argument (or additional syntax) that needs to be used to enable multi-GPU training/sampling?
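For context, the SDV/CTGAN API does not seem to expose a multi-GPU flag, so I assume any multi-GPU use would have to happen at the PyTorch level. In plain PyTorch, batch-level data parallelism across the GPUs on one node is typically done by wrapping a module, roughly like this (the module below is a hypothetical stand-in, since CTGAN's internal generator/discriminator are not exposed through the SDV API):

```python
import torch
import torch.nn as nn

# Toy module standing in for CTGAN's generator (hypothetical;
# CTGAN does not expose its internal networks through the SDV API)
net = nn.Linear(16, 4)

if torch.cuda.device_count() > 1:
    # nn.DataParallel replicates the module and splits each input
    # batch across all visible GPUs, gathering outputs on GPU 0
    net = nn.DataParallel(net)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
net = net.to(device)

x = torch.randn(8, 16, device=device)
out = net(x)
print(tuple(out.shape))  # (8, 4)
```

Is something along these lines happening (or possible) inside CTGAN, or is single-GPU training the only supported mode?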
Thanks in advance for the advice!
Chris