Is multi-gpu training available? #352
Comments
You can build graphs and explicitly assign portions to different "dml" devices in TF 1.15 (e.g. using tf.device), but coverage of the various multi-GPU and distributed-GPU features in TF1 isn't complete. We decided to focus our efforts on the TF2 pluggable device model first: if we do support multi-GPU more robustly, it will likely be in TF2, where the core runtime is built with additional non-CPU/CUDA plugin-based device backends in mind.
@jstoecker: Do you have a schedule for completing full TF2 support and, in turn, multi- or distributed-GPU support? Is it possible to have it by the end of 2022?
We intend to release a TF2 package in the next few months with functional coverage comparable to what's in TF1, but it's still too early to say whether multi-GPU support will follow immediately after. Adding @PatriceVignola to this discussion in case anything has changed or will change on prioritization here.
So, the support for TensorFlow 1.x seems to be almost complete. Is the multi_gpu_model function from tf.keras.utils working? This is something I'm looking forward to; any information on this would be extremely helpful.
Would something like this work?

```python
model = multi_gpu_model(model, gpus=2)
```
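For context on the snippet above: tf.keras.utils.multi_gpu_model was deprecated in TF 2.x in favor of tf.distribute.MirroredStrategy, which is the mechanism the planned TF2 package would most plausibly plug into. Whether a DirectML backend exposes its devices to MirroredStrategy is an assumption here; the maintainers above say multi-GPU support is not yet complete. A minimal sketch of the MirroredStrategy pattern:

```python
import tensorflow as tf

# Hypothetical sketch of the TF2-style replacement for multi_gpu_model:
# MirroredStrategy replicates model variables across all visible devices
# (falling back to a single device, e.g. CPU, when only one is present).
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(4, activation="relu", input_shape=(8,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
```

Building and compiling the model inside strategy.scope() is what makes its variables mirrored; model.fit then handles the per-replica gradient aggregation automatically.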