Commit

multi GPU doc on PyTorch
albertz authored Nov 29, 2023
1 parent 832ffb7 commit 4bb98be
Showing 1 changed file with 17 additions and 4 deletions.
21 changes: 17 additions & 4 deletions docs/advanced/multi_gpu.rst
@@ -1,8 +1,22 @@
.. _multi_gpu:

==================
Multi GPU training
==================
See `our wiki for an overview of possible distributed training variants <https://github.com/rwth-i6/returnn/wiki/Distributed-training-experience>`__.


===============================
Multi GPU training with PyTorch
===============================

The main configuration option is ``torch_distributed``.

Example: just put ``torch_distributed = {}`` into the config. By default, this uses PyTorch ``DistributedDataParallel``.

See `our wiki on distributed PyTorch <https://github.com/rwth-i6/returnn/wiki/Distributed-PyTorch>`__.
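The option above can be sketched as a minimal RETURNN config fragment. This is a sketch, not a verbatim example from the docs: only ``torch_distributed = {}`` is confirmed by this diff; the ``backend`` and ``device`` option names reflect common RETURNN usage and should be checked against your RETURNN version.

```python
# Sketch of a RETURNN config fragment for distributed PyTorch training.
# ``torch_distributed = {}`` enables the default setup, which wraps the
# model in torch.nn.parallel.DistributedDataParallel (one process per GPU).
backend = "torch"       # assumed option name: select the PyTorch backend
device = "gpu"          # assumed option name
torch_distributed = {}  # empty dict = defaults (DistributedDataParallel)
```

Such a config is typically launched with one process per GPU, e.g. via ``torchrun`` or ``mpirun``; the exact launch command depends on your cluster setup (see the linked wiki page).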


==================================
Multi GPU training with TensorFlow
==================================

This section covers multi GPU training with the TensorFlow backend.

@@ -23,7 +37,6 @@ Please refer to `our wiki for an overview of distributed TensorFlow <https://git

Also see :mod:`returnn.tf.distributed`.
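As a hedged sketch of what such a setup can look like in the config (the option names below reflect the Horovod-based RETURNN setup as commonly used and are assumptions here, not taken from this diff; check :mod:`returnn.tf.distributed` and the wiki for the authoritative options):

```python
# Sketch of a RETURNN config fragment for Horovod-based multi GPU
# training with the TensorFlow backend. Option names are assumptions;
# verify them against your RETURNN version before use.
use_horovod = True            # assumed option: enable Horovod-based training
horovod_reduce_type = "grad"  # assumed option/value: reduce gradients across workers
```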


------------
Installation
------------
