Add warning print for parallel training
kilianFatras committed Dec 13, 2023
1 parent 2ef0106 commit 078361c
Showing 1 changed file with 3 additions and 0 deletions.
examples/cifar10/train_cifar10.py (3 additions, 0 deletions)
@@ -105,6 +105,9 @@ def train(argv):
     optim = torch.optim.Adam(net_model.parameters(), lr=FLAGS.lr)
     sched = torch.optim.lr_scheduler.LambdaLR(optim, lr_lambda=warmup_lr)
     if FLAGS.parallel:
+        print(
+            "Warning: parallel training performs slightly worse than single-GPU training due to statistics computation in DataParallel. We recommend training on a single GPU, which requires around 8 GB of GPU memory."
+        )
         net_model = torch.nn.DataParallel(net_model)
         ema_model = torch.nn.DataParallel(ema_model)
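
The warning refers to torch.nn.DataParallel splitting each batch across GPUs, so any batch-dependent statistics are computed per replica on smaller sub-batches. For readers who still want multi-GPU training, here is a minimal sketch of the commonly suggested alternative, DistributedDataParallel with synchronized batch statistics. It is not part of this commit: it assumes the script is launched with torchrun (which sets LOCAL_RANK), it only matters if the model contains BatchNorm layers, and the helper name wrap_ddp is hypothetical.

import os

import torch
import torch.distributed as dist


def wrap_ddp(net_model: torch.nn.Module) -> torch.nn.Module:
    # Hypothetical helper (not in the repository): wrap a model with
    # DistributedDataParallel instead of DataParallel.
    local_rank = int(os.environ["LOCAL_RANK"])  # set by torchrun (assumption)
    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(local_rank)

    # If the model contains BatchNorm layers, compute their statistics over the
    # global batch instead of each GPU's sub-batch.
    net_model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(net_model)
    net_model = net_model.to(local_rank)
    return torch.nn.parallel.DistributedDataParallel(net_model, device_ids=[local_rank])

Unlike DataParallel, DistributedDataParallel runs one process per GPU and all-reduces gradients, so it also avoids gathering outputs on a single device.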
