Merge pull request #23 from chairc/dev
Update: Use the utils.initializer encapsulation method in distributed training.
chairc authored Dec 18, 2023
2 parents d688883 + bfbf9fd commit 633e325
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion tools/train.py
@@ -78,7 +78,7 @@ def train(rank=None, args=None):
         dist.init_process_group(backend="nccl" if torch.cuda.is_available() else "gloo", rank=rank,
                                 world_size=world_size)
         # Set device ID
-        device = torch.device("cuda", rank)
+        device = device_initializer(device_id=rank)
         # There may be random errors, using this function can reduce random errors in cudnn
         # torch.backends.cudnn.deterministic = True
         # Synchronization during distributed training
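For context, here is a minimal sketch of what a device_initializer helper consistent with this call site could look like. Only the function name and the device_id parameter come from the diff above; the body is an assumption, not the repository's actual utils/initializer implementation.

# Hypothetical sketch of device_initializer, inferred from the call site
# above; the real implementation in utils/initializer may differ.
import logging

import torch

logger = logging.getLogger(__name__)

def device_initializer(device_id=None):
    # Resolve the training device, falling back to CPU when CUDA is absent.
    if torch.cuda.is_available():
        # In distributed training, each process binds to the GPU matching its rank.
        device = torch.device("cuda", device_id if device_id is not None else 0)
        torch.cuda.set_device(device)
    else:
        logger.warning("CUDA is unavailable; falling back to CPU.")
        device = torch.device("cpu")
    return device

Centralizing device selection like this keeps the CPU fallback and any per-device setup in one place, instead of repeating a bare torch.device("cuda", rank) at every call site.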
