Commit f8a3d20 — Update readme
garg-aayush authored May 11, 2022 · 1 parent 4b71614
Showing 1 changed file (README.md) with 21 additions and 2 deletions.
This repo provides different PyTorch implementations for training a deep learning model.
## Single-GPU implementation
This is a vanilla [PyTorch](https://pytorch.org/) implementation that can run on either a CPU or a single GPU. The code uses its own simple functions to log metrics, print information at run time, and save the model at the end of the run. The [Argparse](https://docs.python.org/3/library/argparse.html) module is used to parse command-line arguments.
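As a sketch of what such a parser might look like — the flags mirror the help text below, but the default values are illustrative assumptions, not the repo's actual values:

```python
import argparse

def build_parser():
    # Parser implied by the help text below; defaults here are assumptions.
    parser = argparse.ArgumentParser()
    parser.add_argument("--run_name", required=True)
    parser.add_argument("--random_seed", type=int, default=42)
    parser.add_argument("-et", "--epochs_per_test", type=int, default=1,
                        help="Number of epochs per test/val")
    parser.add_argument("-ep", "--epochs", type=int, default=10)
    parser.add_argument("-bs", "--batch_size", type=int, default=128)
    parser.add_argument("-w", "--num_workers", type=int, default=4)
    parser.add_argument("--learning_rate", type=float, default=0.1)
    parser.add_argument("--weight_decay", type=float, default=5e-4)
    parser.add_argument("--momentum", type=float, default=0.9,
                        help="Momentum value in SGD.")
    parser.add_argument("--gamma", type=float, default=0.1,
                        help="gamma value for MultiStepLR.")
    return parser

args = build_parser().parse_args(["--run_name", "test_single", "-bs", "64"])
print(args.run_name, args.batch_size)
```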

<details>
<summary><b>Arguments that can be passed through commandline</b></summary>

> Use `python <python_file> -h` to see the available parser arguments for any script.
```
usage: train_simple.py [-h] --run_name RUN_NAME [--random_seed RANDOM_SEED]
[-et EPOCHS_PER_TEST] [-ep EPOCHS] [-bs BATCH_SIZE]
[-w NUM_WORKERS] [--learning_rate LEARNING_RATE]
[--weight_decay WEIGHT_DECAY] [--momentum MOMENTUM]
[--gamma GAMMA]
required arguments:
--run_name RUN_NAME
optional arguments:
-h, --help show this help message and exit
--run_name RUN_NAME
--random_seed RANDOM_SEED
-et EPOCHS_PER_TEST, --epochs_per_test EPOCHS_PER_TEST
Number of epochs per test/val
-ep EPOCHS, --epochs EPOCHS
  -bs BATCH_SIZE, --batch_size BATCH_SIZE
  -w NUM_WORKERS, --num_workers NUM_WORKERS
  --learning_rate LEARNING_RATE
  --weight_decay WEIGHT_DECAY
--momentum MOMENTUM Momentum value in SGD.
--gamma GAMMA gamma value for MultiStepLR.
```
</details>

<details>
<summary><b>Running the script</b></summary>

```
# Start training with default parameters:
python train_simple.py --run_name=test_single

# Train with a different batch size and number of epochs:
python train_simple.py -bs=64 -ep=2 --run_name=test_single
# You can also set the parameters in the run_simple.sh file and start the training as follows:
source run_simple.sh
```

</details>

NOTE: Remember to set the data folder path (`DATASET_PATH`) and the model checkpoint path (`CHECKPOINT_PATH`) in `train_simple.py`.
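A minimal sketch of what those two module-level constants might look like — the names match the note above, but the values and exact placement in `train_simple.py` are assumptions:

```python
from pathlib import Path

# Hypothetical locations; point these at your own data and checkpoint folders.
DATASET_PATH = Path("data")
CHECKPOINT_PATH = Path("checkpoints")

# Create the checkpoint folder up front so saving the model never fails.
CHECKPOINT_PATH.mkdir(parents=True, exist_ok=True)
```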


## Multi-GPU implementation
This implementation extends the single-GPU version to multiple GPUs using PyTorch's distributed training utilities. Training is launched via the `torch.distributed.launch` module, which spawns one process per GPU (`--nproc_per_node`). As before, the [Argparse](https://docs.python.org/3/library/argparse.html) module is used to parse command-line arguments.
```
# Training with default parameters and 2 GPU:
python -m torch.distributed.launch --nproc_per_node=2 --master_port=9995 train_multi.py --run_name=test_multi
```
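Under the hood, each launched process trains on a disjoint shard of the dataset. A minimal sketch of the round-robin index split that `torch.utils.data.DistributedSampler` performs (plain Python for illustration, not the repo's code):

```python
def shard_indices(num_samples, rank, world_size):
    # Each process (one per GPU) takes every world_size-th index,
    # starting at its own rank, so shards are disjoint and cover the dataset.
    return list(range(rank, num_samples, world_size))

# With --nproc_per_node=2, two processes split an 8-sample dataset:
print(shard_indices(8, rank=0, world_size=2))  # [0, 2, 4, 6]
print(shard_indices(8, rank=1, world_size=2))  # [1, 3, 5, 7]
```

Because each rank sees a different shard, gradients averaged across processes are equivalent to training on the full dataset with a larger effective batch size.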

## Feedback
To give feedback, ask a question, or report environment setup issues, use [GitHub Discussions](https://github.com/garg-aayush/pytorch-pl-hydra-templates/discussions).
