- Download the training data from google drive (link and link) or the SNU CVLab server (link and link).
- Organize it as
  `./datasets/REDS/train/train_blur_jpeg` and `./datasets/REDS/train/train_sharp`.
- Run

  ```
  python scripts/data_preparation/reds.py
  ```

  to convert the data into LMDB format.
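Before running the conversion script, it can help to confirm the folders sit where the layout above expects them. A minimal sketch, assuming the dataset root shown above; the `check_reds_layout` helper is illustrative and not part of the repo:

```python
from pathlib import Path

# Expected sub-folders under the dataset root (taken from the layout above).
EXPECTED = [
    "train/train_blur_jpeg",
    "train/train_sharp",
]

def check_reds_layout(root):
    """Return the expected sub-paths that are missing under `root`."""
    root = Path(root)
    return [p for p in EXPECTED if not (root / p).is_dir()]

missing = check_reds_layout("./datasets/REDS")
if missing:
    print("Missing folders:", ", ".join(missing))
else:
    print("Layout looks good; run scripts/data_preparation/reds.py next.")
```

If any folder is reported missing, re-check the extraction paths before converting to LMDB.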
- Download the validation data from google drive or Baidu Netdisk.
- Organize it as
  `./datasets/REDS/val/blur_300.lmdb` and `./datasets/REDS/val/sharp_300.lmdb`.
- NAFNet-REDS-width64:

  ```
  python -m torch.distributed.launch --nproc_per_node=8 --master_port=4321 basicsr/train.py -opt options/train/REDS/NAFNet-width64.yml --launcher pytorch
  ```

- Training uses 8 GPUs by default. Set `--nproc_per_node` to the number of available GPUs for distributed training.
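Only `--nproc_per_node` changes between the 8-GPU default and a smaller machine. A small sketch that builds the launch line for a given GPU count; the `launch_cmd` helper is hypothetical and just mirrors the command above:

```python
def launch_cmd(script, opt, ngpus, port=4321):
    """Build a torch.distributed.launch command like the one above."""
    return (
        f"python -m torch.distributed.launch --nproc_per_node={ngpus} "
        f"--master_port={port} {script} -opt {opt} --launcher pytorch"
    )

# 8-GPU training (the repo default) vs. a 2-GPU machine.
print(launch_cmd("basicsr/train.py", "options/train/REDS/NAFNet-width64.yml", 8))
print(launch_cmd("basicsr/train.py", "options/train/REDS/NAFNet-width64.yml", 2))
```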
- Download the pretrained model NAFNet-REDS-width64 from google drive or Baidu Netdisk.
- NAFNet-REDS-width64:

  ```
  python -m torch.distributed.launch --nproc_per_node=1 --master_port=4321 basicsr/test.py -opt ./options/test/REDS/NAFNet-width64.yml --launcher pytorch
  ```

- Testing uses a single GPU by default. Set `--nproc_per_node` to the number of available GPUs for distributed validation.