- Prepare the training dataset: FFHQ. More details are in DatasetPreparation_CN.md
    - Download the FFHQ dataset. We recommend downloading the tfrecords files from NVlabs/ffhq-dataset.
    - Extract the tfrecords to images or an LMDB (TensorFlow is required to read tfrecords). A minimal reading sketch is shown below.

        `python scripts/data_preparation/extract_images_from_tfrecords.py`
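    For orientation, the following is a minimal sketch of how such tfrecords can be read. It assumes TensorFlow 2 in eager mode and the `shape`/`data` field names used by NVlabs' dataset tool (both are assumptions); the repo script `scripts/data_preparation/extract_images_from_tfrecords.py` is the reference implementation.

    ```python
    # Minimal tfrecords reading sketch. The 'shape' and 'data' field names are
    # assumed to follow NVlabs' dataset_tool format (int64 CHW shape + raw uint8 bytes).
    import numpy as np
    import tensorflow as tf

    def iter_images(tfrecord_path):
        for raw_record in tf.data.TFRecordDataset(tfrecord_path):
            example = tf.train.Example()
            example.ParseFromString(raw_record.numpy())
            shape = list(example.features.feature['shape'].int64_list.value)  # assumed field
            data = example.features.feature['data'].bytes_list.value[0]       # assumed field
            img = np.frombuffer(data, dtype=np.uint8).reshape(shape)          # CHW
            yield img.transpose(1, 2, 0)                                      # HWC for saving
    ```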
- Modify the config file `options/train/StyleGAN/train_StyleGAN2_256_Cmul2_FFHQ.yml` accordingly.
- Train with distributed training. More training commands are in TrainTest_CN.md.

    `python -m torch.distributed.launch --nproc_per_node=8 --master_port=4321 basicsr/train.py -opt options/train/StyleGAN/train_StyleGAN2_256_Cmul2_FFHQ.yml --launcher pytorch`
- Download pre-trained models from ModelZoo (Google Drive, 百度网盘) to the `experiments/pretrained_models` folder.
- Test. A minimal sampling sketch is shown after this list.

    `python inference/inference_stylegan2.py`
- The results are in the `samples` folder.
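For reference, the snippet below is a minimal sampling sketch of roughly what such an inference does. It assumes BasicSR's `StyleGAN2Generator` in `basicsr.archs.stylegan2_arch` with a rosinality-style forward interface, a checkpoint stored under a `params_ema` key, and a placeholder checkpoint filename; `inference/inference_stylegan2.py` is the reference implementation and its exact arguments may differ.

```python
# Minimal StyleGAN2 sampling sketch. Assumptions: the constructor arguments, the
# forward interface, and the 'params_ema' checkpoint key follow BasicSR conventions;
# the checkpoint path below is a placeholder.
import torch
from basicsr.archs.stylegan2_arch import StyleGAN2Generator

ckpt_path = 'experiments/pretrained_models/stylegan2_ffhq_256.pth'  # placeholder path
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

net = StyleGAN2Generator(out_size=256, channel_multiplier=2).to(device)  # assumed arguments
state = torch.load(ckpt_path, map_location=device)
net.load_state_dict(state.get('params_ema', state))  # fall back to a raw state dict
net.eval()

with torch.no_grad():
    z = torch.randn(4, 512, device=device)  # four random latent codes
    imgs, _ = net([z])                      # images in [-1, 1], NCHW
```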
- Install dlib, because DFDNet uses dlib for face detection and landmark detection. Installation reference (a short usage sketch follows these steps):
    - Clone the dlib repo: `git clone [email protected]:davisking/dlib.git`, then `cd dlib`
    - Install: `python setup.py install`
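    The snippet below is a small illustrative sketch of the face detection and 68-point landmark prediction that dlib provides (it is not the DFDNet pipeline); the image path and the predictor model filename are placeholders.

    ```python
    # Illustrative dlib sketch: detect faces, then predict 68 landmarks per face.
    # The image path and predictor filename are placeholders, not files shipped by BasicSR.
    import dlib

    detector = dlib.get_frontal_face_detector()
    predictor = dlib.shape_predictor('shape_predictor_68_face_landmarks.dat')  # placeholder

    img = dlib.load_rgb_image('face.jpg')  # placeholder image
    for rect in detector(img, 1):          # 1 = upsample once before detection
        shape = predictor(img, rect)
        landmarks = [(shape.part(i).x, shape.part(i).y) for i in range(shape.num_parts)]
        print(f'face at {rect}: {len(landmarks)} landmarks')
    ```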
- Download the pre-trained dlib models from ModelZoo (Google Drive, 百度网盘) to the `experiments/pretrained_models/dlib` folder. You can download them with the following command, or download them manually.

    `python scripts/download_pretrained_models.py dlib`
- Download the DFDNet model, dictionary, and face landmark template from ModelZoo (Google Drive, 百度网盘) to the `experiments/pretrained_models/DFDNet` folder. You can download them with the following command, or download them manually.

    `python scripts/download_pretrained_models.py DFDNet`
- Prepare the testing images in the `datasets` folder. For example, put the testing images in the `datasets/TestWhole` folder.
- Test.

    `python inference/inference_dfdnet.py --upscale_factor=2 --test_path datasets/TestWhole`
- The results are in the `results/DFDNet` folder.
We take the classical SR X4 task with DIV2K as an example.
- Prepare the training dataset: DIV2K. More details are in DatasetPreparation.md.
- Prepare the validation dataset: Set5. You can download it following this guidance.
- Modify the config file `options/train/SwinIR/train_SwinIR_SRx4_scratch.yml` accordingly.
- Train with distributed training. More training commands are in TrainTest.md.

    `python -m torch.distributed.launch --nproc_per_node=8 --master_port=4331 basicsr/train.py -opt options/train/SwinIR/train_SwinIR_SRx4_scratch.yml --launcher pytorch --auto_resume`
Note that:

- Different from the original setting in the paper, where the X4 model is finetuned from the X2 model, we train it directly from scratch.
- We also use EMA (Exponential Moving Average). Note that all model training in BasicSR supports EMA. A minimal EMA update sketch is shown after the table below.
- At 250K iterations of training the X4 model, it achieves performance comparable to the official model.
| ClassicalSR DIV2KX4 | PSNR (RGB) | PSNR (Y) | SSIM (RGB) | SSIM (Y) |
| --- | --- | --- | --- | --- |
| Official | 30.803 | 32.728 | 0.8738 | 0.9028 |
| Reproduce | 30.832 | 32.756 | 0.8739 | 0.9025 |
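As background, EMA keeps a shadow copy of the model weights that is updated as a running average after every optimizer step, and the shadow weights are used for evaluation. The sketch below only illustrates the update rule; the decay value is a typical choice, not necessarily the one in the SwinIR config, and BasicSR ships its own EMA handling.

```python
# Minimal EMA sketch: shadow <- decay * shadow + (1 - decay) * current,
# applied after each optimizer step. Illustration only; decay=0.999 is a typical value.
import copy
import torch

def update_ema(ema_model: torch.nn.Module, model: torch.nn.Module, decay: float = 0.999) -> None:
    """Update the shadow (EMA) parameters in place."""
    ema_params = dict(ema_model.named_parameters())
    for name, param in model.named_parameters():
        ema_params[name].data.mul_(decay).add_(param.data, alpha=1 - decay)

# Usage: make the shadow copy once, then call update_ema() after every optimizer step.
model = torch.nn.Linear(8, 8)
ema_model = copy.deepcopy(model).requires_grad_(False)
update_ema(ema_model, model)
```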
- Download pre-trained models from the official SwinIR repo to the `experiments/pretrained_models/SwinIR` folder.
- Inference.

    `python inference/inference_swinir.py --input datasets/Set5/LRbicx4 --patch_size 48 --model_path experiments/pretrained_models/SwinIR/001_classicalSR_DIV2K_s48w8_SwinIR-M_x4.pth --output results/SwinIR_SRX4_DIV2K/Set5`
- The results are in the `results/SwinIR_SRX4_DIV2K/Set5` folder.
- You may want to calculate the PSNR/SSIM values.

    `python scripts/metrics/calculate_psnr_ssim.py --gt datasets/Set5/GTmod12/ --restored results/SwinIR_SRX4_DIV2K/Set5 --crop_border 4`

  or test on the Y channel with the `--test_y_channel` argument.

    `python scripts/metrics/calculate_psnr_ssim.py --gt datasets/Set5/GTmod12/ --restored results/SwinIR_SRX4_DIV2K/Set5 --crop_border 4 --test_y_channel`
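For reference, PSNR on 8-bit images is 10 * log10(255^2 / MSE), and the `--test_y_channel` variant first converts RGB to the luma (Y) channel before computing the metric. Below is a minimal sketch, assuming uint8 RGB inputs of equal size, the ITU-R BT.601 conversion that is common in SR evaluation code, and no border cropping; `scripts/metrics/calculate_psnr_ssim.py` is the reference implementation.

```python
# Minimal PSNR sketch on uint8 RGB images (no border cropping). The Y-channel
# conversion uses ITU-R BT.601 coefficients, as is common in SR evaluation code.
import numpy as np

def rgb_to_y(img: np.ndarray) -> np.ndarray:
    """Convert an HxWx3 uint8 RGB image to the luma (Y) channel in [16, 235]."""
    img = img.astype(np.float64) / 255.0
    return 16.0 + 65.481 * img[..., 0] + 128.553 * img[..., 1] + 24.966 * img[..., 2]

def psnr(img1: np.ndarray, img2: np.ndarray, test_y_channel: bool = False) -> float:
    """PSNR in dB between two images of the same shape."""
    if test_y_channel:
        img1, img2 = rgb_to_y(img1), rgb_to_y(img2)
    mse = np.mean((img1.astype(np.float64) - img2.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)
```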