- Prepare the training dataset: FFHQ. More details are in DatasetPreparation.md.
  - Download the FFHQ dataset. We recommend downloading the tfrecords files from NVlabs/ffhq-dataset.
  - Extract the tfrecords to images or LMDBs (TensorFlow is required to read tfrecords):

    ```shell
    python scripts/data_preparation/extract_images_from_tfrecords.py
    ```
- Modify the config file in `options/train/StyleGAN/train_StyleGAN2_256_Cmul2_FFHQ.yml`.
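  The exact fields to change depend on your setup. As a rough sketch, a BasicSR-style YAML config typically exposes the dataset path and GPU count along these lines (key names and values here are illustrative, not copied from the actual file; edit the fields that exist in your copy):

  ```yaml
  # Illustrative excerpt only -- check the real config for the exact keys.
  name: train_StyleGAN2_256_Cmul2_FFHQ
  num_gpu: 8

  datasets:
    train:
      name: FFHQ
      # Point this at the images/LMDB extracted in the previous step.
      dataroot_gt: datasets/ffhq/ffhq_256.lmdb
  ```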
- Train with distributed training. More training commands are in TrainTest.md.

  ```shell
  python -m torch.distributed.launch --nproc_per_node=8 --master_port=4321 basicsr/train.py -opt options/train/StyleGAN/train_StyleGAN2_256_Cmul2_FFHQ_800k.yml --launcher pytorch
  ```
- Download the pre-trained models from ModelZoo (Google Drive, Baidu Netdisk) to the `experiments/pretrained_models` folder.
- Test:

  ```shell
  python inference/inference_stylegan2.py
  ```

- The results are in the `samples` folder.
- Install dlib, because DFDNet uses dlib for face recognition and landmark detection. See the installation reference.
  - Clone the dlib repo:

    ```shell
    git clone [email protected]:davisking/dlib.git
    cd dlib
    ```

  - Install:

    ```shell
    python setup.py install
    ```
- Download the dlib pretrained models from ModelZoo (Google Drive, Baidu Netdisk) to the `experiments/pretrained_models/dlib` folder.
  You can download them by running the following command, or download the pretrained models manually:

  ```shell
  python scripts/download_pretrained_models.py dlib
  ```
- Download the pretrained DFDNet models, dictionary, and face template from ModelZoo (Google Drive, Baidu Netdisk) to the `experiments/pretrained_models/DFDNet` folder.
  You can download them by running the following command, or download the pretrained models manually:

  ```shell
  python scripts/download_pretrained_models.py DFDNet
  ```
- Prepare the testing dataset in the `datasets` folder. For example, put the images in the `datasets/TestWhole` folder.
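  The layout above can be created with a couple of shell commands (a minimal sketch; `my_faces` is a hypothetical source directory, not part of the repo):

  ```shell
  # Create the expected test folder; the name TestWhole matches the
  # --test_path passed to the inference command below.
  mkdir -p datasets/TestWhole
  # Then copy your own images into it, e.g.:
  # cp my_faces/*.png datasets/TestWhole/
  ```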
- Test:

  ```shell
  python inference/inference_dfdnet.py --upscale_factor=2 --test_path datasets/TestWhole
  ```

- The results are in the `results/DFDNet` folder.