FID measures the similarity between two datasets of images. It was shown to correlate well with human judgement of visual quality and is most often used to evaluate the quality of samples of Generative Adversarial Networks. FID is calculated by computing the Fréchet distance between two Gaussians fitted to feature representations of the Inception network.
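The distance described above has a closed form for two Gaussians: the squared difference of the means plus a trace term over the covariances. Below is a minimal NumPy/SciPy sketch of that computation (not the pytorch-fid implementation itself); `mu1, sigma1` and `mu2, sigma2` are the mean and covariance of the Inception features of each image set:

```python
import numpy as np
from scipy import linalg


def frechet_distance(mu1, sigma1, mu2, sigma2, eps=1e-6):
    """Fréchet distance between two multivariate Gaussians:
    ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 * sqrt(sigma1 @ sigma2)).
    """
    diff = mu1 - mu2
    # Matrix square root of the covariance product.
    covmean, _ = linalg.sqrtm(sigma1.dot(sigma2), disp=False)
    if not np.isfinite(covmean).all():
        # Add a small diagonal offset for numerical stability.
        offset = np.eye(sigma1.shape[0]) * eps
        covmean = linalg.sqrtm((sigma1 + offset).dot(sigma2 + offset))
    # Discard tiny imaginary components introduced by sqrtm.
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    return diff.dot(diff) + np.trace(sigma1) + np.trace(sigma2) - 2 * np.trace(covmean)
```

For identical Gaussians the distance is 0; in practice the means and covariances are estimated from Inception features of real and generated images.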
References
- https://github.com/mseitzer/pytorch-fid
- GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium
- Are GANs Created Equal? A Large-Scale Study
Usually, we put the downloaded inception feature statistics in `basicsr/metrics`.
⏬ Google Drive: metrics data
⏬ Baidu Netdisk (百度网盘): metrics data
| File Name | Dataset | Image Shape | Number of Samples |
| --- | --- | --- | --- |
| inception_FFHQ_256-0948f50d.pth | FFHQ | 256 x 256 | 50,000 |
| inception_FFHQ_512-f7b384ab.pth | FFHQ | 512 x 512 | 50,000 |
| inception_FFHQ_1024-75f195dc.pth | FFHQ | 1024 x 1024 | 50,000 |
| inception_FFHQ_256_stylegan2_pytorch-abba9d31.pth | FFHQ | 256 x 256 | 50,000 |
- All the FFHQ inception feature statistics are calculated on images resized to 299 x 299.
- `inception_FFHQ_256_stylegan2_pytorch-abba9d31.pth` is converted from the statistics in stylegan2-pytorch.
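Once downloaded, a statistics file can be loaded and its mean and covariance passed to the FID calculation. A minimal sketch, assuming the `.pth` file stores a dict with `mean` and `cov` keys (an assumption; check `stats.keys()` if your file differs):

```python
import torch


def load_inception_stats(path):
    """Load precomputed Inception feature statistics.

    Assumes the .pth file stores a dict with 'mean' and 'cov' keys;
    inspect stats.keys() if the layout of your file differs.
    """
    stats = torch.load(path, map_location='cpu')
    return stats['mean'], stats['cov']


# Example (hypothetical path, after placing the file in basicsr/metrics):
# mean, cov = load_inception_stats('basicsr/metrics/inception_FFHQ_256-0948f50d.pth')
```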