# ArcFace

> ArcFace: Additive Angular Margin Loss for Deep Face Recognition

## Abstract

Recently, a popular line of research in face recognition is adopting margins in the well-established softmax loss function to maximize class separability. In this paper, we first introduce an Additive Angular Margin Loss (ArcFace), which not only has a clear geometric interpretation but also significantly enhances discriminative power. Since ArcFace is susceptible to massive label noise, we further propose sub-center ArcFace, in which each class contains K sub-centers and training samples only need to be close to any one of the K positive sub-centers. Sub-center ArcFace encourages one dominant sub-class that contains the majority of clean faces and non-dominant sub-classes that include hard or noisy faces. Based on this self-propelled isolation, we boost performance by automatically purifying raw web faces under massive real-world noise. Besides discriminative feature embedding, we also explore the inverse problem: mapping feature vectors to face images. Without training any additional generator or discriminator, the pre-trained ArcFace model can generate identity-preserving face images for subjects both inside and outside the training data, using only the network gradient and Batch Normalization (BN) priors. Extensive experiments demonstrate that ArcFace can enhance discriminative feature embedding as well as strengthen generative face synthesis.
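
The core idea above can be summarised in a few lines: on L2-normalised features and class centres, the ground-truth logit is computed as cos(θ + m) instead of cos θ, and the sub-center variant simply gives each class K centres and keeps the closest one. The snippet below is a minimal, self-contained sketch of that idea for illustration only; it is not the mmpretrain implementation, and the class name `ArcFaceHead`, the defaults `s=64.0` and `m=0.5`, and the toy shapes are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ArcFaceHead(nn.Module):
    """Minimal additive angular margin head (illustrative sketch only)."""

    def __init__(self, in_features, num_classes, s=64.0, m=0.5, k=1):
        super().__init__()
        self.s, self.m, self.k = s, m, k
        # One weight row per (class, sub-center) pair; k=1 is plain ArcFace.
        self.weight = nn.Parameter(torch.empty(num_classes * k, in_features))
        nn.init.xavier_uniform_(self.weight)

    def forward(self, feats, labels):
        # Cosine similarity between L2-normalised embeddings and class centres.
        cosine = F.linear(F.normalize(feats), F.normalize(self.weight))
        if self.k > 1:
            # Sub-center ArcFace: a sample only needs to be close to the
            # nearest of the k centres of its class.
            cosine = cosine.view(cosine.size(0), -1, self.k).amax(dim=2)
        theta = torch.acos(cosine.clamp(-1.0 + 1e-7, 1.0 - 1e-7))
        one_hot = F.one_hot(labels, num_classes=cosine.size(1)).bool()
        # Add the angular margin m only on the ground-truth class, then rescale.
        logits = torch.where(one_hot, torch.cos(theta + self.m), cosine)
        return F.cross_entropy(self.s * logits, labels)


# Toy usage: 8 embeddings of dimension 512, 10 identities, 3 sub-centres each.
head = ArcFaceHead(in_features=512, num_classes=10, k=3)
loss = head(torch.randn(8, 512), torch.randint(0, 10, (8,)))
print(loss.item())
```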

## How to use it?

**Retrieve image**

```python
from mmpretrain import ImageRetrievalInferencer

# Build the inferencer from the ArcFace retrieval model; the images under
# `demo/` serve as the gallery (prototype) to retrieve from.
inferencer = ImageRetrievalInferencer('resnet50-arcface_inshop', prototype='demo/')
# Retrieve the two gallery images most similar to the query image.
predict = inferencer('demo/dog.jpg', topk=2)[0]
print(predict[0])
print(predict[1])
```
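
Each element of `predict` describes one retrieved gallery image together with its similarity score; the exact dictionary keys depend on the installed mmpretrain version.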

**Use the model**

```python
import torch
from mmpretrain import get_model

# Load the pre-trained ArcFace retrieval model.
model = get_model('resnet50-arcface_inshop', pretrained=True)
inputs = torch.rand(1, 3, 224, 224)
out = model(inputs)
print(type(out))

# To extract backbone features instead of running the full forward pass.
feats = model.extract_feat(inputs)
print(type(feats))
```
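
`extract_feat` returns the backbone features computed before the ArcFace head (typically a tuple of tensors, one per output stage), whereas calling the model directly also runs the head.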

**Train/Test Command**

Prepare your dataset according to the docs.

Train:

```shell
python tools/train.py configs/arcface/resnet50-arcface_8xb32_inshop.py
```

Test:

```shell
python tools/test.py configs/arcface/resnet50-arcface_8xb32_inshop.py https://download.openmmlab.com/mmclassification/v0/arcface/resnet50-arcface_inshop_20230202-b766fe7f.pth
```
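
Note that the `8xb32` part of the config name follows the OpenMMLab naming convention: the recipe assumes 8 GPUs with a batch size of 32 per GPU.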

## Models and results

### Image Retrieval on InShop

| Model                     | Pretrain          | Params (M) | Flops (G) | Recall@1 | mAP@10 | Config | Download     |
| :------------------------ | :---------------- | ---------: | --------: | -------: | -----: | :----- | :----------- |
| `resnet50-arcface_inshop` | ImageNet-21k-mill |      31.69 |     16.48 |    90.18 |  69.30 | config | model \| log |

## Citation

```bibtex
@inproceedings{deng2018arcface,
  title={ArcFace: Additive Angular Margin Loss for Deep Face Recognition},
  author={Deng, Jiankang and Guo, Jia and Niannan, Xue and Zafeiriou, Stefanos},
  booktitle={CVPR},
  year={2019}
}
```