
# This code is the implementation of the paper "Learning Meta Model for Zero- and Few-shot Face Anti-spoofing"

The image lists of the OULU-ZF protocol are included in the protocols folder; the Cross-ZF protocol will be released soon.

Required environment: TensorFlow >= 1.4.0, OpenCV >= 3.0.0, etc.

To run this code, you should first use PRNet or other tools to generate a facial depth map as the label for each living face. The label of every spoofing face is a zero array of shape [32, 32, 1].
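As a minimal sketch (not part of this repository; the file name and helper below are illustrative assumptions), the all-zero spoof labels can be produced like this, while the living-face labels come from PRNet or a similar 3D face reconstruction tool:

```python
# Illustrative sketch: write an all-zero depth label for a spoofing face.
# The function name and output path are assumptions, not the repo's API.
import numpy as np
import cv2

def make_spoof_depth_label(save_path):
    """Save a zero depth map of shape [32, 32, 1] as a single-channel image."""
    label = np.zeros((32, 32, 1), dtype=np.uint8)
    cv2.imwrite(save_path, label)

make_spoof_depth_label('spoof_0001_depth1D.jpg')
```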

Data structure: each of the train, val, and test sets contains fine-grained living and spoofing face types. Each fine-grained type holds several facial images along with the corresponding facial-box files and facial depth-map images. The filenames of the image, the facial-box file, and the facial depth image should end with '_scene.jpg', '_scene.dat', and '_depth1D.jpg', respectively, as shown in the sketch below.
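A minimal loader sketch following the naming convention above (the directory layout and helper name are assumptions, not the repository's API):

```python
# Illustrative sketch: pair each scene image with its facial-box file and
# depth label according to the suffix convention described above.
import os
import glob

def collect_samples(type_dir):
    """Return (scene, box, depth) path triples for one fine-grained face type."""
    samples = []
    for scene in sorted(glob.glob(os.path.join(type_dir, '*_scene.jpg'))):
        stem = scene[:-len('_scene.jpg')]
        box = stem + '_scene.dat'      # facial bounding-box file
        depth = stem + '_depth1D.jpg'  # depth label (all zeros for spoof faces)
        if os.path.exists(box) and os.path.exists(depth):
            samples.append((scene, box, depth))
    return samples
```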

```bibtex
@article{qin2020learning,
  title={Learning meta model for zero-and few-shot face antispoofing},
  author={Qin, Yunxiao and Zhao, Chenxu and Zhu, Xiangyu and Wang, Zezheng and Yu, Zitong and Fu, Tianyu and Zhou, Feng and Shi, Jingping and Lei, Zhen},
  journal={Association for Advancement of Artificial Intelligence (AAAI)},
  year={2020}
}
```
