Deep learning-based models have been shown to improve the accuracy of fingerprint recognition. While these algorithms show exceptional performance, they require large-scale fingerprint datasets for training and evaluation. In this work, we propose a novel fingerprint synthesis and reconstruction framework based on the StyleGAN2 architecture to address the privacy issues related to the acquisition of such large-scale datasets. We also derive a computational approach to modify the attributes of the generated fingerprints while preserving their identity. This allows synthesizing multiple different fingerprint images per finger. In particular, we introduce the SynFing synthetic fingerprint dataset consisting of 100K image pairs, each pair corresponding to the same identity. The proposed framework was experimentally shown to outperform contemporary state-of-the-art approaches for both fingerprint synthesis and reconstruction. It significantly improved the realism of the generated fingerprints, both visually and in terms of their ability to spoof fingerprint-based verification systems.
Official implementation of the paper "Synthesis and Reconstruction of Fingerprints using Generative Adversarial Networks". This repository contains both training and inference scripts.
2022.02.08: Initial code release.
Here, we use our framework to generate realistic fingerprint images with our StyleGAN-based generator trained on the NIST SD14 dataset.
In this application, we reconstruct a fingerprint image from its minutiae information.
Here, we use our fingerprint attribute editor, based on the SeFa [2] algorithm, to edit the visual attributes of a generated fingerprint.
- Linux
- NVIDIA GPU + CUDA CuDNN (CPU may be possible with some modifications, but is not inherently supported)
- Python 3
- torch == 1.6
- Clone this repo:
```
git clone https://github.com/rafaelbou/fingerprint-generator.git
cd fingerprint-generator
```
- Dependencies:
We recommend running this repository using Anaconda.
All dependencies for defining the environment are provided in `environment/environment.yaml`.
Please download the pre-trained models from the following links.
| Path | Description |
| --- | --- |
| fingerprint_synthesis | Model trained with the NIST SD14 dataset for fingerprint synthesis. |
| fingerprint_reconstruction | Model trained with the NIST SD14 dataset for fingerprint reconstruction from a minutiae set. |
If you wish to use one of the pretrained models for training or inference, you may do so using the flag `--checkpoint_path`.
The SynFing dataset consists of 100K pairs of synthetic rolled fingerprints created using the proposed fingerprint generator and attributes modifier. Each pair of impressions shares the same synthetic identity but differs in visual attributes, such as scribbles and dry-skin artifacts.
The SynFing dataset is available at:
In order to train the Minutiae-To-Vec encoder, you need to convert a minutiae set into a minutiae map.
The minutiae set should be a text file in which each row represents a minutia point in the format:
minutia-type x y orientation
- minutia-type: 1=Bifurcation, 2=Termination, 4=Loop, 5=Delta
- x, y: The coordinates of the point in pixels.
- orientation: The orientation of the point in degrees.
To create the minutiae maps, run the script in `./utils/preprocessing_utils.py`.
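As an illustration, the conversion can be sketched as follows. This is a minimal, hypothetical version: the helper names and the exact 3-channel encoding are assumptions (the `--input_nc=3` training flag suggests a 3-channel map), and the repository's actual conversion lives in `./utils/preprocessing_utils.py`.

```python
import numpy as np

# Type ids used in the minutiae text files.
MINUTIA_TYPES = {1: "bifurcation", 2: "termination", 4: "loop", 5: "delta"}

def parse_minutiae_file(lines):
    """Parse rows of 'minutia-type x y orientation' into tuples."""
    minutiae = []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        m_type, x, y, orientation = line.split()
        minutiae.append((int(m_type), int(x), int(y), float(orientation)))
    return minutiae

def minutiae_to_map(minutiae, height, width):
    """Rasterize minutiae into a 3-channel map: channels 0/1 hold the
    cos/sin of the orientation, channel 2 the minutia type id."""
    mnt_map = np.zeros((height, width, 3), dtype=np.float32)
    for m_type, x, y, orientation in minutiae:
        theta = np.deg2rad(orientation)
        mnt_map[y, x, 0] = np.cos(theta)
        mnt_map[y, x, 1] = np.sin(theta)
        mnt_map[y, x, 2] = m_type
    return mnt_map

# Two example rows: a termination and a bifurcation.
lines = ["2 120 88 45.0", "1 64 200 270.0"]
mnts = parse_minutiae_file(lines)
mnt_map = minutiae_to_map(mnts, height=256, width=256)
```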
- Currently, we provide support for Synthesis and Reconstruction datasets and experiments.
- Refer to `configs/paths_config.py` to define the necessary data paths and model paths for training and evaluation.
- Refer to `configs/transforms_config.py` for the transforms defined for each dataset/experiment.
- Finally, refer to `configs/data_configs.py` for the source/target data paths for the train and test sets as well as the transforms.
- If you wish to experiment with your own dataset, you can simply make the necessary adjustments in `data_configs.py` to define your data paths and in `transforms_configs.py` to define your own data transforms.
The main training scripts can be found in `scripts/train_generator.py` and `scripts/train_mnt_encoder.py` for the synthesis and reconstruction tasks, respectively.
Intermediate training results are saved to `opts.exp_dir`. This includes checkpoints, train outputs, and test outputs.
Additionally, if you have tensorboard installed, you can visualize tensorboard logs in `opts.exp_dir/logs`.
```
python scripts/train_generator.py \
--exp_dir=<OUTPUT FOLDER PATH> \
--generator_image_size=256 \
--batch_size=4 \
--is_gray \
--augment \
--image_interval=2500 \
--save_interval=5000
```
```
python scripts/train_mnt_encoder.py \
--exp_dir=<OUTPUT FOLDER PATH> \
--dataset_type=nist_sd14_mnt \
--stylegan_weights=<PATH TO PRETRAINED STYLEGAN2 MODEL> \
--generator_image_size=256 \
--style_count=14 \
--label_nc=1 \
--input_nc=3 \
--lpips_lambda=0.8 \
--l2_lambda=1 \
--fingernet_lambda=1 \
--workers=6 \
--batch_size=6 \
--test_batch_size=6 \
--test_workers=6 \
--val_interval=2500 \
--save_interval=5000
```
- See `options/train_options.py` for all training-specific flags.
- If you wish to resume from a specific checkpoint, you may do so using `--checkpoint_path`.
The main inference scripts can be found in `scripts/inference_generator.py` and `scripts/inference_mnt_encoder.py` for the synthesis and reconstruction tasks, respectively.
```
python scripts/inference_generator.py \
--exp_dir=<OUTPUT FOLDER PATH> \
--checkpoint_path=<PATH TO PRETRAINED STYLEGAN2 MODEL> \
--is_gray \
--n_image=20
```
```
python scripts/inference_mnt_encoder.py \
--exp_dir=<OUTPUT FOLDER PATH> \
--checkpoint_path=<PATH TO PRETRAINED SYNFING MODEL (Mnt-To-Vec Encoder + Fingerprint-Generator)> \
--data_path=<PATH TO MNT MAPS FOLDER> \
--resize_output
```
- See `options/test_options.py` for all test-specific flags.
- During inference, the options used during training are loaded from the saved checkpoint and are then updated using the test options passed to the inference script. For example, there is no need to pass `--dataset_type` or `--label_nc` to the inference script, as they are taken from the loaded `opts`.
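Conceptually, this option merging works like a dictionary update, with test-time flags overriding the saved training-time values. A minimal sketch (the keys and values here are illustrative, not the actual checkpoint contents):

```python
# Options saved in a checkpoint at training time (illustrative values).
ckpt = {"opts": {"dataset_type": "nist_sd14_mnt", "label_nc": 1, "batch_size": 6}}

# Flags passed to the inference script.
test_opts = {"batch_size": 2, "resize_output": True}

# Start from the training options, then let test options override them.
opts = dict(ckpt["opts"])
opts.update(test_opts)
```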
The main scripts can be found in `fingerprint_attribute_editor/closed_form_factorization.py` and `fingerprint_attribute_editor/attribute_editor.py` for calculating and applying the latent semantic directions, respectively.
This script is used to estimate the latent semantic directions in the latent space *w* that modify particular fingerprint attributes while preserving identity.
```
python fingerprint_attribute_editor/closed_form_factorization.py \
--exp_dir=<OUTPUT FOLDER PATH> \
--checkpoint_path=<PATH TO PRETRAINED STYLEGAN2 MODEL>
```
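The idea behind SeFa's closed-form factorization can be sketched as follows: stack the generator's style-modulation weight matrices and take the eigenvectors of W^T W as candidate semantic directions. This is an illustrative sketch using random stand-ins for the generator weights, not the repository's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-ins for the per-layer modulation weights that map w to styles.
style_weights = [rng.standard_normal((512, 512)) for _ in range(3)]

# Stack all layers' weights into one matrix and normalize its columns.
weight = np.concatenate(style_weights, axis=0)
weight = weight / np.linalg.norm(weight, axis=0, keepdims=True)

# Eigenvectors of W^T W (largest eigenvalues first) are the directions
# along which moving w changes the styles, and hence the image, the most.
eigvals, eigvecs = np.linalg.eigh(weight.T @ weight)
order = np.argsort(eigvals)[::-1]
directions = eigvecs[:, order].T  # one unit-length direction per row
```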
This script is used to apply one of the latent semantic directions in order to edit the generated fingerprint's attributes.
```
python fingerprint_attribute_editor/attribute_editor.py \
--exp_dir=<OUTPUT FOLDER PATH> \
--checkpoint_path=<PATH TO PRETRAINED STYLEGAN2 MODEL> \
--factor_path=<PATH TO factor.pt FILE (the output of closed_form_factorization.py)> \
--index=22 \
--degree=5 \
--n_sample=1 \
--is_gray \
--resize_factor=512 \
--number_of_outputs=5
```
- The output of the `closed_form_factorization.py` script, the `factor.pt` file, will be saved to `exp_dir`.
- The `attribute_editor.py` script will output three different images (backward, original, and forward) for each generated fingerprint.
- See `fingerprint_attribute_editor/attribute_editor_options.py` for all attribute_editor flags.
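The backward/original/forward triplet corresponds to stepping along one semantic direction in both signs. A minimal sketch of the editing step (variable names are assumptions; the actual logic lives in `attribute_editor.py`):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal(512)             # a sampled latent code
direction = rng.standard_normal(512)     # one row of factor.pt, illustratively
direction /= np.linalg.norm(direction)   # unit-length semantic direction
degree = 5.0                             # edit strength (the --degree flag)

# Step the latent in both directions; identity-preserving attribute edit.
w_backward = w - degree * direction
w_forward = w + degree * direction
# Feeding w_backward, w, and w_forward to the generator yields the
# backward, original, and forward images, respectively.
```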
StyleGAN2 implementation:
https://github.com/rosinality/stylegan2-pytorch
Copyright (c) 2019 Kim Seonghyeon
License (MIT) https://github.com/rosinality/stylegan2-pytorch/blob/master/LICENSE
pixel2style2pixel implementation:
https://github.com/eladrich/pixel2style2pixel
Copyright (c) 2020 Elad Richardson, Yuval Alaluf
License (MIT) https://github.com/eladrich/pixel2style2pixel/blob/master/LICENSE
If you use this code for your research, please cite our paper Synthesis and Reconstruction of Fingerprints using Generative Adversarial Networks:
@article{bouzaglo2022synthesis,
title={Synthesis and Reconstruction of Fingerprints using Generative Adversarial Networks},
author={Bouzaglo, Rafael and Keller, Yosi},
journal={arXiv preprint arXiv:2201.06164},
year={2022}
}