This is the PyTorch implementation of "Backdoor Attack is a Devil in Federated GAN-Based Medical Image Synthesis" (SASHIMI workshop at MICCAI 2022).
Deep learning-based image synthesis techniques have been applied in healthcare research to generate medical images that support open research. Training generative adversarial networks (GANs) usually requires large amounts of training data. Federated learning (FL) provides a way to train a central model on distributed data from different medical institutions while keeping the raw data local. However, because the central server cannot inspect the original data directly, FL is vulnerable to backdoor attacks, an adversarial attack carried out by poisoning training data. Most backdoor attack strategies target classification models and centralized settings. In this study, we propose a way of attacking federated GANs (FedGAN) by poisoning the discriminator's training data with a data poisoning strategy commonly used in backdoor attacks on classification models. We demonstrate that adding a small trigger occupying less than 0.5% of the original image size is enough to corrupt the FedGAN model (a minimal illustrative sketch of such trigger poisoning is shown below). Based on the proposed attack, we provide two effective defense strategies: global malicious detection and local training regularization. We show that combining the two defense strategies yields robust medical image generation.
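For intuition only, here is a minimal sketch of the kind of trigger-based data poisoning described above; it is not the repository's exact code, and the function name, patch size, and image size are illustrative assumptions. A small patch is stamped onto the malicious client's real images before they reach the local discriminator.

import torch

def add_trigger(images: torch.Tensor, patch_size: int = 3, trigger_value: float = 1.0) -> torch.Tensor:
    """Stamp a small square trigger into the bottom-right corner of each image.

    images: (N, C, H, W) tensor in whatever range the discriminator expects.
    A 3x3 patch on a 64x64 image covers 9 / 4096, roughly 0.2% of the pixels,
    i.e. well under the 0.5% budget mentioned above.
    """
    poisoned = images.clone()
    poisoned[:, :, -patch_size:, -patch_size:] = trigger_value
    return poisoned

# Example: poison a batch of 64x64 single-channel images.
real_batch = torch.rand(32, 1, 64, 64)
poisoned_batch = add_trigger(real_batch)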
We are continuing to work on this project, so this repository contains all implementations from the paper as well as some additional configurations beyond it. Note that some settings are currently hard-coded, e.g., the malicious client index. Updated code is on the way.
This project is based on PyTorch 1.10. You can simply set up the environment of stylegan2-ada-pytorch. We also provide 'environment.yml', but it may contain more libraries than this project strictly needs, since the project is still in progress.
Train the vanilla FedGAN under the backdoor attack:

python scripts/vanilla.py -n_epochs 200 --batch 32 --attack --save_path ... --data_root ...

Train the vanilla FedGAN under the attack with the global outlier-detection defense:

python scripts/vanilla.py -n_epochs 200 --batch 32 --attack --outlier_detect --save_path ... --data_root ...

Train the FedGAN with local training regularization (gradient penalty) under the attack:

python scripts/gp.py -n_epochs 200 --batch 32 --attack --save_path ... --data_root ...

Train the FedGAN with both defenses (local regularization plus outlier detection) under the attack:

python scripts/gp.py -n_epochs 200 --batch 32 --attack --outlier_detect --save_path ... --data_root ...
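For reference, the sketch below shows one way a server-side global malicious-detection step can work: the flattened discriminator updates of all clients are compared, and any client whose update is an extreme outlier relative to the median update is flagged before aggregation. This is an assumption about the general idea, not the repository's exact implementation, and names such as flag_outlier_clients are hypothetical.

from typing import List
import torch

def flag_outlier_clients(client_updates: List[torch.Tensor], threshold: float = 3.0) -> List[int]:
    """Return indices of clients whose flattened update deviates strongly from the median update."""
    flat = torch.stack([u.flatten() for u in client_updates])   # (num_clients, num_params)
    median = flat.median(dim=0).values                          # element-wise median update
    dists = (flat - median).norm(dim=1)                         # each client's distance to the median
    mad = dists.median()                                        # robust scale estimate
    scores = dists / (mad + 1e-12)
    return [i for i, s in enumerate(scores.tolist()) if s > threshold]

# Example with five toy "updates": client 2 deviates far from the rest.
updates = [torch.zeros(10) + 0.01 * i for i in range(5)]
updates[2] += 5.0
print(flag_outlier_clients(updates, threshold=3.0))  # -> [2]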
If you find our project useful, please cite our paper:
@inproceedings{jin2022backdoor,
title={Backdoor Attack is a Devil in Federated GAN-Based Medical Image Synthesis},
author={Jin, Ruinan and Li, Xiaoxiao},
booktitle={Simulation and Synthesis in Medical Imaging: 7th International Workshop, SASHIMI 2022, Held in Conjunction with MICCAI 2022, Singapore, September 18, 2022, Proceedings},
pages={154--165},
year={2022},
organization={Springer}
}
Our code and design draw on the following open-source repositories. Thanks to the great people behind them and their amazing work.