This is the PyTorch implementation of Backdoor Attack on Unpaired Medical Image-Text Foundation Models: A Pilot Study on MedCLIP.
In recent years, foundation models (FMs) have solidified their role as cornerstone advancements in the deep learning domain. By extracting intricate patterns from vast datasets, these models consistently achieve state-of-the-art results across a spectrum of downstream tasks, all without necessitating extensive computational resources [1]. Notably, MedCLIP [2], a vision-language contrastive learning-based medical FM, has been designed using unpaired image-text training. While the medical domain has often adopted unpaired training to amplify data [3], the exploration of potential security concerns linked to this approach has not kept pace with its practical usage. Notably, the augmentation capabilities inherent in unpaired training also mean that minor label discrepancies can result in significant model deviations. In this study, we frame this label discrepancy as a backdoor attack problem. We further analyze its impact on medical FMs throughout the FM supply chain. Our evaluation primarily revolves around MedCLIP, emblematic of medical FMs employing the unpaired strategy. We begin with an exploration of vulnerabilities in MedCLIP stemming from unpaired image-text matching, termed BadMatch. BadMatch is achieved using a modest set of wrongly labeled data. Subsequently, we disrupt MedCLIP's contrastive learning through BadDist-assisted BadMatch by introducing a Bad-Distance between the embeddings of clean and poisoned data. Intriguingly, when BadMatch and BadDist are combined, as little as 0.05 percent of misaligned image-text data can yield a staggering 99 percent attack success rate, all the while maintaining MedCLIP's efficacy on untainted data. Additionally, combining BadMatch and BadDist, the attack pipeline remains effective across diverse model designs, datasets, and triggers. Also, our findings reveal that current defense strategies are insufficient in detecting these latent threats in medical FMs' supply chains.
We release our pretrained models below.
Model Name | Link |
---|---|
ViT-COVID-Patch | pytorch_model |
ResNet-RSNA-Patch | pytorch_model |
ViT-COVID-Fourier | pytorch_model |
This project is based on PyTorch 1.10. You can simply follow MedCLIP's environment setup. We also provide an environment.yml.
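For instance, with conda the environment can be created from the provided file (a sketch; the environment name `medclip` is an assumption, so check environment.yml for the actual name):

```shell
# Create the environment from the provided file
conda env create -f environment.yml
# Activate it (name is an assumption; see environment.yml)
conda activate medclip
```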
All of our data and metadata are the same as MedCLIP's; please follow their instructions to download and prepare the data. We provide the CSV metadata below (put the files into the local_data folder).
Dataset Name | Link |
---|---|
MIMIC | mimic-train-meta.csv |
COVID | covid-test-meta.csv |
RSNA | rsna-test-meta.csv |
Note: change /path/to/your/data in each *.csv to the actual folder on your local disk. Before downloading sentence labels from the MIMIC dataset, make sure you have an approved license on PhysioNet, which is required for accessing any MIMIC content.
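One way to substitute the placeholder path across all meta CSVs is a small helper like the sketch below (`localize_meta_csvs` and the data-root value are our own names for illustration, not part of the repo):

```python
from pathlib import Path

def localize_meta_csvs(meta_dir, data_root):
    """Replace the /path/to/your/data placeholder in every meta CSV
    under meta_dir with the actual data root on your disk."""
    for csv_path in Path(meta_dir).glob("*.csv"):
        text = csv_path.read_text()
        csv_path.write_text(text.replace("/path/to/your/data", data_root))

# Example (data root is an assumption; use your own folder):
# localize_meta_csvs("local_data", "/data/medclip")
```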
```shell
python scripts/train.py
```
An example is also given in the script.
```shell
python scripts/zero_shot.py
```
An example is also given in the script.
```python
evaluation1 = MainEvaluator(
    use_vit=True,                         # True to use ViT, False to use ResNet
    backdoor="none",                      # "none" for no backdoor attack, "patch" for BadNets trigger, "fourier" for Fourier trigger
    trigger_size=(32, 32),                # size of the patch-based trigger
    color=(0, 0, 0),                      # color of the patch-based trigger
    position="right_bottom",              # location of the patch-based trigger
    checkpoint="ckpt/pytorch_model.bin",  # path to the checkpoint
)
evaluation1.run_evaluation('covid-test')  # dataset for evaluation
```
If you find our project useful, please cite our paper.
```bibtex
@inproceedings{jin2024backdoor,
  title={Backdoor Attack on Unpaired Medical Image-Text Foundation Models: A Pilot Study on MedCLIP},
  author={Jin, Ruinan and Huang, Chun-Yin and You, Chenyu and Li, Xiaoxiao},
  booktitle={2024 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML)},
  pages={272--285},
  year={2024},
  organization={IEEE}
}
```
Our code and design build on the following open-source repository. Thanks to the great people and their amazing work: MedCLIP
If you have any questions, feel free to submit an issue in this repo (please use the linked repo, as that is the one I monitor) or email me. We are happy to help.