This repository is the official implementation of Rethinking Image-Scaling Attacks: The Interplay Between Vulnerabilities in Machine Learning Systems.
[Paper] [Recorded Talk] [Slides]
To set up the environment:

```bash
conda create -n scaling python=3.10
conda activate scaling
conda install pytorch torchvision cudatoolkit=11.3 -c pytorch
pip install -r requirements.txt
```
To prepare ImageNet:

- Download the validation set from https://www.image-net.org
- Extract it to `./static/datasets/imagenet/`
To prepare CelebA:

- Download the test set from https://mmlab.ie.cuhk.edu.hk/projects/CelebA.html
- Extract it to `./static/datasets/celeba/`
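After extracting both datasets, a quick sanity check can catch a misplaced directory before running any attack. This is an illustrative sketch only; the exact layout the repository's data loaders expect is an assumption, not taken from its code:

```python
from pathlib import Path

# Hypothetical sanity check: confirm a dataset root exists and
# contains at least one image file somewhere under it.
def check_dataset(root: str, min_images: int = 1) -> bool:
    path = Path(root)
    if not path.is_dir():
        return False
    images = [p for p in path.rglob("*")
              if p.suffix.lower() in {".jpg", ".jpeg", ".png"}]
    return len(images) >= min_images

for root in ["static/datasets/imagenet", "static/datasets/celeba"]:
    print(root, "ok" if check_dataset(root) else "missing")
```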
To prepare models for ImageNet:

- The naturally trained model is downloaded automatically by torchvision.
- (Optional) Download the robust ResNet-50 model `imagenet_l2_3_0.pt` from the GitHub repo and save it to `./static/models/`
To prepare models for CelebA:

- Download the pre-trained ResNet-34 model from Google Drive.
- Save it to `./static/models/`
To select ImageNet images larger than 672×672 that are correctly classified:

```bash
python -m scripts.select_images -d imagenet
```

To select CelebA images that are correctly classified:

```bash
python -m scripts.select_images -d celeba
```
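The 672×672 threshold follows from the experiments' largest scaling ratio: with the standard 224×224 ImageNet input and a scale factor of 3, a high-resolution image must measure at least 224 × 3 = 672 pixels per side. A minimal sketch of that arithmetic:

```python
# Minimum high-resolution side length for a given scale factor,
# assuming the standard 224x224 ImageNet input size.
INPUT_SIZE = 224

def min_hr_side(scale: int) -> int:
    """Smallest HR image side that downscales exactly to INPUT_SIZE."""
    return INPUT_SIZE * scale

assert min_hr_side(1) == 224  # LR setting (--scale 1)
assert min_hr_side(3) == 672  # HR setting (--scale 3)
```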
To preview all arguments:

```bash
python -m scripts.attack_blackbox --help
```
To run HSJ attack (LR) on ImageNet:

```bash
python -m scripts.attack_blackbox \
  --id 0 --dataset imagenet --model imagenet \
  --scale 1 --defense none \
  --attack hsj --query 25000 \
  --output static/logs --tag demo \
  --gpu 0
```
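HSJ (HopSkipJumpAttack) is decision-based: it only observes the model's top-1 decision and estimates the gradient direction at the decision boundary by Monte Carlo sampling of random probes. The following toy sketch illustrates that estimate on a hypothetical linear boundary; it is not the repository's implementation:

```python
import random

def hsj_gradient_estimate(decide, x, n_samples=500, delta=0.1, seed=0):
    """Monte Carlo estimate of the boundary normal used by HopSkipJump:
    average random probe directions, signed by the binary decision."""
    rng = random.Random(seed)
    dim = len(x)
    grad = [0.0] * dim
    for _ in range(n_samples):
        u = [rng.gauss(0, 1) for _ in range(dim)]
        probe = [xi + delta * ui for xi, ui in zip(x, u)]
        sign = 1.0 if decide(probe) else -1.0  # only the decision is observed
        for i in range(dim):
            grad[i] += sign * u[i]
    return [g / n_samples for g in grad]

# Hypothetical boundary: the point is adversarial iff its first
# coordinate is positive; the estimate should point along axis 0.
decide = lambda p: p[0] > 0
g = hsj_gradient_estimate(decide, [0.0, 0.0, 0.0])
```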
To run HSJ attack (HR) on ImageNet with median filtering defense:

```bash
python -m scripts.attack_blackbox \
  --id 0 --dataset imagenet --model imagenet \
  --scale 3 --defense median \
  --attack hsj --query 25000 \
  --output static/logs --tag demo \
  --gpu 0
```
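Median filtering defends against image-scaling attacks by replacing each pixel with the median of its neighborhood, which suppresses the isolated high-frequency pixels the attack plants in the HR image. A minimal pure-Python 1-D sketch (the actual defense operates in 2-D; the window size here is an assumption):

```python
from statistics import median

def median_filter_1d(signal, window=3):
    """Replace each sample with the median of its neighborhood,
    clamping the window at the borders."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(median(signal[lo:hi]))
    return out

# An isolated spike (analogous to a single perturbed pixel) is suppressed,
# while the smooth background is preserved.
filtered = median_filter_1d([0, 0, 9, 0, 0])
```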
To run HSJ attack (HR) on CelebA with no defense:

```bash
python -m scripts.attack_blackbox \
  --id 0 --dataset celeba --model celeba \
  --scale 3 --defense none \
  --attack hsj --query 25000 \
  --output static/logs --tag demo \
  --gpu 0
```
To run HSJ attack (HR) on Cloud API:

Note: you need to set `TENCENT_ID` and `TENCENT_KEY` as environment variables to access the API.

```bash
python -m scripts.attack_blackbox \
  --id 0 --dataset imagenet --model api \
  --scale 3 --defense none \
  --attack hsj --query 3000 \
  --output static/logs --tag demo \
  --gpu 0
```
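A hedged sketch of how the two credentials could be validated before launching the API attack. The variable names come from the note above; the helper itself is illustrative, not the repository's code:

```python
import os

def load_tencent_credentials():
    """Read the Tencent API credentials from the environment,
    failing early with a clear message if either is missing."""
    try:
        return os.environ["TENCENT_ID"], os.environ["TENCENT_KEY"]
    except KeyError as e:
        raise RuntimeError(f"Missing environment variable: {e.args[0]}") from e
```

Failing before the attack starts avoids burning queries against the paid API with unusable credentials.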
To run the ablation study, use the following flags:

- No SNS: `--tag bad_noise --no-smart-noise`
- No improved median: `--tag bad_noise --no-smart-median`
- No efficient SNS: `--tag eq1 --precise-noise`
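The ablation flags above read as boolean switches that disable a component enabled by default. A sketch of how such flags might be wired with argparse; the flag names are taken from the commands above, but the defaults and wiring are assumptions:

```python
import argparse

# Illustrative parser for the ablation switches. With store_false,
# each feature defaults to enabled and the flag turns it off.
parser = argparse.ArgumentParser()
parser.add_argument("--tag", default="demo")
parser.add_argument("--no-smart-noise", dest="smart_noise", action="store_false")
parser.add_argument("--no-smart-median", dest="smart_median", action="store_false")
parser.add_argument("--precise-noise", action="store_true")

# Parse the "No SNS" ablation from the list above.
args = parser.parse_args(["--tag", "bad_noise", "--no-smart-noise"])
```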
If you find this work useful in your research, please cite our paper with the following BibTeX:

```bibtex
@inproceedings{gao2022rethinking,
  author    = {Yue Gao and Ilia Shumailov and Kassem Fawaz},
  editor    = {Kamalika Chaudhuri and Stefanie Jegelka and Le Song and Csaba Szepesv{\'{a}}ri and Gang Niu and Sivan Sabato},
  title     = {Rethinking Image-Scaling Attacks: The Interplay Between Vulnerabilities in Machine Learning Systems},
  booktitle = {International Conference on Machine Learning, {ICML} 2022, 17-23 July 2022, Baltimore, Maryland, {USA}},
  series    = {Proceedings of Machine Learning Research},
  volume    = {162},
  pages     = {7102--7121},
  publisher = {{PMLR}},
  year      = {2022},
  url       = {https://proceedings.mlr.press/v162/gao22g.html},
  biburl    = {https://dblp.org/rec/conf/icml/GaoSF22.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
Acknowledgements:

- Pretrained Robust Models
- Previous Image-Scaling Attack's Implementation