
On Improving Adversarial Transferability of Vision Transformers (ICLR'22--Spotlight)

Muzammal Naseer, Kanchana Ranasinghe, Salman Khan, Fahad Khan, Fatih Porikli

Paper (arXiv), Reviews & Response, Video Presentation, Poster

Abstract: Vision transformers (ViTs) process input images as sequences of patches via self-attention, a radically different architecture from convolutional neural networks (CNNs). This makes it interesting to study the adversarial feature space of ViT models and their transferability. In particular, we observe that adversarial patterns found via conventional adversarial attacks show very low black-box transferability even for large ViT models. However, we show that this phenomenon is only due to sub-optimal attack procedures that do not leverage the true representation potential of ViTs. A deep ViT is composed of multiple blocks with a consistent architecture comprising self-attention and feed-forward layers, where each block is capable of independently producing a class token. Formulating an attack using only the last class token (the conventional approach) does not directly leverage the discriminative information stored in the earlier tokens, leading to poor adversarial transferability of ViTs. Using the compositional nature of ViT models, we enhance the transferability of existing attacks by introducing two novel strategies specific to the architecture of ViT models. (i) Self-Ensemble: We propose a method to find multiple discriminative pathways by dissecting a single ViT model into an ensemble of networks. This allows explicitly utilizing class-specific information at each ViT block. (ii) Token Refinement: We then propose to refine the tokens to further enhance the discriminative capacity at each block of the ViT. Our token refinement systematically combines the class tokens with structural information preserved within the patch tokens. An adversarial attack, when applied to such refined tokens within the ensemble of classifiers found in a single vision transformer, has significantly higher transferability and thereby brings out the true generalization potential of the ViT's adversarial space.

Updates & News

Citation

If you find our work, this repository, or the pretrained transformers with refined tokens useful, please consider giving it a star ⭐ and citing our work.

@inproceedings{naseer2022on,
  title={On Improving Adversarial Transferability of Vision Transformers},
  author={Muzammal Naseer and Kanchana Ranasinghe and Salman Khan and Fahad Khan and Fatih Porikli},
  booktitle={International Conference on Learning Representations},
  year={2022},
  url={https://openreview.net/forum?id=D6nH3719vZy}
}

Contents

  1. Contributions
  2. Quickstart
  3. Self-Ensemble
  4. Token Refinement Module (TRM)
  5. Training TRM

Contributions

  1. Self-ensemble: "Your Vision Transformer is secretly an ensemble of classifiers." We propose effective strategies to convert an off-the-shelf Vision Transformer into an ensemble of models (a self-ensemble) by mapping intermediate class tokens to the final classifier (see the sketch right after this list).
  2. Token Refinement: We introduce a simple training approach to refine class and patch tokens, increasing the discriminative ability of these sub-models within a single Vision Transformer.
  3. Our approach paves the way for enhancing the representation capacity of a Vision Transformer's intermediate blocks.
  4. Students and researchers can explore the benefits of our approach in transfer learning or in training a robust ViT. Practitioners can use our method to compress large ViTs with a minimal drop in recognition accuracy.
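
As a rough illustration of the self-ensemble idea, the sketch below runs a timm ViT block by block, reads out the class token after every block, and maps it through the shared final norm and classification head, yielding one classifier per block. This is a minimal sketch, not the repository's implementation, and it assumes a timm-style VisionTransformer whose pos_embed already includes the class-token position (true for the DeiT models used here).

import torch
import timm

# Minimal sketch: treat every block's class token as the input of the shared
# norm + head, so a single ViT behaves like an ensemble of classifiers.
model = timm.create_model("deit_tiny_patch16_224", pretrained=True).eval()

def per_block_logits(x):
    tokens = model.patch_embed(x)                        # (B, N, D) patch tokens
    cls = model.cls_token.expand(x.shape[0], -1, -1)     # (B, 1, D) class token
    tokens = torch.cat((cls, tokens), dim=1) + model.pos_embed
    logits = []
    for block in model.blocks:                           # each block = one sub-model
        tokens = block(tokens)
        cls_after = model.norm(tokens)[:, 0]             # class token after this block
        logits.append(model.head(cls_after))             # shared classification head
    return logits                                        # one logit tensor per block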

Requirements

pip install -r requirements.txt

Quickstart

To directly run demo transfer attacks using the baseline, ensemble, and ensemble-with-TRM strategies, use the following script. The path to the dataset must be updated first.

./scripts/run_attack.sh

Dataset

We use a subset of the ImageNet validation set (5,000 images) containing 5 random samples from each class that are correctly classified by both ResNet50 and ViT-small. This dataset is used for all experiments, and the list of images is provided in data/image_list.json. In the following code, setting the path to the original ImageNet 2012 val set is sufficient; only this subset of images will be used for evaluation.
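
For reference, here is a minimal sketch of how such a subset can be selected from the full validation folder. The repository's loader handles this for you; the exact contents of data/image_list.json (a flat list of file names) are an assumption in this sketch.

import json
import os
from torch.utils.data import DataLoader, Subset
from torchvision import datasets, transforms

DATA_DIR = "/path/to/imagenet/val"              # original ImageNet 2012 val folder

with open("data/image_list.json") as f:
    keep = set(json.load(f))                    # assumed: list of selected file names

tf = transforms.Compose([transforms.Resize(256),
                         transforms.CenterCrop(224),
                         transforms.ToTensor()])
val_set = datasets.ImageFolder(DATA_DIR, transform=tf)
indices = [i for i, (path, _) in enumerate(val_set.samples)
           if os.path.basename(path) in keep]   # keep only the listed 5,000 images
loader = DataLoader(Subset(val_set, indices), batch_size=128, shuffle=False)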

Self-Ensemble Strategy

Run a transfer attack using our self-ensemble strategy as follows. DATA_DIR points to the root directory containing the original ImageNet validation images. We support the attack types FGSM, PGD, MI-FGSM, DIM, and TI by default. Note that any other attack can be applied to ViT models using the self-ensemble strategy.

python test.py \
  --test_dir "$DATA_DIR" \
  --src_model deit_tiny_patch16_224 \
  --tar_model tnt_s_patch16_224  \
  --attack_type mifgsm \
  --eps 16 \
  --index "all" \
  --batch_size 128

For other model families, the pretrained models will have to be downloaded and the paths updated in the relevant files under vit_models.
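
For intuition, the sketch below shows a single FGSM-style step driven by the self-ensemble loss, i.e. cross-entropy summed over the logits of every block. It reuses the hypothetical per_block_logits helper sketched under Contributions; setting --index "all" corresponds to attacking all blocks, and --eps 16 is assumed to be on the 0-255 pixel scale.

import torch
import torch.nn.functional as F

def self_ensemble_fgsm(x, y, eps=16 / 255):
    """One FGSM step on the sum of per-block cross-entropy losses (sketch only)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = sum(F.cross_entropy(logits, y) for logits in per_block_logits(x_adv))
    loss.backward()
    x_adv = x_adv + eps * x_adv.grad.sign()     # ascend the ensemble loss
    return x_adv.clamp(0, 1).detach()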

Token Refinement Module (TRM)

For the self-ensemble attack with TRM, run the following. The same attack-type options are available, and DATA_DIR must point to the data directory.

python test.py \
  --test_dir "$DATA_DIR" \
  --src_model tiny_patch16_224_hierarchical \
  --tar_model tnt_s_patch16_224  \
  --attack_type mifgsm \
  --eps 16 \
  --index "all" \
  --batch_size 128
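
The actual TRM architecture ships with the models above (see vit_models). Purely as a conceptual stand-in for the idea described in the paper, namely combining the class token with structural information preserved in the patch tokens, one could imagine a refinement layer like the following; it is not the TRM used in our experiments.

import torch
import torch.nn as nn

class TokenRefinementSketch(nn.Module):
    """Conceptual stand-in only: fuse the class token with pooled patch tokens."""
    def __init__(self, dim):
        super().__init__()
        self.fuse = nn.Linear(2 * dim, dim)     # learned fusion of the two summaries

    def forward(self, tokens):                  # tokens: (B, 1 + N, D)
        cls_tok, patch_tok = tokens[:, 0], tokens[:, 1:]
        pooled = patch_tok.mean(dim=1)          # structural summary of patch tokens
        return self.fuse(torch.cat([cls_tok, pooled], dim=-1))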

Pretrained Models with TRM Modules

Model    Avg Acc Inc    Pretrained
DeiT-T   12.43          Link
DeiT-S   15.21          Link
DeiT-B   16.70          Link

Average accuracy increase (Avg Acc Inc) refers to the improvement in the discriminative ability of each ViT block, measured by top-1 accuracy on the ImageNet validation set using each block's output. We report the increase after adding TRM, averaged across blocks.
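
A rough sketch of that measurement is shown below. The exact evaluation protocol here is an assumption; logits_fn stands for any per-block classifier, such as the helper sketched under Contributions, evaluated with and without TRM.

import torch

@torch.no_grad()
def per_block_top1(loader, n_blocks, logits_fn, device="cuda"):
    """Top-1 accuracy of each block's classifier over a validation loader."""
    correct, total = torch.zeros(n_blocks), 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        for b, logits in enumerate(logits_fn(x)):
            correct[b] += (logits.argmax(dim=1) == y).sum().item()
        total += y.numel()
    return 100.0 * correct / total    # per-block top-1 (%); compare with vs. without TRM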

Training TRM

To train the TRM module, use the following:

./scripts/train_trm.sh

Set the experiment name (EXP_NAME), which is used for logging and checkpoints, and update DATA_PATH to point to the ImageNet 2012 root directory (containing the train and val folders). We train on a single GPU, initializing the weights from a pre-trained model and updating only the TRM weights.
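
A minimal sketch of that last point is shown below; train_trm.py handles this internally, and both the import path and the TRM parameter naming are assumptions here.

import timm
import torch
import vit_models  # assumed: importing the repo package registers the hierarchical models with timm

model = timm.create_model("small_patch16_224_hierarchical", pretrained=False)
for name, param in model.named_parameters():
    # Freeze the pre-trained backbone; leave only the refinement modules trainable.
    param.requires_grad = "trm" in name.lower()   # assumed TRM parameter naming

optimizer = torch.optim.SGD((p for p in model.parameters() if p.requires_grad),
                            lr=0.01, momentum=0.9)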

To use other models, replace the model name and the pretrained checkpoint path as shown below:

python -m torch.distributed.launch \
  --nproc_per_node=1 \
  --master_port="$RANDOM" \
  --use_env train_trm.py \
  --exp "$EXP_NAME" \
  --model "small_patch16_224_hierarchical" \
  --lr 0.01 \
  --batch-size 256 \
  --start-epoch 0 \
  --epochs 12 \
  --data "$DATA_PATH" \
  --pretrained "https://dl.fbaipublicfiles.com/deit/deit_small_patch16_224-cd65a155.pth" \
  --output_dir "checkpoints/$EXP_NAME"

Acknowledgements

Our code builds on the DeiT repository and the timm library. We thank the authors for releasing their code.
