This repository contains the official PyTorch implementation of the following paper to appear at IEEE Security and Privacy 2022:
SoK: How Robust is Deep Neural Network Image Classification Watermarking?
Nils Lukas, Edward Jiang, Xinda Li, Florian Kerschbaum
https://arxiv.org/abs/2108.04974

Abstract: Deep Neural Network (DNN) watermarking is a method for provenance verification of DNN models. Watermarking should be robust against watermark removal attacks that derive a surrogate model that evades provenance verification. Many watermarking schemes that claim robustness have been proposed, but their robustness has only been validated in isolation against a relatively small set of attacks. There is no systematic, empirical evaluation of these claims against a common, comprehensive set of removal attacks. This uncertainty about a watermarking scheme's robustness makes it difficult to trust its deployment in practice. In this paper, we evaluate whether recently proposed watermarking schemes that claim robustness are robust against a large set of removal attacks. We survey methods from the literature that (i) are known removal attacks, (ii) derive surrogate models but have not been evaluated as removal attacks, and (iii) are novel removal attacks. Weight shifting, transfer learning and smooth retraining are novel removal attacks adapted to the DNN watermarking schemes surveyed in this paper. We propose taxonomies for watermarking schemes and removal attacks. Our empirical evaluation includes an ablation study over sets of parameters for each attack and watermarking scheme on the image classification datasets CIFAR-10 and ImageNet. Surprisingly, our study shows that none of the surveyed watermarking schemes is robust in practice. We find that schemes fail to withstand adaptive attacks and known methods for deriving surrogate models that have not been evaluated as removal attacks. This points to intrinsic flaws in how robustness is currently evaluated. Our evaluation includes a discussion of the runtime of each attack to underpin their practical relevance. While none of the schemes is robust against all attacks, none of the attacks removes all watermarks. We show that attacks can be combined and find combined attacks that remove all watermarks. We show that watermarking schemes need to be evaluated against a more extensive set of removal attacks with a more realistic adversary model. Our source code and a complete dataset of evaluation results will be made publicly available, which allows our conclusions to be independently verified.
All watermarking schemes and removal attacks are configured for the image classification datasets CIFAR-10 (32x32 pixels, 10 classes) and ImageNet (224x224 pixels, 1k classes). We implemented the following watermarking schemes, sorted by their categories:
- Model Independent: Adi, Content, Noise, Unrelated
- Model Dependent: Jia, Frontier Stitching, Blackmarks
- Parameter Encoding: Uchida, DeepSigns, DeepMarks
- Active: DAWN
... and the following removal attacks, sorted by their categories:
- Input Preprocessing: Input Reconstruction, JPEG Compression, Input Quantization, Input Smoothing, Input Noising, Input Flipping, Feature Squeezing
- Model Modification: Adversarial Training, Fine-Tuning (FTLL, FTAL, RTAL, RTLL), Weight Quantization, Label Smoothing, Fine Pruning, Feature Permutation, Weight Pruning, Weight Shifting, Neural Cleanse, Regularization, Neural Laundering, Overwriting
- Model Extraction: Knockoff Nets (Random Selection), Distillation, Transfer Learning, Retraining, Smooth Retraining, Cross-Architecture Retraining
At this point, the Watermark-Robustness-Toolbox is not yet available as a standalone pip package, but we are working on supporting an installation via pip. For now, we describe a manual installation and usage. First, install all dependencies via pip.
$ pip install -r requirements.txt
The following four main scripts provide the entire toolbox's functionality:
- train.py: Pre-trains an unmarked neural network.
- embed.py: Embeds a watermark into a pre-trained neural network.
- steal.py: Performs a removal attack against a watermarked neural network.
- decision_threshold.py: Computes the decision threshold for a watermarking scheme.
We use the mlconfig library to pass configuration hyperparameters to each script. The configuration files used in our paper for CIFAR-10 and ImageNet can be found in the configs/ directory. Configuration files store all hyperparameters needed to reproduce an experiment.
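The snippet below is a minimal sketch of how such a configuration file can be loaded with mlconfig. The file path is taken from the examples below; the entries inside the file depend entirely on its contents, so we make no assumptions about specific keys here.

```python
# Minimal sketch: load a toolbox configuration file with mlconfig.
# mlconfig.load() parses the YAML file into a config object whose entries
# (dataset, model, optimizer settings, ...) are defined by the files under
# configs/ -- no specific keys are assumed here.
import mlconfig

config = mlconfig.load("configs/cifar10/train_configs/resnet.yaml")
print(config)  # dumps every hyperparameter stored in the file
```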
Example 1: Pre-train an unmarked (null) model on CIFAR-10.

$ python train.py --config configs/cifar10/train_configs/resnet.yaml

This creates an outputs directory and saves a model file at outputs/cifar10/null_models/resnet/.
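If you want to sanity-check the result, the sketch below loads the saved checkpoint with plain PyTorch. Treat it as an assumption that best.pth is a standard torch.save checkpoint; its exact contents (a bare state_dict or a dictionary with extra metadata) are determined by the toolbox and not documented here.

```python
# Minimal sketch (assumption: best.pth is a standard PyTorch checkpoint).
import torch

checkpoint = torch.load("outputs/cifar10/null_models/resnet/best.pth",
                        map_location="cpu")
# Depending on how the toolbox saves models, this is either a state_dict or a
# dictionary with additional metadata; print the top-level keys to find out.
if isinstance(checkpoint, dict):
    print(list(checkpoint.keys())[:10])
```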
Example 2: Embed a watermark into the pre-trained model.

$ python embed.py --wm_config configs/cifar10/wm_configs/adi.yaml \
                  --filename outputs/cifar10/null_models/resnet/best.pth

This embeds an Adi watermark into the pre-trained model from Example 1 and saves (i) the watermarked model and (ii) all data needed to read the watermark under outputs/cifar10/wm/adi/00000_adi/.
Example 3: Run a removal attack against the watermarked model from Example 2.

$ python steal.py --attack_config configs/cifar10/attack_configs/ftal.yaml \
                  --wm_dir outputs/cifar10/wm/adi/00000_adi/

This runs the Fine-Tuning (FTAL) removal attack against the watermarked model and creates a surrogate model stored under outputs/cifar10/attacks/ftal/. The directory also contains human-readable debug files, such as the surrogate model's watermark and test accuracies.
Our toolbox currently implements custom data loaders (class WRTDataLoader) for the following datasets:
- CIFAR-10
- ImageNet (needs manual download)
- Omniglot (needs manual download)
- Open Images (needs manual download)
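For reference, the sketch below shows a plain torchvision CIFAR-10 loading pipeline. It is illustrative only: the toolbox's WRTDataLoader classes live in the repository, and the assumption that they are built around a standard (dataset, DataLoader) pair like this one is ours, not a documented guarantee.

```python
# Illustrative only: a standard torchvision CIFAR-10 pipeline, not the
# toolbox's own WRTDataLoader implementation.
import torchvision
import torchvision.transforms as transforms
from torch.utils.data import DataLoader

transform = transforms.ToTensor()
train_set = torchvision.datasets.CIFAR10(root="./data", train=True,
                                         download=True, transform=transform)
train_loader = DataLoader(train_set, batch_size=64, shuffle=True, num_workers=2)
```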
We are actively working on documenting the parameters of each watermarking scheme and removal attack.
At this point, we can only refer to each method's source code (under wrt/defenses/ and wrt/attacks/).
Soon we will host a complete documentation of all parameters, so stay tuned!
We encourage authors of watermarking schemes or removal attacks to implement their methods in the Watermark-Robustness-Toolbox to make them publicly accessible in a unified framework. Our aim is to improve reproducibility, which makes it easier to evaluate a scheme's robustness. Any contributions or suggestions for improvements are welcome and greatly appreciated. This toolbox is maintained as part of a university project by graduate students.
The codebase is based on an early version of the Adversarial-Robustness-Toolbox.
@InProceedings{lukas2022watermarkingsok,
title={SoK: How Robust is Deep Neural Network Image Classification Watermarking?},
author={Lukas, Nils and Jiang, Edward and Li, Xinda and Kerschbaum, Florian},
year={2022},
booktitle={IEEE Symposium on Security and Privacy}
}