A PyTorch implementation of "Towards Deep Learning Models Resistant to Adversarial Attacks"
This code is a PyTorch implementation of the PGD attack.
In this code, the attack is used to fool Inception v3.
'Giant Panda' is used as an example image.
You can add other images by placing them in a folder named after their label under 'data/imagenet'.
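For reference, here is a minimal sketch of the L-infinity PGD attack described in the paper. The helper name `pgd_attack` and the defaults (`eps`, `alpha`, `iters`) are illustrative assumptions, not necessarily the values this repo uses, and images are assumed to be scaled to [0, 1]:

```python
import torch
import torch.nn as nn

def pgd_attack(model, images, labels, eps=8/255, alpha=2/255, iters=40):
    """L-infinity PGD (Madry et al.): iterated FGSM steps projected
    back into the eps-ball around the original input."""
    images = images.clone().detach()
    labels = labels.clone().detach()
    loss_fn = nn.CrossEntropyLoss()

    # Start from the clean image; the paper also uses a random start
    # inside the eps-ball.
    adv = images.clone().detach()

    for _ in range(iters):
        adv.requires_grad = True
        loss = loss_fn(model(adv), labels)
        grad = torch.autograd.grad(loss, adv)[0]

        # Ascend the loss, then project back into the eps-ball and
        # clamp to the valid pixel range [0, 1].
        adv = adv.detach() + alpha * grad.sign()
        delta = torch.clamp(adv - images, min=-eps, max=eps)
        adv = torch.clamp(images + delta, min=0, max=1).detach()

    return adv
```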
- python==3.6
- numpy==1.14.2
- pytorch==1.0.1
- Capacity (the size of the network) plays an important role in adversarial training. (p.9-10)
- When training on natural examples only, more capacity increases robustness against one-step perturbations.
- For PGD adversarial training, small-capacity networks fail.
- As capacity increases, the model can fit the adversarial examples increasingly well.
- More capacity and stronger adversaries decrease transferability. (Section B)
- FGSM adversaries don't increase robustness for large epsilon (=8). (p.9-10)
- The network overfits to FGSM adversarial examples.
- Adversarial training with PGD provides a good enough defense. (p.12-13) (See the training-loop sketch below.)
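Below is a minimal sketch of what PGD adversarial training looks like, reusing the hypothetical `pgd_attack` helper above; `model`, `optimizer`, and `train_loader` are placeholders for your own network, optimizer, and data pipeline:

```python
import torch.nn as nn

criterion = nn.CrossEntropyLoss()

for images, labels in train_loader:
    model.eval()                                   # fix BN/dropout while crafting examples
    adv_images = pgd_attack(model, images, labels)
    model.train()

    optimizer.zero_grad()
    loss = criterion(model(adv_images), labels)    # train on adversarial examples
    loss.backward()
    optimizer.step()
```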
- This repository will no longer be updated.
- Please check out a package of adversarial attacks implemented in PyTorch instead.