PGD-pytorch

A PyTorch implementation of "Towards Deep Learning Models Resistant to Adversarial Attacks" (Madry et al.)

Summary

This code is a PyTorch implementation of the PGD (Projected Gradient Descent) attack. The attack is used to fool Inception v3, with a 'Giant Panda' image as the example. You can add other images by placing them in a folder named after their label inside 'data/imagenet'. A minimal sketch of the attack loop follows.
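
For orientation, here is a minimal sketch of an L-infinity PGD attack loop in PyTorch, following the paper's iterated-FGSM-with-projection formulation. The function name, signature, and default hyperparameters are illustrative, not this repository's actual API.

```python
import torch
import torch.nn as nn

def pgd_attack(model, images, labels, eps=8/255, alpha=2/255, steps=40):
    """L-infinity PGD: repeat signed-gradient (FGSM-style) steps and
    project the result back into the eps-ball around the clean images.
    Assumes image tensors are scaled to [0, 1]."""
    loss_fn = nn.CrossEntropyLoss()
    orig = images.clone().detach()
    adv = images.clone().detach()

    for _ in range(steps):
        adv.requires_grad_(True)
        loss = loss_fn(model(adv), labels)
        grad = torch.autograd.grad(loss, adv)[0]
        # Ascend the loss along the gradient sign, then project back
        # into the eps-ball and the valid pixel range.
        adv = adv.detach() + alpha * grad.sign()
        adv = orig + torch.clamp(adv - orig, min=-eps, max=eps)
        adv = torch.clamp(adv, min=0.0, max=1.0)
    return adv
```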

Requirements

  • python==3.6
  • numpy==1.14.2
  • pytorch==1.0.1
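
To load images organized one folder per label, a common pattern is torchvision's ImageFolder. The snippet below is a typical setup, assuming torchvision is installed (implied by the Inception v3 example but not listed above); the repository's notebook may do this differently.

```python
import torchvision.datasets as datasets
import torchvision.transforms as transforms
from torch.utils.data import DataLoader

# Inception v3 expects 299x299 inputs.
transform = transforms.Compose([
    transforms.Resize((299, 299)),
    transforms.ToTensor(),
])

# Each subfolder of 'data/imagenet' becomes one class; note that
# ImageFolder assigns indices alphabetically, so mapping folder names
# to actual ImageNet class IDs has to be handled separately.
dataset = datasets.ImageFolder(root='data/imagenet', transform=transform)
loader = DataLoader(dataset, batch_size=1, shuffle=False)
```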

Important results not in the code

  • Capacity (the size of the network) plays an important role in adversarial training. (p. 9-10)
    • When training on natural examples only, larger capacity increases robustness against one-step perturbations.
    • For PGD adversarial training, small-capacity networks fail.
    • As capacity increases, the model can fit the adversarial examples increasingly well.
    • More capacity and stronger adversaries decrease transferability. (Section B)
  • FGSM adversaries don't increase robustness for a large epsilon (=8). (p. 9-10)
    • The network overfits to the FGSM adversarial examples.
  • Adversarial training with PGD provides a sufficiently strong defense (p. 12-13); a sketch of one such training step follows this list.
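
The last point above is the paper's defense: train on PGD adversarial examples rather than clean ones (the min-max formulation). This training loop is not part of this repository's code; below is a minimal sketch of one training step, reusing the hypothetical pgd_attack from the Summary section.

```python
import torch.nn as nn

def adversarial_training_step(model, images, labels, optimizer):
    """One step of PGD adversarial training: craft adversarial examples,
    then update the model on them instead of the clean batch."""
    model.eval()  # keep batch-norm statistics fixed while crafting the attack
    adv = pgd_attack(model, images, labels)  # hypothetical helper sketched above
    model.train()
    optimizer.zero_grad()
    loss = nn.CrossEntropyLoss()(model(adv), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```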
