This repo is the initial implementation of the paper "Undetectable Adversarial Examples based on Microscopical Regularization".
To run the adversarial attack in 'best case' mode:
python main_ut.py
To run the adversarial attack in 'average case' mode:
python main_at.py
You can download the dataset from:
https://drive.google.com/file/d/13N7Y2KeEaK4l-KwaDSw7PiK5S5-6T9EQ/view?usp=sharing