Feature Denoising for Improving Adversarial Robustness

December 09, 2018 · Entered Twilight · 🏛 Computer Vision and Pattern Recognition

🌅 TWILIGHT: Old Age
Predates the code-sharing era; a pioneer of its time

"Last commit was 6.0 years ago (≥5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: .github, .gitignore, CODE_OF_CONDUCT.md, CONTRIBUTING.md, INSTRUCTIONS.md, LICENSE, README.md, adv_model.py, inference-example.py, main.py, nets.py, resnet_model.py, slurm, teaser.jpg, third_party, tox.ini

Authors: Cihang Xie, Yuxin Wu, Laurens van der Maaten, Alan Yuille, Kaiming He
arXiv ID: 1812.03411
Category: cs.CV (Computer Vision)
Citations: 992
Venue: Computer Vision and Pattern Recognition
Repository: https://github.com/facebookresearch/ImageNet-Adversarial-Training ⭐ 686
Last Checked: 1 month ago
Abstract
Adversarial attacks on image classification systems present challenges to convolutional networks and opportunities for understanding them. This study suggests that adversarial perturbations on images lead to noise in the features constructed by these networks. Motivated by this observation, we develop new network architectures that increase adversarial robustness by performing feature denoising. Specifically, our networks contain blocks that denoise the features using non-local means or other filters; the entire networks are trained end-to-end. When combined with adversarial training, our feature denoising networks substantially improve the state-of-the-art in adversarial robustness in both white-box and black-box attack settings. On ImageNet, under 10-iteration PGD white-box attacks where prior art has 27.9% accuracy, our method achieves 55.7%; even under extreme 2000-iteration PGD white-box attacks, our method secures 42.6% accuracy. Our method was ranked first in the Competition on Adversarial Attacks and Defenses (CAAD) 2018: it achieved 50.6% classification accuracy on a secret, ImageNet-like test dataset against 48 unknown attackers, surpassing the runner-up approach by ~10%. Code is available at https://github.com/facebookresearch/ImageNet-Adversarial-Training.
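The denoising blocks the abstract describes re-estimate each spatial feature as a weighted average over all positions, wrapped in a residual connection. A minimal NumPy sketch of a dot-product non-local block in this spirit is below; the function name, the `w_out` matrix standing in for the block's 1x1 convolution, and the single-feature-map shape are illustrative assumptions, not the paper's implementation (see the linked repository for that).

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax along the given axis.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def nonlocal_denoise_block(x, w_out=None):
    """Simplified dot-product non-local denoising block (illustrative).

    x: feature map of shape (C, H, W). Each spatial position is
    replaced by a softmax-weighted average of all positions, passed
    through a 1x1 transform (here the hypothetical matrix `w_out`,
    identity by default), and added back via a residual connection.
    """
    C, H, W = x.shape
    feats = x.reshape(C, H * W)        # (C, N), one column per position
    sim = feats.T @ feats              # (N, N) pairwise dot-product affinities
    weights = softmax(sim, axis=-1)    # each row sums to 1 over all positions
    denoised = feats @ weights.T       # (C, N): weighted average per position
    if w_out is None:
        w_out = np.eye(C)              # identity stand-in for the 1x1 conv
    out = w_out @ denoised
    return x + out.reshape(C, H, W)    # residual connection
```

With `w_out` initialized to zeros, the block is an identity mapping, which is how residual denoising blocks can be dropped into a pretrained backbone without disturbing it at the start of training.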
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt: Computer Vision