Robustness of Rotation-Equivariant Networks to Adversarial Perturbations

February 19, 2018 · Entered Twilight · 🏛 arXiv.org

🌅 TWILIGHT: Old Age
Predates the code-sharing era: a pioneer of its time

"Last commit was 7.0 years ago (≥5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: .gitignore, LICENSE, Makefile, README.rst, demo, docs, illustration-stadv-mnist.png, requirements.txt, setup.py, stadv, tests

Authors: Beranger Dumont, Simona Maggio, Pablo Montalvo
arXiv ID: 1802.06627
Category: cs.CV (Computer Vision)
Cross-listed: cs.CR, cs.LG
Citations: 25
Venue: arXiv.org
Repository: https://github.com/rakutentech/stAdv ⭐ 75
Last Checked: 1 month ago
Abstract
Deep neural networks have been shown to be vulnerable to adversarial examples: very small perturbations of the input that have a dramatic impact on the predictions. A wealth of adversarial attacks, along with distance metrics to quantify the similarity between natural and adversarial images, have been proposed, recently enlarging the scope of adversarial examples from pixel-wise attacks to geometric transformations. In this context, we investigate the robustness to adversarial attacks of new Convolutional Neural Network architectures that provide equivariance to rotations. We found that rotation-equivariant networks are significantly less vulnerable to geometric-based attacks than regular networks on the MNIST, CIFAR-10, and ImageNet datasets.
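The geometric attacks the abstract refers to perturb where pixels are sampled from rather than their values: each pixel is displaced by a small per-pixel flow field, and the adversarial image is obtained by bilinear resampling. As a rough illustration of that idea (a minimal numpy sketch of flow-field warping, not the stAdv repository's actual API; `flow_warp` and its conventions are my own assumptions here):

```python
import numpy as np

def flow_warp(image, flow):
    """Warp a 2-D grayscale image by a per-pixel flow field
    (the spatial-transformation idea behind geometric attacks),
    using bilinear interpolation.
    `flow` has shape (2, H, W): the (dy, dx) displacement added
    to each pixel's sampling location. Hypothetical helper, not
    taken from the stAdv codebase."""
    H, W = image.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    # Sampling coordinates = original grid + flow displacement,
    # clipped so we never read outside the image.
    sy = np.clip(ys + flow[0], 0, H - 1)
    sx = np.clip(xs + flow[1], 0, W - 1)
    y0 = np.floor(sy).astype(int)
    x0 = np.floor(sx).astype(int)
    y1 = np.clip(y0 + 1, 0, H - 1)
    x1 = np.clip(x0 + 1, 0, W - 1)
    wy = sy - y0
    wx = sx - x0
    # Bilinear interpolation of the four neighbouring pixels.
    return ((1 - wy) * (1 - wx) * image[y0, x0]
            + (1 - wy) * wx * image[y0, x1]
            + wy * (1 - wx) * image[y1, x0]
            + wy * wx * image[y1, x1])

# Zero flow leaves the image unchanged; a small smooth non-zero
# flow yields a geometric (not pixel-wise) perturbation.
img = np.arange(16.0).reshape(4, 4)
warped = flow_warp(img, np.zeros((2, 4, 4)))
```

In an actual attack, the flow field would be optimized against the classifier's loss (with a smoothness regularizer) instead of being hand-set as above; this sketch only shows the resampling step.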
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt: Computer Vision