Guessing Smart: Biased Sampling for Efficient Black-Box Adversarial Attacks
December 24, 2018 · Entered Twilight · IEEE International Conference on Computer Vision
"Last commit was 6.0 years ago (β₯5 year threshold)"
Evidence collected by the PWNC Scanner
Repo contents: .gitignore, LICENSE, README.md, attacks, bench_settings.py, models, readme-img.png, requirements.txt, run_imagenet_bench.py, utils
Authors
Thomas Brunner, Frederik Diehl, Michael Truong Le, Alois Knoll
arXiv ID
1812.09803
Category
stat.ML: Machine Learning (Stat)
Cross-listed
cs.CR, cs.CV, cs.LG
Citations
125
Venue
IEEE International Conference on Computer Vision
Repository
https://github.com/ttbrunner/biased_boundary_attack
★ 22
Last Checked
1 month ago
Abstract
We consider adversarial examples for image classification in the black-box decision-based setting. Here, an attacker cannot access confidence scores, but only the final label. Most attacks for this scenario are either unreliable or inefficient. Focusing on the latter, we show that a specific class of attacks, Boundary Attacks, can be reinterpreted as a biased sampling framework that gains efficiency from domain knowledge. We identify three such biases, image frequency, regional masks and surrogate gradients, and evaluate their performance against an ImageNet classifier. We show that the combination of these biases outperforms the state of the art by a wide margin. We also showcase an efficient way to attack the Google Cloud Vision API, where we craft convincing perturbations with just a few hundred queries. Finally, the methods we propose have also been found to work very well against strong defenses: Our targeted attack won second place in the NeurIPS 2018 Adversarial Vision Challenge.
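As a rough illustration of the biased-sampling idea described in the abstract (not the authors' implementation; see the repository above for that), the sketch below draws a Boundary-Attack-style perturbation proposal from low-frequency noise and optionally restricts it with a regional mask. All function and parameter names here are hypothetical.

```python
import numpy as np


def low_frequency_noise(shape, freq_scale=8, rng=None):
    """Sample noise on a coarse grid and upsample it, so the perturbation
    concentrates on low image frequencies (the frequency bias)."""
    rng = np.random.default_rng() if rng is None else rng
    h, w, c = shape
    ch = (h + freq_scale - 1) // freq_scale  # coarse grid height (ceil division)
    cw = (w + freq_scale - 1) // freq_scale  # coarse grid width
    coarse = rng.standard_normal((ch, cw, c))
    # Nearest-neighbour upsampling keeps the sketch dependency-free.
    noise = coarse.repeat(freq_scale, axis=0).repeat(freq_scale, axis=1)[:h, :w, :]
    return noise / (np.linalg.norm(noise) + 1e-12)


def biased_proposal(x_adv, x_orig, mask=None, step_size=0.01, rng=None):
    """One Boundary-Attack-style proposal: a low-frequency noise step plus a
    small step towards the original image. `mask` (same shape as the image,
    values in [0, 1]) restricts the perturbation to a region -- the
    regional-mask bias."""
    noise = low_frequency_noise(x_adv.shape, rng=rng)
    if mask is not None:
        noise = noise * mask
        noise = noise / (np.linalg.norm(noise) + 1e-12)
    towards_orig = x_orig - x_adv
    towards_orig = towards_orig / (np.linalg.norm(towards_orig) + 1e-12)
    candidate = x_adv + step_size * noise + step_size * towards_orig
    return np.clip(candidate, 0.0, 1.0)
```

In a full decision-based attack, the candidate would only be kept if the black-box classifier still returns the adversarial label, with the step size adapted on rejection; the paper's third bias, surrogate gradients, would additionally tilt the sampled noise towards the gradient of a substitute model.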
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt · Machine Learning (Stat)
Distilling the Knowledge in a Neural Network · R.I.P. 👻 Ghosted
Layer Normalization · R.I.P. 👻 Ghosted
Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning · R.I.P. 👻 Ghosted
Domain-Adversarial Training of Neural Networks · R.I.P. 👻 Ghosted