R.I.P.
👻
Ghosted
Adversarial Vulnerability of Randomized Ensembles
June 14, 2022 · Entered Twilight · International Conference on Machine Learning
Repo contents: README.md, architectures.py, archs, attack.py, datasets.py, eval_robustness_bat_sweep.py, eval_robustness_dverge_sweep.py, images, utils.py
Authors
Hassan Dbouk, Naresh R. Shanbhag
arXiv ID
2206.06737
Category
cs.LG: Machine Learning
Cross-listed
cs.CR, cs.CV
Citations
7
Venue
International Conference on Machine Learning
Repository
https://github.com/hsndbk4/ARC
⭐ 10
Last Checked
1 month ago
Abstract
Despite the tremendous success of deep neural networks across various tasks, their vulnerability to imperceptible adversarial perturbations has hindered their deployment in the real world. Recently, works on randomized ensembles have empirically demonstrated significant improvements in adversarial robustness over standard adversarially trained (AT) models with minimal computational overhead, making them a promising solution for safety-critical resource-constrained applications. However, this impressive performance raises the question: Are these robustness gains provided by randomized ensembles real? In this work we address this question both theoretically and empirically. We first establish theoretically that commonly employed robustness evaluation methods such as adaptive PGD provide a false sense of security in this setting. Subsequently, we propose a theoretically-sound and efficient adversarial attack algorithm (ARC) capable of compromising random ensembles even in cases where adaptive PGD fails to do so. We conduct comprehensive experiments across a variety of network architectures, training schemes, datasets, and norms to support our claims, and empirically establish that randomized ensembles are in fact more vulnerable to $\ell_p$-bounded adversarial perturbations than even standard AT models. Our code can be found at https://github.com/hsndbk4/ARC.
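To make the abstract's distinction concrete, here is a minimal sketch of the "adaptive PGD" evaluation it refers to: an $\ell_\infty$ PGD attack that ascends the loss averaged over the ensemble's sampling distribution. This is a toy illustration with binary linear classifiers and hinge loss, not the paper's ARC algorithm; all function names and model choices here are illustrative assumptions.

```python
from typing import List, Tuple

def dot(u: List[float], v: List[float]) -> float:
    return sum(a * b for a, b in zip(u, v))

def expected_loss_grad(x: List[float], y: int,
                       models: List[Tuple[List[float], float]],
                       probs: List[float]) -> List[float]:
    """Gradient (w.r.t. x) of the sampling-averaged hinge loss of a
    randomized ensemble of binary linear classifiers (w, b), y in {-1, +1}."""
    g = [0.0] * len(x)
    for (w, b), p in zip(models, probs):
        if 1.0 - y * (dot(w, x) + b) > 0.0:  # hinge loss is active
            for i in range(len(x)):
                g[i] += p * (-y) * w[i]
    return g

def adaptive_pgd(x0: List[float], y: int,
                 models: List[Tuple[List[float], float]],
                 probs: List[float],
                 eps: float = 0.5, alpha: float = 0.1,
                 steps: int = 20) -> List[float]:
    """l_inf PGD ascending the *expected* loss of the randomized ensemble --
    the adaptive-PGD baseline the abstract argues can give a false sense
    of security. Toy sketch, not the paper's ARC attack."""
    sign = lambda v: (v > 0) - (v < 0)
    x = list(x0)
    for _ in range(steps):
        g = expected_loss_grad(x, y, models, probs)
        x = [xi + alpha * sign(gi) for xi, gi in zip(x, g)]           # ascend
        x = [min(max(xi, x0i - eps), x0i + eps)                       # project
             for xi, x0i in zip(x, x0)]
    return x
```

The paper's point is that attacking this averaged loss can fail to find the worst-case perturbation against the *randomized* classifier, which is why the authors propose ARC as a sounder evaluation.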
Similar Papers
In the same crypt · Machine Learning
XGBoost: A Scalable Tree Boosting System
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
Semi-Supervised Classification with Graph Convolutional Networks
Proximal Policy Optimization Algorithms