Distributionally Adversarial Attack

August 16, 2018 · Declared Dead · 🏛️ AAAI Conference on Artificial Intelligence

💀 CAUSE OF DEATH: 404 Not Found
Code link is broken/dead
Authors: Tianhang Zheng, Changyou Chen, Kui Ren
arXiv ID: 1808.05537
Category: cs.LG (Machine Learning)
Cross-listed: cs.CR, stat.ML
Citations: 132
Venue: AAAI Conference on Artificial Intelligence
Repository: https://github.com/tianzheng4/Distributionally-Adversarial-Attack
Last Checked: 1 month ago
Abstract
Recent work on adversarial attacks has shown that the Projected Gradient Descent (PGD) adversary is a universal first-order adversary, and that classifiers adversarially trained with PGD are robust against a wide range of first-order attacks. It is worth noting that the original objective of an attack/defense model relies on a data distribution $p(\mathbf{x})$, typically in the form of risk maximization/minimization, e.g., $\max/\min \mathbb{E}_{p(\mathbf{x})}\mathcal{L}(\mathbf{x})$, with $p(\mathbf{x})$ some unknown data distribution and $\mathcal{L}(\cdot)$ a loss function. However, since PGD generates attack samples independently for each data point based on $\mathcal{L}(\cdot)$ alone, the procedure does not necessarily lead to good generalization in terms of risk optimization. In this paper, we address this gap by proposing distributionally adversarial attack (DAA), a framework that solves for an optimal {\em adversarial-data distribution}: a perturbed distribution that satisfies the $L_\infty$ constraint but deviates from the original data distribution so as to maximize the generalization risk. Algorithmically, DAA performs optimization over the space of potential data distributions, which introduces direct dependencies among all data points when generating adversarial samples. DAA is evaluated by attacking state-of-the-art defense models, including the adversarially trained models provided by {\em MIT MadryLab}. Notably, DAA ranks {\em first} on MadryLab's white-box leaderboards, reducing the accuracy of their secret MNIST model to $88.79\%$ (with $L_\infty$ perturbations of $\epsilon = 0.3$) and the accuracy of their secret CIFAR model to $44.71\%$ (with $L_\infty$ perturbations of $\epsilon = 8.0$). Code for the experiments is released at \url{https://github.com/tianzheng4/Distributionally-Adversarial-Attack}.
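For a concrete feel of the distinction the abstract draws, here is a minimal PyTorch sketch (not the authors' released code, which is the dead link above) contrasting vanilla PGD, where each sample is perturbed using only its own loss gradient, with a DAA-flavored update in which the ascent direction for each sample is coupled to the rest of the batch. The RBF-kernel coupling below is an illustrative stand-in for the paper's kernel-based distributional update; `daa_style_attack`, the coupling weight `c`, and all hyperparameters are hypothetical.

```python
# Minimal sketch, not the authors' released implementation: contrasts a
# per-sample PGD step with a batch-coupled, DAA-flavored step. The RBF-kernel
# coupling is an illustrative stand-in for the paper's distributional update.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.3, alpha=0.01, steps=40):
    """Vanilla PGD: each sample is perturbed using only its own loss gradient."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()        # ascend the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)   # project into the L_inf ball
            x_adv = x_adv.clamp(0.0, 1.0)              # keep valid pixel range
    return x_adv.detach()

def daa_style_attack(model, x, y, eps=0.3, alpha=0.01, steps=40, c=0.1):
    """DAA-flavored sketch (hypothetical): the ascent direction for each sample
    also depends on every other sample in the batch via an RBF kernel, rather
    than reproducing the paper's exact functional-gradient update."""
    n = x.size(0)
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            flat = x_adv.view(n, -1)
            g = grad.view(n, -1)
            sq = torch.cdist(flat, flat).pow(2)        # pairwise squared distances
            h = sq.median() + 1e-8                     # median bandwidth heuristic
            k = torch.exp(-sq / h)                     # (n, n) RBF kernel matrix
            drive = k @ g                              # kernel-weighted gradient sharing
            diffs = flat.unsqueeze(1) - flat.unsqueeze(0)            # (n, n, d): x_i - x_j
            repulse = (k.unsqueeze(-1) * diffs).sum(1) * (2.0 / h)   # spreads samples apart
            direction = (drive + c * repulse) / n
            x_adv = x_adv + alpha * direction.view_as(x).sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)   # same L_inf projection as PGD
            x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()
```

In the sketch, the kernel term shares loss gradients across nearby samples and adds a repulsive force that spreads the perturbed batch out, which is the sense in which "all data points" interact when generating adversarial samples; the projection step is identical to PGD's, so the $L_\infty$ constraint is preserved either way.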
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt – Machine Learning

Died the same way – 💀 404 Not Found