Understanding the Intrinsic Robustness of Image Distributions using Conditional Generative Models

March 01, 2020 · Entered Twilight · 🏛 International Conference on Artificial Intelligence and Statistics

🌅 TWILIGHT: Old Age
Predates the code-sharing era: a pioneer of its time

"Last commit was 5.0 years ago (โ‰ฅ5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: .gitignore, LICENSE, README.md, build_generator_imagenet.py, build_generator_mnist.py, generative, requirement.txt, test_robustness_imagenet.py, test_robustness_mnist.py, train_classifier

Authors: Xiao Zhang, Jinghui Chen, Quanquan Gu, David Evans
arXiv ID: 2003.00378
Category: cs.LG: Machine Learning
Cross-listed: cs.CR, cs.CV, stat.ML
Citations: 17
Venue: International Conference on Artificial Intelligence and Statistics
Repository: https://github.com/xiaozhanguva/Intrinsic-Rob (⭐ 3)
Last Checked: 1 month ago
Abstract
Starting with Gilmer et al. (2018), several works have demonstrated the inevitability of adversarial examples based on different assumptions about the underlying input probability space. It remains unclear, however, whether these results apply to natural image distributions. In this work, we assume the underlying data distribution is captured by some conditional generative model, and prove intrinsic robustness bounds for a general class of classifiers, solving an open problem posed in Fawzi et al. (2018). Building upon the state-of-the-art conditional generative models, we study the intrinsic robustness of two common image benchmarks under $\ell_2$ perturbations, and show the existence of a large gap between the robustness limits implied by our theory and the adversarial robustness achieved by current state-of-the-art robust models. Code for all our experiments is available at https://github.com/xiaozhanguva/Intrinsic-Rob.
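
For context on what the abstract's "gap" compares: the empirical side of that comparison, a model's adversarial robustness under $\ell_2$ perturbations, is commonly estimated by attacking the model with projected gradient descent (PGD) and measuring how often it still classifies correctly. The sketch below is a minimal, generic illustration of that measurement, not the authors' released code; the `model`, `eps`, `step`, and `steps` values are hypothetical placeholders, and 4D image batches are assumed.

```python
# Minimal sketch: estimate l2 robust accuracy with a PGD attack.
# Generic illustration only -- NOT the paper's method or repo code.
import torch
import torch.nn.functional as F

def l2_pgd_robust_accuracy(model, x, y, eps=0.5, step=0.1, steps=20):
    """Fraction of inputs still classified correctly under l2-bounded PGD.

    Assumes x has shape (N, C, H, W); eps/step/steps are placeholder values.
    """
    model.eval()
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        g = delta.grad.detach()
        # Normalize the gradient per example to unit l2 norm.
        g_norm = g.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
        delta = delta.detach() + step * g / g_norm
        # Project the perturbation back onto the l2 ball of radius eps.
        d_norm = delta.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
        delta = (delta * (eps / d_norm).clamp(max=1.0)).requires_grad_(True)
    preds = model(x + delta.detach()).argmax(dim=1)
    return (preds == y).float().mean().item()
```

Comparing an empirical estimate like this against the upper limit on achievable robustness implied by the paper's theory is what exposes the gap the abstract describes.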

📜 Similar Papers

In the same crypt: Machine Learning