Maximum-Entropy Adversarial Data Augmentation for Improved Generalization and Robustness

October 15, 2020 · Entered Twilight · Neural Information Processing Systems

🌅 TWILIGHT: Old Age
Predates the code-sharing era – a pioneer of its time

"Last commit was 5.0 years ago (≥5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: LICENSE, README.md, common, main_cifar.py, main_mnist.py, model_cifar.py, model_mnist.py, models, run_main_cifar10.sh, run_main_mnist.sh

Authors: Long Zhao, Ting Liu, Xi Peng, Dimitris Metaxas
arXiv ID: 2010.08001
Category: cs.LG: Machine Learning
Cross-listed: cs.CV
Citations: 186
Venue: Neural Information Processing Systems
Repository: https://github.com/garyzhao/ME-ADA ⭐ 52
Last Checked: 1 month ago
Abstract
Adversarial data augmentation has shown promise for training robust deep neural networks against unforeseen data shifts or corruptions. However, it is difficult to define heuristics to generate effective fictitious target distributions containing "hard" adversarial perturbations that are largely different from the source distribution. In this paper, we propose a novel and effective regularization term for adversarial data augmentation. We theoretically derive it from the information bottleneck principle, which results in a maximum-entropy formulation. Intuitively, this regularization term encourages perturbing the underlying source distribution to enlarge predictive uncertainty of the current model, so that the generated "hard" adversarial perturbations can improve the model robustness during training. Experimental results on three standard benchmarks demonstrate that our method consistently outperforms the existing state of the art by a statistically significant margin.
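The abstract's core idea, perturbing source examples by maximizing both the training loss and the model's predictive entropy, can be sketched as a small inner-maximization loop. This is an illustrative NumPy reconstruction, not the authors' implementation (see the linked repository): the toy linear-softmax model, `lambda_ent`, the step size `eta`, and the iteration count are all assumptions made for the sketch.

```python
import numpy as np

def softmax(z):
    z = z - z.max()           # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def entropy(p):
    return -np.sum(p * np.log(p + 1e-12))

def me_ada_perturb(x, y, W, lambda_ent=1.0, eta=0.1, n_steps=20):
    """Gradient ascent on J(x) = CE(softmax(W x), y) + lambda * H(softmax(W x)).

    The entropy bonus H enlarges the model's predictive uncertainty at the
    perturbed point, which is the paper's recipe for generating "hard"
    fictitious examples far from the source distribution.
    """
    x = x.copy()
    onehot = np.eye(W.shape[0])[y]
    for _ in range(n_steps):
        p = softmax(W @ x)
        # With z = W x:  d(CE)/dz = p - onehot,  d(H)/dz = -p * (log p + H)
        grad_z = (p - onehot) - lambda_ent * p * (np.log(p + 1e-12) + entropy(p))
        x += eta * (W.T @ grad_z)   # ascend: move x toward a harder example
    return x

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 5))     # toy linear classifier: 3 classes, 5 features
x = rng.normal(size=5)
y = 2
x_adv = me_ada_perturb(x, y, W)
print(-np.log(softmax(W @ x)[y]), -np.log(softmax(W @ x_adv)[y]))
```

In the full method, such perturbed examples are appended to the training set and the model is retrained, alternating in a minimax fashion; only the inner maximization step is sketched here.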
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt – Machine Learning