Understanding the Impact of Adversarial Robustness on Accuracy Disparity

November 28, 2022 · Entered Twilight · 🏛 International Conference on Machine Learning

💤 TWILIGHT: Eternal Rest
Repo abandoned since publication

Repo contents: README.md, data_cauchy_2, data_cifar, data_fmnist, data_levy_1.5, data_mnist, data_syn, process_data.py, test_cifar.py, test_cifar_vgg.py, test_fmnist.py, test_mnist.py, test_syn.py, train_cifar.py, train_cifar_vgg.py, train_fmnist.py, train_mnist.py, train_syn.py

Authors: Yuzheng Hu, Fan Wu, Hongyang Zhang, Han Zhao
arXiv ID: 2211.15762
Category: cs.LG (Machine Learning), cross-listed stat.ML
Citations: 11
Venue: International Conference on Machine Learning
Repository: https://github.com/Accuracy-Disparity/AT-on-AD (⭐ 2)
Last Checked: 1 month ago
Abstract
While it has long been empirically observed that adversarial robustness may be at odds with standard accuracy and may have further disparate impacts on different classes, it remains an open question to what extent such observations hold and how class imbalance plays a role. In this paper, we attempt to understand this question of accuracy disparity by taking a closer look at linear classifiers under a Gaussian mixture model. We decompose the impact of adversarial robustness into two parts: an inherent effect that degrades the standard accuracy on all classes due to the robustness constraint, and another caused by the class imbalance ratio, which increases the accuracy disparity compared to standard training. Furthermore, we show that such effects extend beyond the Gaussian mixture model by generalizing our data model to the general family of stable distributions. More specifically, we demonstrate that while the constraint of adversarial robustness consistently degrades the standard accuracy in the balanced class setting, the class imbalance ratio plays a fundamentally different role in accuracy disparity compared to the Gaussian case, due to the heavy tail of the stable distribution. We additionally perform experiments on both synthetic and real-world datasets to corroborate our theoretical findings. Our empirical results also suggest that the implications may extend to nonlinear models over real-world datasets. Our code is publicly available on GitHub at https://github.com/Accuracy-Disparity/AT-on-AD.
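The decomposition the abstract describes (an inherent robustness penalty on all classes, plus an imbalance-driven disparity effect) can be illustrated on a toy example. The sketch below is not the paper's construction: it uses a hypothetical 1-D Gaussian mixture, threshold classifiers fit by grid search, and arbitrary choices of imbalance ratio and perturbation radius `eps`.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup (not the paper's exact model): two 1-D Gaussians
# N(-mu, 1) and N(+mu, 1) with a 1:4 class imbalance.
mu, eps = 1.0, 0.5                      # class mean and perturbation radius
n_minus, n_plus = 2000, 8000
x = np.concatenate([rng.normal(-mu, 1, n_minus), rng.normal(mu, 1, n_plus)])
y = np.concatenate([-np.ones(n_minus), np.ones(n_plus)])

def std_err(t):
    # standard 0-1 error of the threshold classifier sign(x - t)
    return np.mean(np.sign(x - t) != y)

def adv_err(t):
    # adversarial error: each point may move by eps toward the boundary,
    # i.e. the worst-case perturbed input is x - eps * y
    return np.mean(np.sign(x - eps * y - t) != y)

# "Standard training" vs "adversarial training" via grid search over thresholds
ts = np.linspace(-3, 3, 601)
t_std = ts[np.argmin([std_err(t) for t in ts])]
t_adv = ts[np.argmin([adv_err(t) for t in ts])]

for name, t in [("standard", t_std), ("robust", t_adv)]:
    acc_minus = np.mean(np.sign(x[y == -1] - t) == -1)
    acc_plus = np.mean(np.sign(x[y == 1] - t) == 1)
    print(f"{name}: acc(-) = {acc_minus:.3f}  acc(+) = {acc_plus:.3f}  "
          f"disparity = {abs(acc_plus - acc_minus):.3f}")
```

In this setup the robust threshold typically shifts further toward the minority class's mean, so the majority class keeps high standard accuracy while the minority class loses accuracy; that is, disparity grows relative to standard training, consistent with the Gaussian-case analysis summarized above.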
Community shame: not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt — Machine Learning