Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks

June 08, 2019 · Entered Twilight · 🏛 Neural Information Processing Systems

🌅 TWILIGHT: Old Age
Predates the code-sharing era: a pioneer of its time

"Last commit was 5.0 years ago (≥5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: .dockerignore, .gitattributes, .gitignore, Dockerfile, LICENSE, README.md, attacks.py, classifiers.py, data.py, data, eval.py, exps.sh, images, notebooks, robust_boosting.py, robustml, stump_ensemble.py, train.py, tree_ensemble.py, utils.py

Authors: Maksym Andriushchenko, Matthias Hein
arXiv ID: 1906.03526
Category: cs.LG: Machine Learning
Cross-listed: cs.CR, stat.ML
Citations: 67
Venue: Neural Information Processing Systems
Repository: https://github.com/max-andr/provably-robust-boosting ⭐ 50
Last Checked: 1 month ago
Abstract
The problem of adversarial robustness has been studied extensively for neural networks. However, for boosted decision trees and decision stumps there are almost no results, even though they are widely used in practice (e.g. XGBoost) due to their accuracy, interpretability, and efficiency. We show in this paper that for boosted decision stumps the exact min-max robust loss and test error for an $l_\infty$-attack can be computed in $O(T\log T)$ time per input, where $T$ is the number of decision stumps, and the optimal update step of the ensemble can be done in $O(n^2\,T\log T)$, where $n$ is the number of data points. For boosted trees we show how to efficiently calculate and optimize an upper bound on the robust loss, which leads to state-of-the-art robust test error for boosted trees on MNIST (12.5% for $\epsilon_\infty=0.3$), FMNIST (23.2% for $\epsilon_\infty=0.1$), and CIFAR-10 (74.7% for $\epsilon_\infty=8/255$). Moreover, the robust test error rates we achieve are competitive with those of provably robust convolutional networks. The code of all our experiments is available at http://github.com/max-andr/provably-robust-boosting
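The tractability claim in the abstract rests on one observation: each decision stump reads a single input coordinate, so the worst-case score over an $l_\infty$ ball decomposes coordinate-wise. The sketch below illustrates that decomposition by brute-force candidate enumeration; it is a hypothetical illustration, not the paper's implementation, and the stump representation `(coord, threshold, w_left, w_right)` is an assumption. The paper reaches $O(T\log T)$ per input by sorting thresholds, whereas this candidate enumeration can be quadratic per coordinate in the worst case.

```python
import numpy as np
from collections import defaultdict

def exact_min_score(stumps, x, eps):
    """Exact minimum of a stump-ensemble score f(x') over the l_inf
    ball ||x' - x||_inf <= eps (illustrative sketch only).

    Each stump is (coord, threshold, w_left, w_right): it contributes
    w_left when x'[coord] <= threshold, else w_right.  Because every
    stump depends on a single coordinate, the inner minimization
    splits into independent per-coordinate problems.
    """
    by_coord = defaultdict(list)
    for j, b, wl, wr in stumps:
        by_coord[j].append((b, wl, wr))

    total = 0.0
    for j, group in by_coord.items():
        lo, hi = x[j] - eps, x[j] + eps
        # Candidate attack values for this coordinate: the interval
        # ends, plus both sides of every threshold inside the interval
        # (the score is piecewise constant between thresholds).
        cands = [lo, hi]
        for b, _, _ in group:
            if lo <= b <= hi:
                cands += [b, np.nextafter(b, np.inf)]
        total += min(
            sum(wl if c <= b else wr for b, wl, wr in group)
            for c in cands
        )
    return total
```

For example, a single stump splitting coordinate 0 at threshold 0.5 with leaf values -1/+1, evaluated at x = [0.55]: with eps = 0.01 the threshold is unreachable and the score stays +1, while with eps = 0.1 the attacker can cross it and drive the score to -1.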
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt: Machine Learning