Adversarial Examples as an Input-Fault Tolerance Problem

November 30, 2018 · Entered Twilight · 🏛 arXiv.org

🌅 TWILIGHT: Old Age
Predates the code-sharing era — a pioneer of its time

"Last commit was 7.0 years ago (≥5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: .gitignore, LICENSE, README.md, adef_on_svhn_funcs.py, advex_input_fault_tolerance_nips18_secml.pdf, ckpt, cnn_model_pytorch.py, dataset, deformation.py, ft_plot.py, ft_utils.py, svhn_advex_fault_tolerance_pytorch.ipynb, svhn_generate_plots_for_paper.ipynb

Authors: Angus Galloway, Anna Golubeva, Graham W. Taylor
arXiv ID: 1811.12601
Category: cs.LG (Machine Learning)
Cross-listed: cs.CR, stat.ML
Citations: 0
Venue: arXiv.org
Repository: https://github.com/uoguelph-mlrg/nips18-secml-advex-input-fault
Stars: ⭐ 1
Last Checked: 2 months ago
Abstract
We analyze the adversarial examples problem in terms of a model's fault tolerance with respect to its input. Whereas previous work focuses on arbitrarily strict threat models, i.e., $\epsilon$-perturbations, we consider arbitrary valid inputs and propose an information-based characteristic for evaluating tolerance to diverse input faults.
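The $\epsilon$-perturbation threat model the abstract contrasts against constrains an adversarial input to stay within an $\epsilon$-ball (typically in the $L_\infty$ norm) around the clean input. As an illustrative sketch only (not the authors' code; the function name and values are hypothetical), this constraint amounts to a clipping projection:

```python
import numpy as np

def project_linf(x_adv, x, eps):
    """Project a candidate adversarial input x_adv back into the
    L-infinity eps-ball around the clean input x."""
    return np.clip(x_adv, x - eps, x + eps)

# Toy example: one perturbation component violates the eps budget,
# the projection clips it back; in-budget components are unchanged.
x = np.array([0.2, 0.5, 0.9])
x_adv = x + np.array([0.3, -0.3, 0.05])  # candidate perturbation
eps = 0.1
x_proj = project_linf(x_adv, x, eps)
# max |x_proj - x| is at most eps (up to float rounding)
```

The paper's point is that this budget is arbitrary: faults in real inputs (sensor noise, deformations, valid but unusual images) need not lie in any small $\epsilon$-ball, which motivates evaluating tolerance over arbitrary valid inputs instead.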
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt — Machine Learning