Adversarial Examples as an Input-Fault Tolerance Problem
November 30, 2018 · Entered Twilight · arXiv.org
"Last commit was 7.0 years ago (≥5 year threshold)"
Evidence collected by the PWNC Scanner
Repo contents: .gitignore, LICENSE, README.md, adef_on_svhn_funcs.py, advex_input_fault_tolerance_nips18_secml.pdf, ckpt, cnn_model_pytorch.py, dataset, deformation.py, ft_plot.py, ft_utils.py, svhn_advex_fault_tolerance_pytorch.ipynb, svhn_generate_plots_for_paper.ipynb
Authors
Angus Galloway, Anna Golubeva, Graham W. Taylor
arXiv ID
1811.12601
Category
cs.LG: Machine Learning
Cross-listed
cs.CR, stat.ML
Citations
0
Venue
arXiv.org
Repository
https://github.com/uoguelph-mlrg/nips18-secml-advex-input-fault
⭐ 1
Last Checked
2 months ago
Abstract
We analyze the adversarial examples problem in terms of a model's fault tolerance with respect to its input. Whereas previous work focuses on arbitrarily strict threat models, i.e., $ε$-perturbations, we consider arbitrary valid inputs and propose an information-based characteristic for evaluating tolerance to diverse input faults.
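The $ε$-perturbation threat model the abstract contrasts against constrains an adversary to inputs within an $L_\infty$ ball of radius $ε$ around a clean input. As a minimal illustration of that setup (not the method proposed in this paper), the sketch below crafts an FGSM-style perturbation against a toy logistic-regression model in plain NumPy; the model weights and $ε$ value are arbitrary assumptions for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """Perturb x within an L-infinity ball of radius eps so as to
    increase the logistic loss for the true label y (0 or 1)."""
    p = sigmoid(w @ x + b)
    # Gradient of the cross-entropy loss w.r.t. the input x is (p - y) * w.
    grad_x = (p - y) * w
    # FGSM: take one signed gradient step of size eps per coordinate.
    return x + eps * np.sign(grad_x)

def loss(x, y, w, b):
    """Negative log-likelihood of the true class y under the model."""
    p = sigmoid(w @ x + b)
    return -np.log(p if y == 1 else 1.0 - p)

# Toy model and input (illustrative values, not from the paper).
rng = np.random.default_rng(0)
w = rng.normal(size=4)
b = 0.1
x = rng.normal(size=4)
y = 1

x_adv = fgsm_perturb(x, y, w, b, eps=0.25)

# The perturbation stays inside the eps-ball, yet the loss increases.
max_dev = np.max(np.abs(x_adv - x))
loss_increased = loss(x_adv, y, w, b) > loss(x, y, w, b)
```

The point of contrast drawn in the abstract is that this ball is an arbitrarily strict constraint: a fault-tolerance view instead asks how the model behaves under arbitrary valid inputs, not only those within a small norm ball.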
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt – Machine Learning
XGBoost: A Scalable Tree Boosting System · R.I.P. (Ghosted)
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift · R.I.P. (Ghosted)
Semi-Supervised Classification with Graph Convolutional Networks · R.I.P. (Ghosted)
Proximal Policy Optimization Algorithms · R.I.P. (Ghosted)