Adversarial Examples: Attacks and Defenses for Deep Learning

December 19, 2017 · Declared Dead · 🏛 IEEE Transactions on Neural Networks and Learning Systems

🦴 CAUSE OF DEATH: Skeleton Repo
Boilerplate only, no real code

Repo contents: readme.md

Authors: Xiaoyong Yuan, Pan He, Qile Zhu, Xiaolin Li
arXiv ID: 1712.07107
Category: cs.LG (Machine Learning)
Cross-listed: cs.CR, cs.CV, stat.ML
Citations: 1.8K
Venue: IEEE Transactions on Neural Networks and Learning Systems
Repository: https://github.com/chbrian/awesome-adversarial-examples-dl ⭐ 265
Last Checked: 1 month ago
Abstract
With rapid progress and significant successes in a wide spectrum of applications, deep learning is being applied in many safety-critical environments. However, deep neural networks have recently been found vulnerable to well-designed input samples, called adversarial examples. Adversarial examples are imperceptible to humans but can easily fool deep neural networks at test/deployment time. This vulnerability has become one of the major risks of applying deep neural networks in safety-critical environments, so attacks and defenses involving adversarial examples have drawn great attention. In this paper, we review recent findings on adversarial examples for deep neural networks, summarize the methods for generating adversarial examples, and propose a taxonomy of these methods. Under the taxonomy, applications of adversarial examples are investigated. We further elaborate on countermeasures against adversarial examples and explore the remaining challenges and potential solutions.
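The canonical generation method the survey covers is the Fast Gradient Sign Method (FGSM) of Goodfellow et al., which perturbs an input by a small step in the direction of the sign of the loss gradient. Below is a minimal sketch of the idea (my own illustration, not code from the paper or its repository), using plain numpy and a toy logistic-regression "network" in place of a deep model:

```python
import numpy as np

def fgsm(x, y, w, b, eps):
    """Fast Gradient Sign Method on a logistic-regression model.

    Returns x_adv = x + eps * sign(dL/dx), where L is the binary
    cross-entropy loss of the model p(x) = sigmoid(w.x + b).
    """
    z = w @ x + b
    p = 1.0 / (1.0 + np.exp(-z))   # sigmoid probability of class 1
    grad_x = (p - y) * w           # dL/dx for binary cross-entropy
    return x + eps * np.sign(grad_x)

# Toy demo: a point that is correctly classified, then perturbed.
rng = np.random.default_rng(0)
w = rng.normal(size=4)             # hypothetical model weights
b = 0.0
x = w.copy()                       # a point safely on the positive side
y = 1.0
x_adv = fgsm(x, y, w, b, eps=2.0)  # eps bounds the per-feature change
```

Here `eps` controls the L-infinity size of the perturbation; the attack moves every feature by exactly `eps` against the true label, which for this toy model pushes `x_adv` across the decision boundary even though each coordinate changes by a bounded amount.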
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt — Machine Learning

Died the same way — 🦴 Skeleton Repo

R.I.P. 🦴 Skeleton Repo

Neural Style Transfer: A Review

Yongcheng Jing, Yezhou Yang, ... (+4 more)

cs.CV ๐Ÿ› IEEE TVCG ๐Ÿ“š 828 cites 8 years ago