Attacking the Spike: On the Transferability and Security of Spiking Neural Networks to Adversarial Examples
September 07, 2022 · Declared Dead · 🏛 Neurocomputing
"Paper promises code 'coming soon'"
Evidence collected by the PWNC Scanner
Authors
Nuo Xu, Kaleel Mahmood, Haowen Fang, Ethan Rathbun, Caiwen Ding, Wujie Wen
arXiv ID
2209.03358
Category
cs.NE: Neural and Evolutionary Computing
Cross-listed
cs.AI, cs.CR, cs.CV, cs.LG
Citations
16
Venue
Neurocomputing
Last Checked
1 month ago
Abstract
Spiking neural networks (SNNs) have drawn much attention for their high energy efficiency and recent advances in classification performance. However, unlike traditional deep learning, the robustness of SNNs to adversarial examples remains underexplored. This work advances the adversarial attack side of SNNs and makes three major contributions. First, we show that successful white-box attacks on SNNs strongly depend on the surrogate gradient estimation technique, even for adversarially trained models. Second, using the best single surrogate gradient estimator, we study the transferability of adversarial examples between SNNs and state-of-the-art architectures such as Vision Transformers (ViTs) and CNNs. Our analysis reveals two major gaps: no existing white-box attack leverages multiple surrogate estimators, and no single attack effectively fools both SNNs and non-SNN models simultaneously. Third, we propose the Mixed Dynamic Spiking Estimation (MDSE) attack, which dynamically combines multiple surrogate gradients to overcome these gaps. MDSE produces adversarial examples that fool both SNN and non-SNN models, achieving up to 91.4% higher effectiveness on SNN/ViT ensembles and a 3x boost on adversarially trained SNN ensembles over Auto-PGD. Experiments span three datasets (CIFAR-10, CIFAR-100, ImageNet) and nineteen classifiers, and we will release code and models upon publication.
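The abstract hinges on surrogate gradient estimation: a spiking neuron's firing function is a hard threshold with zero gradient almost everywhere, so any white-box attack must substitute a smooth surrogate derivative in the backward pass, and different surrogates yield different attack gradients. The paper's code was never released, so the sketch below is purely illustrative: three commonly used surrogate shapes (fast sigmoid, triangle, arctan) and a naive averaged "mixed" estimate; the function names, the specific estimator set, and the sharpness parameter `k` are assumptions, not the paper's MDSE implementation.

```python
import numpy as np

def spike(v, threshold=1.0):
    """Non-differentiable forward pass: 1 if the neuron fires, else 0."""
    return (v >= threshold).astype(float)

def surrogate_grad(v, kind="fast_sigmoid", threshold=1.0, k=10.0):
    """Illustrative surrogate derivatives of the step function w.r.t. v.
    These are common shapes from the SNN literature, not the paper's exact set."""
    x = v - threshold
    if kind == "fast_sigmoid":   # derivative of x / (1 + k|x|)
        return 1.0 / (1.0 + k * np.abs(x)) ** 2
    if kind == "triangle":       # piecewise-linear bump around the threshold
        return np.maximum(0.0, 1.0 - k * np.abs(x))
    if kind == "arctan":         # derivative of (1/pi) * arctan(pi*k*x / 2)
        return (k / 2.0) / (1.0 + (np.pi * k * x / 2.0) ** 2)
    raise ValueError(f"unknown surrogate: {kind}")

# Membrane potentials near and away from the threshold.
v = np.array([0.2, 0.9, 1.0, 1.5])
grads = {kind: surrogate_grad(v, kind)
         for kind in ("fast_sigmoid", "triangle", "arctan")}

# The estimators agree at the threshold but diverge away from it, which is
# why an attack built on a single estimator can fail to transfer. A trivial
# way to combine them (unlike MDSE's dynamic weighting) is a plain average:
mixed = np.mean(list(grads.values()), axis=0)
```

The spread among `grads` at a given potential is a rough proxy for how estimator-dependent the attack gradient is at that input; MDSE, per the abstract, combines estimators dynamically rather than with the fixed average shown here.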
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
📜 Similar Papers
In the same crypt — Neural & Evolutionary
- 👻 Ghosted · Progressive Growing of GANs for Improved Quality, Stability, and Variation
- 👻 Ghosted · Learning both Weights and Connections for Efficient Neural Networks
- 👻 Ghosted · LSTM: A Search Space Odyssey
- 👻 Ghosted · A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks
- 👻 Ghosted · An Introduction to Convolutional Neural Networks
Died the same way — ⏳ Coming Soon™
- ⏳ Coming Soon™ · Exploring Simple Siamese Representation Learning
- ⏳ Coming Soon™ · An Analysis of Scale Invariance in Object Detection - SNIP
- ⏳ Coming Soon™ · Class-balanced Grouping and Sampling for Point Cloud 3D Object Detection