R.I.P.
👻
Ghosted
How to choose your best allies for a transferable attack?
April 05, 2023 · Entered Twilight · IEEE International Conference on Computer Vision
Repo contents: .gitignore, README.md, data, fit, main.py, requirements.txt, setup.py
Authors
Thibault Maho, Seyed-Mohsen Moosavi-Dezfooli, Teddy Furon
arXiv ID
2304.02312
Category
cs.CR: Cryptography & Security
Cross-listed
cs.AI
Citations
1
Venue
IEEE International Conference on Computer Vision
Repository
https://github.com/t-maho/transferability_measure_fit
⭐ 2
Last Checked
1 month ago
Abstract
The transferability of adversarial examples is a key issue in the security of deep neural networks. The possibility of an adversarial example crafted for a source model fooling another targeted model makes the threat of adversarial attacks more realistic. Measuring transferability is a crucial problem, but the Attack Success Rate alone does not provide a sound evaluation. This paper proposes a new methodology for evaluating transferability by putting distortion in a central position. This new tool shows that transferable attacks may perform far worse than a black box attack if the attacker randomly picks the source model. To address this issue, we propose a new selection mechanism, called FiT, which aims at choosing the best source model with only a few preliminary queries to the target. Our experimental results show that FiT is highly effective at selecting the best source model for multiple scenarios such as single-model attacks, ensemble-model attacks and multiple attacks (Code available at: https://github.com/t-maho/transferability_measure_fit).
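The abstract describes selecting a source model for a transfer attack using only a few preliminary queries to the black-box target. The paper's actual FiT criterion is not reproduced here; below is only a hedged toy sketch of the general idea, where `craft`, `query_target`, and the distortion-based scoring are all illustrative assumptions: each candidate source crafts a handful of adversarial probes, the target is queried once per probe, and the source whose successful probes transfer at the lowest average distortion wins.

```python
# Hypothetical sketch of source-model selection via a few target queries.
# This is NOT the paper's FiT method; the names `craft`, `query_target`,
# and the scoring rule are illustrative assumptions.

def select_source_model(sources, craft, query_target, probes, budget=5):
    """Pick the source model whose adversarial probes best fool the target.

    sources      -- candidate source models
    craft        -- craft(model, x) -> (x_adv, distortion)  [assumed attack]
    query_target -- query_target(x) -> predicted label      [assumed black box]
    probes       -- list of (x, true_label) probe inputs
    budget       -- queries spent on the target per source model
    """
    best_model, best_score = None, float("inf")
    for model in sources:
        total = 0.0
        for x, y in probes[:budget]:
            x_adv, dist = craft(model, x)
            # A transfer succeeds when the target's label flips; failed
            # transfers are penalised with a very large distortion.
            total += dist if query_target(x_adv) != y else 1e9
        score = total / min(budget, len(probes))
        if score < best_score:
            best_model, best_score = model, score
    return best_model
```

The point of the sketch is the query accounting: the attacker spends only `len(sources) * budget` target queries up front, then runs the full attack on the single selected source model.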
Similar Papers
In the same crypt: Cryptography & Security
Membership Inference Attacks against Machine Learning Models
The Limitations of Deep Learning in Adversarial Settings
Practical Black-Box Attacks against Machine Learning
Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks