Beyond Traditional Threats: A Persistent Backdoor Attack on Federated Learning
April 26, 2024 · Entered Twilight · AAAI Conference on Artificial Intelligence
Repo contents: FCBA-visio-show.jpg, README.md, config.py, helper.py, image_helper.py, image_train.py, main.py, models, saved_models, test.py, train.py, utils
Authors
Tao Liu, Yuhang Zhang, Zhu Feng, Zhiqin Yang, Chen Xu, Dapeng Man, Wu Yang
arXiv ID
2404.17617
Category
cs.CR: Cryptography & Security
Cross-listed
cs.AI,
cs.CV,
cs.LG
Citations
35
Venue
AAAI Conference on Artificial Intelligence
Repository
https://github.com/PhD-TaoLiu/FCBA
⭐ 21
Last Checked
1 month ago
Abstract
Backdoors in federated learning are diluted by subsequent benign updates. This is reflected in a significant reduction of the attack success rate as iterations increase, until the attack ultimately fails. We introduce a new metric, attack persistence, to quantify the degree of this weakened backdoor effect. Given that research on improving this property has not been widely noted, we propose a Full Combination Backdoor Attack (FCBA) method. It aggregates more combined trigger information to form a more complete backdoor pattern in the global model. The trained backdoored global model is more resilient to benign updates, leading to a higher attack success rate on the test set. We test on three datasets and evaluate with two models across various settings. FCBA's persistence outperforms SOTA federated learning backdoor attacks. On GTSRB, 120 rounds post-attack, our attack success rate rose over 50% from baseline. The core code of our method is available at https://github.com/PhD-TaoLiu/FCBA.
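The "full combination" idea in the abstract can be illustrated with a minimal sketch: split a global trigger into k local patterns and stamp poisoned samples with every non-empty combination of those patterns, so the aggregated global model sees a more complete backdoor. All names and the trigger layout below are illustrative assumptions, not taken from the FCBA codebase.

```python
from itertools import combinations

def full_combinations(parts):
    """Return every non-empty subset of trigger parts (2^k - 1 total)."""
    combos = []
    for r in range(1, len(parts) + 1):
        combos.extend(combinations(parts, r))
    return combos

def stamp(image, combo, value=1.0):
    """Overlay the selected trigger parts on a copy of the image.

    Each part is a list of (row, col) pixel positions; the original
    image (a list of rows) is left unmodified.
    """
    img = [row[:] for row in image]      # copy each row
    for part in combo:
        for (r, c) in part:
            img[r][c] = value            # set a trigger pixel
    return img

# Hypothetical example: a 4x4 image with two trigger parts in
# opposite corners, yielding 2^2 - 1 = 3 poisoned variants.
image = [[0.0] * 4 for _ in range(4)]
parts = [[(0, 0), (0, 1)], [(3, 2), (3, 3)]]
combos = full_combinations(parts)
poisoned = [stamp(image, c) for c in combos]
```

With k trigger parts this produces 2^k - 1 poisoned variants per image, which is the combinatorial coverage the attack relies on; the paper's actual trigger shapes and poisoning schedule live in the linked repository.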
Similar Papers
In the same crypt — Cryptography & Security
Membership Inference Attacks against Machine Learning Models
The Limitations of Deep Learning in Adversarial Settings
Practical Black-Box Attacks against Machine Learning
Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks