Privacy Backdoors: Stealing Data with Corrupted Pretrained Models

March 30, 2024 · Entered Twilight · 🏛 International Conference on Machine Learning

💀 TWILIGHT: Eternal Rest
Repo abandoned since publication

Repo contents: .gitattributes, .gitignore, README.md, analysis, materials, src, supple

Authors: Shanglun Feng, Florian Tramèr
arXiv ID: 2404.00473
Category: cs.CR: Cryptography & Security
Cross-listed: cs.LG
Citations: 30
Venue: International Conference on Machine Learning
Repository: https://github.com/ShanglunFengatETHZ/PrivacyBackdoor ⭐ 50
Last Checked: 1 month ago
Abstract
Practitioners commonly download pretrained machine learning models from open repositories and finetune them to fit specific applications. We show that this practice introduces a new risk of privacy backdoors. By tampering with a pretrained model's weights, an attacker can fully compromise the privacy of the finetuning data. We show how to build privacy backdoors for a variety of models, including transformers, which enable an attacker to reconstruct individual finetuning samples with guaranteed success. We further show that backdoored models allow for tight privacy attacks on models trained with differential privacy (DP). The common optimistic practice of training DP models with loose privacy guarantees is thus insecure if the model is not trusted. Overall, our work highlights a crucial and overlooked supply chain attack on machine learning privacy.
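To make the attack concrete, here is a minimal sketch of the "data trap" idea underlying such backdoors. This is an illustration under simplifying assumptions, not the paper's actual construction: a single attacker-planted ReLU neuron is wired so that if exactly one finetuning sample activates it, the SGD update to its weights is proportional to that sample, and the update to its bias gives the scale factor. All names and values below are hypothetical.

```python
import torch

# Minimal sketch (NOT the paper's exact construction): one attacker-chosen
# "trap" neuron whose gradient update leaks the finetuning sample.

torch.manual_seed(0)
d = 8
x = torch.rand(d)                          # a private finetuning sample

w = torch.zeros(d, requires_grad=True)     # trap weights planted by the attacker
b = torch.tensor(1.0, requires_grad=True)  # bias chosen so the ReLU fires

# One finetuning step on the victim's side (any loss that backpropagates
# a nonzero gradient through the active neuron behaves the same way).
loss = torch.relu(w @ x + b)
loss.backward()

lr = 0.1
delta_w = -lr * w.grad                     # observed change in trap weights
delta_b = -lr * b.grad                     # observed change in trap bias

# Attacker-side recovery: for the active neuron, d(loss)/dw = g * x and
# d(loss)/db = g, so the ratio of the two updates reveals x exactly.
x_recovered = delta_w / delta_b
print(torch.allclose(x_recovered, x))      # True
```

This also hints at the abstract's second claim: an attacker who can reconstruct a planted canary with certainty gets a near-perfect membership inference test, which in turn yields tight empirical lower bounds on a model's DP parameters.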
Community shame:
Not yet rated

📜 Similar Papers

In the same crypt — Cryptography & Security