RelaxLoss: Defending Membership Inference Attacks without Losing Utility

July 12, 2022 · Entered Twilight · 🏛 International Conference on Learning Representations

💤 TWILIGHT: Eternal Rest
Repo abandoned since publication

Repo contents: .gitignore, LICENSE, README.md, data, relaxloss.jpg, requirements.txt, source

Authors: Dingfan Chen, Ning Yu, Mario Fritz
arXiv ID: 2207.05801
Category: cs.LG (Machine Learning)
Cross-listed: cs.CR
Citations: 56
Venue: International Conference on Learning Representations
Repository: https://github.com/DingfanChen/RelaxLoss ⭐ 48
Last Checked: 1 month ago
Abstract
As a long-term threat to the privacy of training data, membership inference attacks (MIAs) emerge ubiquitously in machine learning models. Existing works evidence a strong connection between the distinguishability of the training and testing loss distributions and the model's vulnerability to MIAs. Motivated by these results, we propose a novel training framework based on a relaxed loss (RelaxLoss) with a more achievable learning target, which narrows the generalization gap and reduces privacy leakage. RelaxLoss is applicable to any classification model, with the added benefits of easy implementation and negligible overhead. Through extensive evaluations on five datasets with diverse modalities (images, medical data, transaction records), our approach consistently outperforms state-of-the-art defense mechanisms in terms of both resilience against MIAs and model utility. Our defense is the first that can withstand a wide range of attacks while preserving (or even improving) the target model's utility. Source code is available at https://github.com/DingfanChen/RelaxLoss.
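The core mechanism is compact enough to sketch. Below is a minimal PyTorch-style illustration of the relaxed-loss idea as the abstract describes it: rather than minimizing the training loss toward zero, training keeps it near a more achievable target level, which shrinks the train/test loss gap that MIAs exploit. The function name `relaxloss_step` and the target parameter `alpha` are illustrative assumptions, not the repository's API, and the paper's full algorithm includes additional steps (e.g. posterior flattening) that this sketch omits.

```python
import torch
import torch.nn.functional as F

def relaxloss_step(model, optimizer, x, y, alpha=1.0):
    """One training step sketching the relaxed-loss idea: keep the
    cross-entropy loss near a target level `alpha` instead of driving
    it to zero, narrowing the train/test loss gap that membership
    inference attacks exploit. Simplified; not the authors' exact code.
    """
    optimizer.zero_grad()
    logits = model(x)
    loss = F.cross_entropy(logits, y)
    if loss.item() > alpha:
        # Loss above the target: ordinary gradient descent.
        loss.backward()
    else:
        # Loss at or below the target: gradient ascent pushes it back
        # up, avoiding the near-zero training loss that leaks membership.
        (-loss).backward()
    optimizer.step()
    return loss.item()
```

Calling this in place of a standard descent step (e.g. inside the usual `for x, y in loader:` loop) keeps the training loss hovering around `alpha`, which is the "more achievable learning target" the abstract refers to.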
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt · Machine Learning