Representation Magnitude has a Liability to Privacy Vulnerability

July 23, 2024 · Declared Dead · 🏛 AAAI/ACM Conference on AI, Ethics, and Society

💀 CAUSE OF DEATH: 404 Not Found
Code link is broken/dead
Authors: Xingli Fang, Jung-Eun Kim
arXiv ID: 2407.16164
Category: cs.LG (Machine Learning)
Cross-listed: cs.AI, cs.CR, cs.CV
Citations: 2
Venue: AAAI/ACM Conference on AI, Ethics, and Society
Repository: https://github.com/JEKimLab/AIES2024_SRCM
Last Checked: 1 month ago
Abstract
Privacy-preserving approaches to machine learning (ML) models have made substantial progress in recent years. However, it remains opaque under which circumstances and conditions a model becomes privacy-vulnerable, which makes it challenging for ML models to maintain both performance and privacy. In this paper, we first explore the disparity between member and non-member data in the representations of models under common training frameworks. We identify how the representation magnitude disparity correlates with privacy vulnerability and address how this correlation impacts privacy vulnerability. Based on these observations, we propose the Saturn Ring Classifier Module (SRCM), a plug-in model-level solution to mitigate membership privacy leakage. Through a confined yet effective representation space, our approach ameliorates models' privacy vulnerability while maintaining generalizability. The code of this work can be found here: https://github.com/JEKimLab/AIES2024_SRCM
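The abstract does not spell out SRCM's mechanism, only that it confines the representation space so that member and non-member magnitudes cannot diverge freely. A minimal illustrative sketch of that idea, assuming a simple norm-clamping rule that keeps each feature vector's L2 norm inside an annulus (a "Saturn ring"); the function and parameter names `confine_to_ring`, `r_inner`, and `r_outer` are hypothetical and may differ from the paper's actual method:

```python
import math

def confine_to_ring(features, r_inner=1.0, r_outer=2.0):
    """Rescale a feature vector so its L2 norm lies in [r_inner, r_outer].

    Hypothetical sketch of a ring-shaped representation constraint: vectors
    with too-large norms are shrunk to the outer radius, too-small norms are
    grown to the inner radius, and norms already inside the band are untouched.
    """
    norm = math.sqrt(sum(x * x for x in features))
    if norm == 0.0:
        # The zero vector has no direction to rescale along; leave it as-is.
        return list(features)
    target = min(max(norm, r_inner), r_outer)  # clamp norm into the band
    scale = target / norm
    return [x * scale for x in features]
```

Bounding the magnitude from both sides is what distinguishes a ring from plain norm clipping: it removes the magnitude gap between member and non-member representations in both directions, which is the correlation the paper links to membership privacy leakage.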
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt — Machine Learning

Died the same way — 💀 404 Not Found