Client-specific Property Inference against Secure Aggregation in Federated Learning
March 07, 2023 · Declared Dead · 🏛 WPES@CCS
"Paper promises code 'coming soon'"
Evidence collected by the PWNC Scanner
Authors
Raouf Kerkouche, Gergely Ács, Mario Fritz
arXiv ID
2303.03908
Category
cs.CR: Cryptography & Security
Cross-listed
cs.LG
Citations
13
Venue
WPES@CCS
Last Checked
1 month ago
Abstract
Federated learning has become a widely used paradigm for collaboratively training a common model among different participants with the help of a central server that coordinates the training. Although only the model parameters or other model updates are exchanged during federated training instead of the participants' data, many attacks have shown that it is still possible to infer sensitive information such as membership, properties, or even an outright reconstruction of participant data. Although differential privacy is considered an effective solution to protect against privacy attacks, it is also criticized for its negative effect on utility. Another possible defense is secure aggregation, which allows the server to access only the aggregated update instead of each individual one; it is often more appealing because it does not degrade model quality. However, combining the aggregated updates alone, which are generated by a different composition of clients in every round, may still allow the inference of some client-specific information. In this paper, we show that simple linear models can effectively capture client-specific properties from the aggregated model updates alone, owing to the linearity of aggregation. We formulate an optimization problem across rounds in order to infer a tested property of every client from the output of the linear models, for example, whether a client has a specific sample in its training data (membership inference) or whether it misbehaves and attempts to degrade the performance of the common model through poisoning attacks. Our reconstruction technique is completely passive and undetectable. We demonstrate the efficacy of our approach in several scenarios, showing that secure aggregation provides very limited privacy guarantees in practice. The source code will be released upon publication.
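The core observation in the abstract is that aggregation is linear: each round's aggregate is a sum over a different subset of clients, so observing many rounds yields a linear system that can be solved for per-client quantities. Below is a minimal NumPy sketch of that idea, not the paper's actual method: it assumes a hypothetical scalar "property score" per client (standing in for the output of a linear property classifier on that client's update) and recovers every client's score from per-round aggregates via least squares.

```python
import numpy as np

rng = np.random.default_rng(0)
n_clients, n_rounds = 10, 40

# Hypothetical per-client property scores (e.g., what a linear property
# classifier would output on each client's individual update). Unknown to
# the attacker, who only sees aggregates.
true_scores = rng.normal(size=n_clients)

# Participation matrix: row r marks which clients were aggregated in round r.
# The server knows this even under secure aggregation.
P = (rng.random((n_rounds, n_clients)) < 0.5).astype(float)

# Observed per-round aggregate scores: by linearity, each round's score is
# the sum of the participating clients' scores (plus small noise).
y = P @ true_scores + 0.01 * rng.normal(size=n_rounds)

# Passive recovery: solve the least-squares problem P x ~= y across rounds.
est, *_ = np.linalg.lstsq(P, y, rcond=None)
print(np.allclose(est, true_scores, atol=0.1))
```

With enough rounds and varied client subsets, `P` is well-conditioned and the per-client scores are recovered almost exactly, which is why varying the aggregation group across rounds does not by itself protect individual clients.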
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
📜 Similar Papers
In the same crypt — Cryptography & Security
R.I.P.
👻
Ghosted
Membership Inference Attacks against Machine Learning Models
R.I.P.
👻
Ghosted
The Limitations of Deep Learning in Adversarial Settings
R.I.P.
👻
Ghosted
Practical Black-Box Attacks against Machine Learning
R.I.P.
👻
Ghosted
Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks
R.I.P.
👻
Ghosted
Extracting Training Data from Large Language Models
Died the same way — ⏳ Coming Soon™
R.I.P.
⏳
Coming Soon™
Exploring Simple Siamese Representation Learning
R.I.P.
⏳
Coming Soon™
An Analysis of Scale Invariance in Object Detection - SNIP
R.I.P.
⏳
Coming Soon™
Class-balanced Grouping and Sampling for Point Cloud 3D Object Detection
R.I.P.
⏳
Coming Soon™