How to Profile Privacy-Conscious Users in Recommender Systems
December 01, 2018 · Entered Twilight · arXiv.org
"No code URL or promise found in abstract"
"Derived repo from GitHub Pages (backfill)"
Evidence collected by the PWNC Scanner
Repo contents: .gitignore, .gitmodules, cfp-ppml20.txt, css, img, index.html, js, ppml18, ppml19
Authors
Fabrice Benhamouda, Marc Joye
arXiv ID
1812.00125
Category
cs.CR: Cryptography & Security
Cross-listed
cs.LG
Citations
0
Venue
arXiv.org
Repository
https://github.com/ppml-workshop/ppml
Last Checked
1 month ago
Abstract
Matrix factorization is a popular method for building a recommender system. In such a system, existing users and items are each associated with a low-dimensional vector called a profile. The profiles of a user and of an item can be combined (via an inner product) to predict the rating that the user would give the item. One important issue in such a system is the so-called cold-start problem: how can a new user learn her profile, so that she can then get accurate recommendations? While a profile can be computed if the user is willing to rate well-chosen items and/or provide supplemental attributes or demographics (such as gender), revealing this additional information is known to allow the analyst of the recommender system to infer much more sensitive personal information. We design a protocol that allows privacy-conscious users to benefit from matrix-factorization-based recommender systems while preserving their privacy. More precisely, our protocol enables a user to learn her profile, and from that to predict ratings, without revealing any personal information. The protocol is secure in the standard model against semi-honest adversaries.
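The non-private baseline the abstract describes can be sketched in a few lines: given fixed item profiles, a cold-start user who rates a handful of items can recover her own profile by least squares, then predict the remaining ratings via inner products. This is a toy illustration of plain matrix factorization only, not the paper's cryptographic protocol; all dimensions, item profiles, and the choice of rated items below are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4                                        # profile dimension (illustrative)
item_profiles = rng.normal(size=(10, d))     # item profiles, assumed already trained
true_user = rng.normal(size=d)               # hidden user profile (for this demo only)

# Cold start: the user rates a few well-chosen items.
rated = [0, 2, 5, 7, 9]
ratings = item_profiles[rated] @ true_user   # noiseless toy ratings

# Learn the user's profile by least squares against the known item profiles.
user_profile, *_ = np.linalg.lstsq(item_profiles[rated], ratings, rcond=None)

# Predict a rating for every item as the inner product of the two profiles.
predictions = item_profiles @ user_profile
```

In the paper's setting, the same profile-learning step is carried out under a secure protocol so the user never reveals her ratings or attributes to the recommender.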
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt – Cryptography & Security
Membership Inference Attacks against Machine Learning Models · R.I.P. 👻 Ghosted
The Limitations of Deep Learning in Adversarial Settings · R.I.P. 👻 Ghosted
Practical Black-Box Attacks against Machine Learning · R.I.P. 👻 Ghosted
Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks · R.I.P. 👻 Ghosted