R.I.P.
👻
Ghosted
SocialGenPod: Privacy-Friendly Generative AI Social Web Applications with Decentralised Personal Data Stores
March 15, 2024 · Entered Twilight · The Web Conference
Repo contents: .github, .gitignore, LICENSE, README.md, example_docs, pyproject.toml, requirements-llm.txt, requirements.txt, src, startup.sh
Authors
Vidminas Vizgirda, Rui Zhao, Naman Goel
arXiv ID
2403.10408
Category
cs.CR: Cryptography & Security
Cross-listed
cs.CY,
cs.IR,
cs.LG,
cs.SI
Citations
2
Venue
The Web Conference
Repository
https://github.com/Vidminas/socialgenpod/
⭐ 9
Last Checked
1 month ago
Abstract
We present SocialGenPod, a decentralised and privacy-friendly way of deploying generative AI Web applications. Unlike centralised Web and data architectures that keep user data tied to application and service providers, we show how one can use Solid -- a decentralised Web specification -- to decouple user data from generative AI applications. We demonstrate SocialGenPod using a prototype that allows users to converse with different Large Language Models, optionally leveraging Retrieval Augmented Generation to generate answers grounded in private documents stored in any Solid Pod that the user is allowed to access, directly or indirectly. SocialGenPod makes use of Solid access control mechanisms to give users full control of determining who has access to data stored in their Pods. SocialGenPod keeps all user data (chat history, app configuration, personal documents, etc) securely in the user's personal Pod; separate from specific model or application providers. Besides better privacy controls, this approach also enables portability across different services and applications. Finally, we discuss challenges, posed by the large compute requirements of state-of-the-art models, that future research in this area should address. Our prototype is open-source and available at: https://github.com/Vidminas/socialgenpod/.
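The abstract describes a Retrieval Augmented Generation flow: answers are grounded in private documents fetched from a Solid Pod the user can access. A minimal sketch of that retrieval-then-prompt step is shown below; the function names, the toy word-overlap relevance score, and the in-memory document list are illustrative assumptions, not the prototype's actual pipeline (which would fetch documents from a Pod over authenticated HTTP and likely use embedding-based retrieval).

```python
# Illustrative sketch of the RAG step: rank the user's Pod documents by
# relevance to a question, then build an LLM prompt grounded in the top hits.

def score(query: str, doc: str) -> float:
    """Toy relevance score: fraction of query words that appear in the document."""
    q_words = set(query.lower().split())
    d_words = set(doc.lower().split())
    return len(q_words & d_words) / len(q_words) if q_words else 0.0

def retrieve(query: str, pod_documents: list[str], k: int = 2) -> list[str]:
    """Return the k documents most relevant to the query."""
    ranked = sorted(pod_documents, key=lambda d: score(query, d), reverse=True)
    return ranked[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Ground the model's answer in the retrieved private documents."""
    joined = "\n---\n".join(context)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {query}"

if __name__ == "__main__":
    docs = [
        "Meeting notes: project deadline moved to Friday.",
        "Recipe: pancakes need flour, milk, and eggs.",
    ]
    top = retrieve("When is the project deadline?", docs, k=1)
    print(build_prompt("When is the project deadline?", top))
```

In the real system, `pod_documents` would be replaced by resources read from the user's Pod under Solid's access control, so the application never holds the documents outside a session the user has authorised.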
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt – Cryptography & Security
Membership Inference Attacks against Machine Learning Models
The Limitations of Deep Learning in Adversarial Settings
Practical Black-Box Attacks against Machine Learning
Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks