π
π
Old Age
Ignore Previous Prompt: Attack Techniques For Language Models
November 17, 2022 Β· Entered Twilight Β· π arXiv.org
Repo contents: .gitignore, CODE_OF_CONDUCT.md, CONTRIBUTING.md, LICENSE, README.md, images, notebooks, poetry.lock, promptinject, pyproject.toml, tests
Authors
FΓ‘bio Perez, Ian Ribeiro
arXiv ID
2211.09527
Category
cs.CL: Computation & Language
Cross-listed
cs.AI
Citations
669
Venue
arXiv.org
Repository
https://github.com/agencyenterprise/PromptInject
β 454
Last Checked
1 month ago
Abstract
Transformer-based large language models (LLMs) provide a powerful foundation for natural language tasks in large-scale customer-facing applications. However, studies that explore their vulnerabilities emerging from malicious user interaction are scarce. By proposing PromptInject, a prosaic alignment framework for mask-based iterative adversarial prompt composition, we examine how GPT-3, the most widely deployed language model in production, can be easily misaligned by simple handcrafted inputs. In particular, we investigate two types of attacks -- goal hijacking and prompt leaking -- and demonstrate that even low-aptitude, but sufficiently ill-intentioned agents, can easily exploit GPT-3's stochastic nature, creating long-tail risks. The code for PromptInject is available at https://github.com/agencyenterprise/PromptInject.
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
π Similar Papers
In the same crypt β Computation & Language
π
π
Old Age
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
R.I.P.
π»
Ghosted
Language Models are Few-Shot Learners
R.I.P.
π»
Ghosted
RoBERTa: A Robustly Optimized BERT Pretraining Approach
R.I.P.
π»
Ghosted
BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension
R.I.P.
π»
Ghosted