Ignore Previous Prompt: Attack Techniques For Language Models

November 17, 2022 Β· Entered Twilight Β· πŸ› arXiv.org

πŸ’€ TWILIGHT: Eternal Rest
Repo abandoned since publication

Repo contents: .gitignore, CODE_OF_CONDUCT.md, CONTRIBUTING.md, LICENSE, README.md, images, notebooks, poetry.lock, promptinject, pyproject.toml, tests

Authors FÑbio Perez, Ian Ribeiro arXiv ID 2211.09527 Category cs.CL: Computation & Language Cross-listed cs.AI Citations 669 Venue arXiv.org Repository https://github.com/agencyenterprise/PromptInject ⭐ 454 Last Checked 1 month ago
Abstract
Transformer-based large language models (LLMs) provide a powerful foundation for natural language tasks in large-scale customer-facing applications. However, studies that explore their vulnerabilities emerging from malicious user interaction are scarce. By proposing PromptInject, a prosaic alignment framework for mask-based iterative adversarial prompt composition, we examine how GPT-3, the most widely deployed language model in production, can be easily misaligned by simple handcrafted inputs. In particular, we investigate two types of attacks -- goal hijacking and prompt leaking -- and demonstrate that even low-aptitude, but sufficiently ill-intentioned agents, can easily exploit GPT-3's stochastic nature, creating long-tail risks. The code for PromptInject is available at https://github.com/agencyenterprise/PromptInject.
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

πŸ“œ Similar Papers

In the same crypt β€” Computation & Language

πŸŒ… πŸŒ… Old Age

Attention Is All You Need

Ashish Vaswani, Noam Shazeer, ... (+6 more)

cs.CL πŸ› NeurIPS πŸ“š 166.0K cites 8 years ago