Can Large Language Models Truly Understand Prompts? A Case Study with Negated Prompts

September 26, 2022 · Entered Twilight · 🏛 TL4NLP

💤 TWILIGHT: Eternal Rest
Repo abandoned since publication

Repo contents: .gitignore, DATA_README.md, README.md, data, data_downloaders, data_loaders.py, eval_bash.sh, figure1.png, promptsource, requirements.txt, run_configs.txt, score.py, t0, utils.py

Authors: Joel Jang, Seonghyeon Ye, Minjoon Seo
arXiv ID: 2209.12711
Category: cs.CL: Computation & Language
Citations: 78
Venue: TL4NLP
Repository: https://github.com/joeljang/negated-prompts-for-llms ⭐ 24
Last Checked: 1 month ago
Abstract
Previous work has shown that there exists a scaling law between the size of Language Models (LMs) and their zero-shot performance on different downstream NLP tasks. In this work, we show that this phenomenon does not hold when evaluating large LMs on tasks with negated prompts, which instead exhibit an inverse scaling law. We evaluate 9 different tasks with negated prompts on (1) pretrained LMs (OPT & GPT-3) of varying sizes (125M - 175B), (2) LMs further trained to generalize to novel prompts (InstructGPT), (3) LMs provided with few-shot examples, and (4) LMs fine-tuned specifically on negated prompts. All LM types perform worse on negated prompts as they scale, and show a huge gap from human performance when comparing the average score on original and negated prompts. By highlighting this critical limitation of existing LMs and methods, we urge the community to develop new approaches for building LMs that actually follow the given instructions. We provide the code and the datasets for exploring negated prompts at https://github.com/joeljang/negated-prompts-for-llms
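The evaluation setup the abstract describes can be sketched as rank classification: the LM scores each candidate answer for a prompt, and the highest-scoring candidate is the prediction; accuracy is then compared between original and negated prompts. The snippet below is a minimal illustration, not the paper's implementation (that lives in `score.py` of the linked repo): `toy_score` is a hypothetical stand-in for an LM log-likelihood, and it deliberately returns the same scores whether or not the prompt is negated, mimicking the failure mode the paper reports.

```python
def pick_answer(score_fn, prompt, options):
    """Return the candidate answer the scorer ranks highest for this prompt."""
    return max(options, key=lambda opt: score_fn(prompt, opt))

def accuracy(score_fn, examples):
    """examples: iterable of (prompt, candidate_options, gold_answer) triples."""
    examples = list(examples)
    hits = sum(pick_answer(score_fn, prompt, opts) == gold
               for prompt, opts, gold in examples)
    return hits / len(examples)

# Hypothetical scorer: a fixed lookup standing in for LM log-likelihoods.
# It ignores the prompt entirely, so negation cannot change its ranking --
# a caricature of the insensitivity to negation measured in the paper.
FAKE_LOGPROBS = {"Paris": -0.1, "Berlin": -2.3}

def toy_score(prompt, option):
    return FAKE_LOGPROBS[option]

original = ("The capital of France is", ["Paris", "Berlin"], "Paris")
negated = ("The capital of France is not", ["Paris", "Berlin"], "Berlin")

print(accuracy(toy_score, [original]))  # 1.0 on the original prompt
print(accuracy(toy_score, [negated]))   # 0.0 on the negated prompt
```

With a scorer that cannot see the negation, accuracy on negated prompts is exactly the complement of accuracy on original ones, which is the inverse-scaling pattern the authors observe as models grow.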
📜 Similar Papers

In the same crypt · Computation & Language

🌅 Old Age

Attention Is All You Need

Ashish Vaswani, Noam Shazeer, ... (+6 more)

cs.CL ๐Ÿ› NeurIPS ๐Ÿ“š 166.0K cites 8 years ago