Old Age
Can Large Language Models Truly Understand Prompts? A Case Study with Negated Prompts
September 26, 2022 · Entered Twilight · TL4NLP
Repo contents: .gitignore, DATA_README.md, README.md, data, data_downloaders, data_loaders.py, eval_bash.sh, figure1.png, promptsource, requirements.txt, run_configs.txt, score.py, t0, utils.py
Authors
Joel Jang, Seonghyeon Ye, Minjoon Seo
arXiv ID
2209.12711
Category
cs.CL: Computation & Language
Citations
78
Venue
TL4NLP
Repository
https://github.com/joeljang/negated-prompts-for-llms
⭐ 24
Last Checked
1 month ago
Abstract
Previous work has shown that there exists a scaling law between the size of Language Models (LMs) and their zero-shot performance on different downstream NLP tasks. In this work, we show that this phenomenon does not hold when evaluating large LMs on tasks with negated prompts, which instead exhibit an inverse scaling law. We evaluate 9 different tasks with negated prompts on (1) pretrained LMs (OPT & GPT-3) of varying sizes (125M - 175B), (2) LMs further pretrained to generalize to novel prompts (InstructGPT), (3) LMs provided with few-shot examples, and (4) LMs fine-tuned specifically on negated prompts; all LM types perform worse on negated prompts as they scale, and show a large gap from human performance when comparing the average score on the original and negated prompts. By highlighting a critical limitation of existing LMs and methods, we urge the community to develop new approaches to building LMs that actually follow the given instructions. We provide the code and the datasets to explore negated prompts at https://github.com/joeljang/negated-prompts-for-llms
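The evaluation protocol the abstract describes can be illustrated with a minimal sketch: score each candidate answer under a prompt, pick the argmax, and compare accuracy on the original vs. the negated prompt, where negation flips the gold label. Everything below (`toy_lm`, the example data, the scoring interface) is a hypothetical stand-in, not the paper's actual code, which uses log-likelihoods from real LMs such as OPT and GPT-3.

```python
# Minimal sketch of negated-prompt evaluation; score_fn and toy_lm are
# hypothetical stand-ins, not the paper's implementation.

def pick_answer(scores):
    """Zero-shot classification: choose the highest-scoring option."""
    return max(range(len(scores)), key=lambda i: scores[i])

def evaluate(examples, score_fn):
    """Accuracy over (prompt, options, gold_index) triples."""
    correct = sum(pick_answer(score_fn(prompt, options)) == gold
                  for prompt, options, gold in examples)
    return correct / len(examples)

def toy_lm(prompt, options):
    """Toy 'LM' that scores by surface association and ignores negation
    words entirely, mimicking the failure mode the paper reports."""
    association = {"France": 2.0, "Spain": 0.5}
    return [association.get(opt, 0.0) for opt in options]

original = [("Paris is the capital of which country?",
             ["France", "Spain"], 0)]
# Negating the prompt flips the gold label, but the toy LM's scores do
# not change, so its accuracy collapses.
negated = [("Paris is not the capital of which country?",
            ["France", "Spain"], 1)]

print(evaluate(original, toy_lm), evaluate(negated, toy_lm))  # 1.0 0.0
```

The paper's finding is that this gap does not shrink with model scale: larger models score the negation-insensitive answer even more confidently, producing the inverse scaling law.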
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt · Computation & Language
Old Age
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
R.I.P.
👻
Ghosted
Language Models are Few-Shot Learners
R.I.P.
👻
Ghosted
RoBERTa: A Robustly Optimized BERT Pretraining Approach
R.I.P.
👻
Ghosted
BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension
R.I.P.
👻
Ghosted