Old Age
Cognitive Dissonance: Why Do Language Model Outputs Disagree with Internal Representations of Truthfulness?
November 27, 2023 · Entered Twilight · Conference on Empirical Methods in Natural Language Processing
Repo contents: README.md, lm_truthfulness_gpt-j.ipynb, lm_truthfulness_gpt-j_sparse.ipynb, lm_truthfulness_gpt2-xl.ipynb
Authors
Kevin Liu, Stephen Casper, Dylan Hadfield-Menell, Jacob Andreas
arXiv ID
2312.03729
Category
cs.CL: Computation & Language
Cross-listed
cs.AI
Citations
54
Venue
Conference on Empirical Methods in Natural Language Processing
Repository
https://github.com/lingo-mit/lm-truthfulness
⭐ 17
Last Checked
1 month ago
Abstract
Neural language models (LMs) can be used to evaluate the truth of factual statements in two ways: they can be either queried for statement probabilities, or probed for internal representations of truthfulness. Past work has found that these two procedures sometimes disagree, and that probes tend to be more accurate than LM outputs. This has led some researchers to conclude that LMs "lie" or otherwise encode non-cooperative communicative intents. Is this an accurate description of today's LMs, or can query-probe disagreement arise in other ways? We identify three different classes of disagreement, which we term confabulation, deception, and heterogeneity. In many cases, the superiority of probes is simply attributable to better calibration on uncertain answers rather than a greater fraction of correct, high-confidence answers. In some cases, queries and probes perform better on different subsets of inputs, and accuracy can further be improved by ensembling the two. Code is available at github.com/lingo-mit/lm-truthfulness.
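The abstract contrasts two truthfulness signals, the LM's output probability for a statement and a linear probe on its internal representations, and notes that ensembling the two can improve accuracy. A minimal numpy-only sketch of that comparison, using synthetic stand-ins for the query probabilities and hidden states (the paper's actual experiments use real GPT-2 XL and GPT-J activations, not this toy data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic dataset: 200 statements with binary truth labels.
n, d = 200, 16
labels = rng.integers(0, 2, size=n)

# (1) Query: the LM's output probability that a statement is true,
# simulated here as a noisy, imperfectly calibrated signal.
query_prob = np.clip(0.6 * labels + rng.normal(0.2, 0.25, size=n), 0.0, 1.0)

# (2) Probe: a linear classifier fit on (synthetic) hidden states;
# one activation dimension carries the truth signal.
hidden = rng.normal(size=(n, d))
hidden[:, 0] += 2.0 * labels
w, *_ = np.linalg.lstsq(hidden[:100], labels[:100] - 0.5, rcond=None)
probe_prob = np.clip(hidden @ w + 0.5, 0.0, 1.0)

# (3) Ensemble: average the two probability estimates.
ensemble_prob = (query_prob + probe_prob) / 2

for name, p in [("query", query_prob), ("probe", probe_prob),
                ("ensemble", ensemble_prob)]:
    acc = ((p > 0.5) == labels)[100:].mean()  # accuracy on held-out half
    print(f"{name} accuracy: {acc:.2f}")
```

In the paper's framing, much of the probe's advantage comes from better calibration on uncertain answers rather than more correct high-confidence answers; averaging the two scores is the simple ensembling the abstract alludes to.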
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt · Computation & Language
Old Age
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
R.I.P.
Ghosted
Language Models are Few-Shot Learners
R.I.P.
Ghosted
RoBERTa: A Robustly Optimized BERT Pretraining Approach
R.I.P.
Ghosted
BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension
R.I.P.
Ghosted