Using Priming to Uncover the Organization of Syntactic Representations in Neural Language Models
September 23, 2019 · Entered Twilight · Conference on Computational Natural Language Learning
"Last commit was 5.0 years ago (≥5 year threshold)"
Evidence collected by the PWNC Scanner
Repo contents: README.md, SupplementaryMaterials.pdf, scripts, stimuli, templates
Authors
Grusha Prasad, Marten van Schijndel, Tal Linzen
arXiv ID
1909.10579
Category
cs.CL: Computation & Language
Citations
56
Venue
Conference on Computational Natural Language Learning
Repository
https://github.com/grushaprasad/RNN-Priming
⭐ 8
Last Checked
1 month ago
Abstract
Neural language models (LMs) perform well on tasks that require sensitivity to syntactic structure. Drawing on the syntactic priming paradigm from psycholinguistics, we propose a novel technique to analyze the representations that enable such success. By establishing a gradient similarity metric between structures, this technique allows us to reconstruct the organization of the LMs' syntactic representational space. We use this technique to demonstrate that LSTM LMs' representations of different types of sentences with relative clauses are organized hierarchically in a linguistically interpretable manner, suggesting that the LMs track abstract properties of the sentence.
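The abstract describes the priming technique only at a high level. The snippet below is a minimal, self-contained sketch of the general idea, not the authors' implementation from the linked repository: fine-tune ("prime") an LSTM LM on sentences sharing one syntactic structure, then measure how much its surprisal drops on sentences with another structure, using that drop as a gradient similarity signal. The toy vocabulary, example sentences, model size, and training settings are all illustrative assumptions.

```python
# Hedged sketch of a priming-based similarity measure between syntactic
# structures. Not the authors' code; all names and hyperparameters are
# illustrative stand-ins.

import copy
import math
import torch
import torch.nn as nn

VOCAB = ["<pad>", "the", "cat", "dog", "that", "chased", "saw", "slept", "ran"]
W2I = {w: i for i, w in enumerate(VOCAB)}

def encode(sent):
    return torch.tensor([W2I[w] for w in sent.split()], dtype=torch.long)

class TinyLM(nn.Module):
    """A very small LSTM language model, standing in for a pretrained LM."""
    def __init__(self, vocab_size, dim=32):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, ids):
        h, _ = self.lstm(self.emb(ids))
        return self.out(h)

def mean_surprisal(lm, sentences):
    """Average per-token surprisal (negative log2 probability) over sentences."""
    lm.eval()
    total, count = 0.0, 0
    with torch.no_grad():
        for sent in sentences:
            ids = encode(sent).unsqueeze(0)
            logits = lm(ids[:, :-1])
            logp = torch.log_softmax(logits, dim=-1)
            tok_logp = logp.gather(-1, ids[:, 1:].unsqueeze(-1)).squeeze(-1)
            total += (-tok_logp / math.log(2)).sum().item()
            count += ids.size(1) - 1
    return total / count

def adapt(lm, sentences, lr=0.1, epochs=5):
    """'Prime' the LM by fine-tuning it on sentences with a given structure."""
    lm.train()
    opt = torch.optim.SGD(lm.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for sent in sentences:
            ids = encode(sent).unsqueeze(0)
            logits = lm(ids[:, :-1])
            loss = loss_fn(logits.squeeze(0), ids[0, 1:])
            opt.zero_grad()
            loss.backward()
            opt.step()
    return lm

def adaptation_effect(lm, prime_sents, target_sents):
    """Similarity proxy: drop in surprisal on the target structure after priming
    on the prime structure (larger drop = more similar structures)."""
    before = mean_surprisal(lm, target_sents)
    after = mean_surprisal(adapt(copy.deepcopy(lm), prime_sents), target_sents)
    return before - after

if __name__ == "__main__":
    lm = TinyLM(len(VOCAB))
    subj_rc = ["the cat that chased the dog ran", "the dog that saw the cat slept"]
    obj_rc = ["the cat that the dog chased ran", "the dog that the cat saw slept"]
    print("adaptation effect (subject RC -> object RC):",
          adaptation_effect(lm, subj_rc, obj_rc))
```

Computing such adaptation effects for every pair of structure types yields a similarity matrix whose clustering can reveal how the LM's syntactic representational space is organized.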
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt · Computation & Language
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding · R.I.P. · 👻 Ghosted
Language Models are Few-Shot Learners · R.I.P. · 👻 Ghosted
RoBERTa: A Robustly Optimized BERT Pretraining Approach · R.I.P. · 👻 Ghosted
BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension · R.I.P. · 👻 Ghosted