A Comprehensive Evaluation of Semantic Relation Knowledge of Pretrained Language Models and Humans
December 02, 2024 · Declared Dead · Language Resources and Evaluation
Repo contents: README.md, human_responses.json
Authors
Zhihan Cao, Hiroaki Yamada, Simone Teufel, Takenobu Tokunaga
arXiv ID
2412.01131
Category
cs.CL: Computation & Language
Citations
1
Venue
Language Resources and Evaluation
Repository
https://github.com/hancules/ProbeResponses
Last Checked
1 month ago
Abstract
Recently, much work has concerned itself with what exactly pretrained language models (PLMs) learn about different aspects of language, and how they learn it. One stream of this research investigates the knowledge that PLMs have of semantic relations. However, many aspects of semantic relations have been left unexplored: generally only one relation, hypernymy, has been considered, and previous work did not measure humans' performance on the same task that the PLMs performed. At this point, then, we have only an incomplete view of the extent of these models' semantic relation knowledge. To address this gap, we introduce a comprehensive evaluation framework covering five relations beyond hypernymy: hyponymy, holonymy, meronymy, antonymy, and synonymy. We use five metrics (two newly introduced here) for previously untreated aspects of semantic relation knowledge, namely soundness, completeness, symmetry, prototypicality, and distinguishability. These metrics allow a fair comparison of humans and models on the same task. Our experiments cover six PLMs: four masked and two causal language models. The results reveal a significant knowledge gap between humans and models for all semantic relations, and causal language models, despite their wide use, do not consistently outperform masked language models. Antonymy is the outlier relation on which all models perform reasonably well. The evaluation materials can be found at https://github.com/hancules/ProbeResponses.
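The probing setup the abstract describes can be approximated with off-the-shelf tooling. Below is a minimal sketch of a fill-mask probe for a masked language model; the prompt templates, the bert-base-uncased checkpoint, and the top_k setting are illustrative assumptions, not the authors' actual materials (those are in the linked repository).

```python
# Hypothetical fill-mask probe for semantic relation knowledge.
# All prompts are illustrative guesses, NOT the paper's templates;
# bert-base-uncased is likewise an arbitrary masked-LM choice.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# One toy prompt per relation studied in the paper.
PROMPTS = {
    "hypernymy": "A robin is a type of [MASK].",
    "hyponymy":  "One kind of bird is the [MASK].",
    "holonymy":  "A branch is a part of a [MASK].",
    "meronymy":  "A tree has a [MASK].",
    "antonymy":  "The opposite of hot is [MASK].",
    "synonymy":  "Another word for quick is [MASK].",
}

for relation, prompt in PROMPTS.items():
    # Each prediction carries 'token_str' (the filled-in word)
    # and 'score' (the model's probability for it).
    for pred in fill_mask(prompt, top_k=3):
        print(f"{relation:10s} {prompt:35s} -> {pred['token_str']} ({pred['score']:.3f})")
```

Roughly speaking, a soundness-style metric would then check whether the top predictions are genuine relation targets for the cue word (e.g., against WordNet gold pairs), while a completeness-style metric would check how much of the gold target set the model recovers; the paper's exact definitions, and the causal-LM variant of the probe, differ in detail.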
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt – Computation & Language
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding (R.I.P. · 👻 Ghosted)
- Language Models are Few-Shot Learners (R.I.P. · 👻 Ghosted)
- RoBERTa: A Robustly Optimized BERT Pretraining Approach (R.I.P. · 👻 Ghosted)
- BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension (R.I.P. · 👻 Ghosted)
- Deep contextualized word representations
Died the same way – 🦴 Skeleton Repo
- EuroSAT: A Novel Dataset and Deep Learning Benchmark for Land Use and Land Cover Classification (R.I.P. · 🦴 Skeleton Repo)
- Deep Learning for 3D Point Clouds: A Survey (R.I.P. · 🦴 Skeleton Repo)
- Adversarial Examples: Attacks and Defenses for Deep Learning (R.I.P. · 🦴 Skeleton Repo)