Retrieval-Augmented Generative Question Answering for Event Argument Extraction
November 14, 2022 · Entered Twilight · Conference on Empirical Methods in Natural Language Processing
Repo contents: .gitignore, README.md, aida_ontology_cleaned.csv, data_toy, event_role_ACE.json, event_role_ACEWIKI_q.json, event_role_ACE_q.json, event_role_KAIROS.json, event_role_WIKI_q.json, pronoun_list.txt, scripts, scripts_acewiki, scripts_fewshot, src, train.py
Authors
Xinya Du, Heng Ji
arXiv ID
2211.07067
Category
cs.CL: Computation & Language
Citations
52
Venue
Conference on Empirical Methods in Natural Language Processing
Repository
https://github.com/xinyadu/RGQA
⭐ 17
Last Checked
1 month ago
Abstract
Event argument extraction has long been studied as a sequential prediction problem with extractive-based methods, tackling each argument in isolation. Although recent work proposes generation-based methods to capture cross-argument dependency, they require generating and post-processing a complicated target sequence (template). Motivated by these observations and recent pretrained language models' capabilities of learning from demonstrations, we propose a retrieval-augmented generative QA model (R-GQA) for event argument extraction. It retrieves the most similar QA pair, augments it as a prompt to the current example's context, and then decodes the arguments as answers. Our approach substantially outperforms prior methods across various settings (i.e., fully supervised, domain transfer, and few-shot learning). Finally, we propose a clustering-based sampling strategy (JointEnc) and conduct a thorough analysis of how different strategies influence few-shot learning performance. The implementations are available at https://github.com/xinyadu/RGQA
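As a rough illustration of the prompting scheme the abstract describes (retrieve the most similar QA pair, prepend it as a demonstration, decode the argument as the answer), here is a minimal sketch. It assumes a sentence-transformers retriever and an off-the-shelf BART generator; the function names, prompt format, and toy training store below are hypothetical and are not taken from the RGQA repository.

```python
# Minimal sketch of retrieval-augmented generative QA for argument extraction:
# 1) retrieve the most similar stored (question, context, answer) demonstration,
# 2) prepend it to the current question + context, 3) generate the answer span.
# The retriever model, generator checkpoint, and prompt layout are assumptions.
from sentence_transformers import SentenceTransformer, util
from transformers import BartForConditionalGeneration, BartTokenizer

encoder = SentenceTransformer("all-MiniLM-L6-v2")            # retriever (assumed)
tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
generator = BartForConditionalGeneration.from_pretrained("facebook/bart-large")

# Toy demonstration store of (question, context, answer) triples (hypothetical).
train_store = [
    ("Who was attacked?", "Rebels attacked the convoy near the border.", "the convoy"),
    ("Who made the purchase?", "Acme Corp bought the startup for $2M.", "Acme Corp"),
]
train_embs = encoder.encode([q + " " + c for q, c, _ in train_store],
                            convert_to_tensor=True)

def retrieve_demo(question: str, context: str):
    """Return the stored QA pair most similar to the current example."""
    query_emb = encoder.encode(question + " " + context, convert_to_tensor=True)
    best = util.cos_sim(query_emb, train_embs).argmax().item()
    return train_store[best]

def answer(question: str, context: str) -> str:
    """Build a demonstration-augmented prompt and decode the argument."""
    dq, dc, da = retrieve_demo(question, context)
    prompt = (f"question: {dq} context: {dc} answer: {da} "
              f"question: {question} context: {context} answer:")
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    out = generator.generate(**inputs, max_new_tokens=16)
    return tokenizer.decode(out[0], skip_special_tokens=True)

print(answer("Who was attacked?", "Militants struck a police station in the city."))
```

With a model fine-tuned on argument-extraction QA pairs, the retrieved demonstration gives the decoder an in-context example of the expected answer format; the off-the-shelf checkpoint above only illustrates the data flow.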
Similar Papers
In the same crypt · Computation & Language
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding · R.I.P. · Ghosted
Language Models are Few-Shot Learners · R.I.P. · Ghosted
RoBERTa: A Robustly Optimized BERT Pretraining Approach · R.I.P. · Ghosted
BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension · R.I.P. · Ghosted