Improving Complex Knowledge Base Question Answering via Question-to-Action and Question-to-Question Alignment
December 26, 2022 · Entered Twilight · Conference on Empirical Methods in Natural Language Processing
Repo contents: BFS, README.md, action2text.py, calculate_sample_test_dataset.py, data, model.py, predict_question_rewrite.py, predict_with_beam_search.py, question_decompose.py, requirements.txt, symbolics.py, train.py, train_question_rewrite.py, train_util.py, transform_util.py, utils.py
Authors
Yechun Tang, Xiaoxia Cheng, Weiming Lu
arXiv ID
2212.13036
Category
cs.CL: Computation and Language
Cross-listed
cs.AI
Citations
11
Venue
Conference on Empirical Methods in Natural Language Processing
Repository
https://github.com/TTTTTTTTy/ALCQA
⭐ 7
Last Checked
1 month ago
Abstract
Complex knowledge base question answering can be achieved by converting questions into sequences of predefined actions. However, there is a significant semantic and structural gap between natural language and action sequences, which makes this conversion difficult. In this paper, we introduce an alignment-enhanced complex question answering framework, called ALCQA, which mitigates this gap through question-to-action alignment and question-to-question alignment. We train a question rewriting model to align the question and each action, and utilize a pretrained language model to implicitly align the question and KG artifacts. Moreover, considering that similar questions correspond to similar action sequences, we retrieve top-k similar question-answer pairs at the inference stage through question-to-question alignment and propose a novel reward-guided action sequence selection strategy to select from candidate action sequences. We conduct experiments on the CQA and WQSP datasets, and the results show that our approach outperforms state-of-the-art methods, obtaining a 9.88% improvement in the F1 metric on the CQA dataset. Our source code is available at https://github.com/TTTTTTTTy/ALCQA.
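The question-to-question alignment step in the abstract retrieves the top-k most similar question-answer pairs at inference time. A minimal sketch of that retrieval idea is below, assuming a toy bag-of-words embedding with cosine similarity in place of the paper's trained question encoder; the names `embed`, `top_k_similar`, and `qa_bank` are invented for illustration and do not appear in the ALCQA repository.

```python
from collections import Counter
import math

def embed(text):
    # Toy bag-of-words embedding: lowercase token counts.
    # (ALCQA would use a learned encoder; this is only illustrative.)
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_k_similar(query, qa_pairs, k=2):
    # Return the k stored (question, answer) pairs most similar to `query`.
    q_vec = embed(query)
    ranked = sorted(qa_pairs, key=lambda qa: cosine(q_vec, embed(qa[0])),
                    reverse=True)
    return ranked[:k]

# Hypothetical question-answer bank standing in for the training set.
qa_bank = [
    ("which river flows through paris", "Seine"),
    ("which river flows through london", "Thames"),
    ("who wrote hamlet", "Shakespeare"),
]
print(top_k_similar("what river flows through berlin", qa_bank, k=2))
```

In the paper, the action sequences of the retrieved neighbors then serve as candidates for the reward-guided selection strategy; here the retrieval alone is shown.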
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt · Computation & Language
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
R.I.P.
👻
Ghosted
Language Models are Few-Shot Learners
R.I.P.
👻
Ghosted
RoBERTa: A Robustly Optimized BERT Pretraining Approach
R.I.P.
👻
Ghosted
BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension
R.I.P.
👻
Ghosted