Improving Sequence Modeling Ability of Recurrent Neural Networks via Sememes
October 20, 2019 · Entered Twilight · arXiv.org
"Last commit was 5.0 years ago (โฅ5 year threshold)"
Evidence collected by the PWNC Scanner
Repo contents: LM, README.md, SentEval, adv_attack.py, attack_model.py, data.py, dataset, models.py, mutils.py, new_hownet.txt, new_sememe.txt, sememe.py, train_nli.py, transfer.py, wordnet.py
Authors
Yujia Qin, Fanchao Qi, Sicong Ouyang, Zhiyuan Liu, Cheng Yang, Yasheng Wang, Qun Liu, Maosong Sun
arXiv ID
1910.08910
Category
cs.CL: Computation & Language
Cross-listed
cs.LG, eess.AS
Citations
5
Venue
arXiv.org
Repository
https://github.com/thunlp/SememeRNN
⭐ 6
Last Checked
2 months ago
Abstract
Sememes, the minimum semantic units of human languages, have been successfully utilized in various natural language processing applications. However, most existing studies exploit sememes in specific tasks, and few efforts have been made to utilize sememes more fundamentally. In this paper, we propose to incorporate sememes into recurrent neural networks (RNNs) to improve their sequence modeling ability, which is beneficial to all kinds of downstream tasks. We design three different sememe incorporation methods and employ them in typical RNNs including LSTM, GRU and their bidirectional variants. In evaluation, we use several benchmark datasets, including PTB and WikiText-2 for language modeling, SNLI for natural language inference, and two other datasets for sentiment analysis and paraphrase detection. Experimental results show evident and consistent improvements of our sememe-incorporated models over vanilla RNNs, which proves the effectiveness of our sememe incorporation methods. Moreover, we find the sememe-incorporated models have higher robustness and outperform adversarial training in defending against adversarial attacks. All the code and data of this work can be obtained at https://github.com/thunlp/SememeRNN.
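To make the core idea concrete, the toy PyTorch sketch below shows one plausible way to feed sememe information into an LSTM: average the embeddings of each word's sememes and concatenate the result with the word embedding before the recurrent step. This is only an illustrative assumption, not one of the paper's three incorporation methods, and all names here (SememeAugmentedLSTM, the tensor shapes, etc.) are hypothetical.

```python
import torch
import torch.nn as nn


class SememeAugmentedLSTM(nn.Module):
    """Toy LSTM whose input at each step is the word embedding fused with
    the averaged embeddings of that word's sememes (simple input-side
    fusion; a hypothetical sketch, not the paper's exact architecture)."""

    def __init__(self, vocab_size, sememe_vocab_size, embed_dim, hidden_dim):
        super().__init__()
        self.word_embed = nn.Embedding(vocab_size, embed_dim)
        # index 0 is reserved as padding for words with fewer sememes
        self.sememe_embed = nn.Embedding(sememe_vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(2 * embed_dim, hidden_dim, batch_first=True)

    def forward(self, word_ids, sememe_ids):
        # word_ids:   (batch, seq_len)
        # sememe_ids: (batch, seq_len, max_sememes), 0-padded per word
        w = self.word_embed(word_ids)                            # (B, T, E)
        s = self.sememe_embed(sememe_ids)                        # (B, T, S, E)
        mask = (sememe_ids != 0).unsqueeze(-1).float()           # ignore padding
        s_avg = (s * mask).sum(2) / mask.sum(2).clamp(min=1.0)   # (B, T, E)
        fused = torch.cat([w, s_avg], dim=-1)                    # (B, T, 2E)
        out, _ = self.lstm(fused)
        return out


# Tiny smoke test with random ids
model = SememeAugmentedLSTM(vocab_size=100, sememe_vocab_size=50,
                            embed_dim=32, hidden_dim=64)
words = torch.randint(1, 100, (2, 7))
sememes = torch.randint(0, 50, (2, 7, 4))
print(model(words, sememes).shape)  # torch.Size([2, 7, 64])
```

The same fusion could of course be applied inside GRU cells or run bidirectionally; the paper's actual methods and data pipeline are in the linked repository.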
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt · Computation & Language
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
R.I.P.
👻
Ghosted
Language Models are Few-Shot Learners
R.I.P.
👻
Ghosted
RoBERTa: A Robustly Optimized BERT Pretraining Approach
R.I.P.
👻
Ghosted
BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension
R.I.P.
👻
Ghosted