Improving Sequence Modeling Ability of Recurrent Neural Networks via Sememes

October 20, 2019 · Entered Twilight · 🏛 arXiv.org

🌅 TWILIGHT: Old Age
Predates the code-sharing era – a pioneer of its time

"Last commit was 5.0 years ago (≥5-year threshold)"

Evidence collected by the PWNC Scanner
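The stage assignment above appears to reduce to a single threshold check on the repository's last commit date. Below is a minimal reconstruction of that rule; the function and stage names are hypothetical, since PWNC's actual logic is not documented on this page.

```python
from datetime import datetime, timezone

TWILIGHT_THRESHOLD_YEARS = 5  # the ≥5-year cutoff quoted in the badge above

def life_stage(last_commit: datetime) -> str:
    """Hypothetical reconstruction of the scanner's age rule.

    Expects a timezone-aware datetime for the last commit.
    """
    age_years = (datetime.now(timezone.utc) - last_commit).days / 365.25
    return "Twilight: Old Age" if age_years >= TWILIGHT_THRESHOLD_YEARS else "Active"
```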

Repo contents: LM, README.md, SentEval, adv_attack.py, attack_model.py, data.py, dataset, models.py, mutils.py, new_hownet.txt, new_sememe.txt, sememe.py, train_nli.py, transfer.py, wordnet.py

Authors: Yujia Qin, Fanchao Qi, Sicong Ouyang, Zhiyuan Liu, Cheng Yang, Yasheng Wang, Qun Liu, Maosong Sun
arXiv ID: 1910.08910
Category: cs.CL: Computation & Language
Cross-listed: cs.LG, eess.AS
Citations: 5
Venue: arXiv.org
Repository: https://github.com/thunlp/SememeRNN ⭐ 6
Last Checked: 2 months ago
Abstract
Sememes, the minimum semantic units of human languages, have been successfully utilized in various natural language processing applications. However, most existing studies exploit sememes in specific tasks, and few efforts have been made to utilize sememes more fundamentally. In this paper, we propose to incorporate sememes into recurrent neural networks (RNNs) to improve their sequence modeling ability, which is beneficial to all kinds of downstream tasks. We design three different sememe incorporation methods and employ them in typical RNNs, including LSTM, GRU and their bidirectional variants. In evaluation, we use several benchmark datasets: PTB and WikiText-2 for language modeling, SNLI for natural language inference, and two further datasets for sentiment analysis and paraphrase detection. Experimental results show evident and consistent improvement of our sememe-incorporated models over vanilla RNNs, which demonstrates the effectiveness of our sememe incorporation methods. Moreover, we find that the sememe-incorporated models are more robust and outperform adversarial training in defending against adversarial attacks. All the code and data of this work can be obtained at https://github.com/thunlp/SememeRNN.
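The abstract names three incorporation methods without spelling them out. As a minimal sketch of the most direct scheme, the snippet below averages a word's sememe embeddings (HowNet-style annotations, as the repo's new_hownet.txt and new_sememe.txt suggest) and concatenates the result to the word embedding before a standard LSTM. This is an illustrative PyTorch sketch, not the authors' implementation; the class, parameter names, and the averaging choice are assumptions, and the actual methods live in the repo's models.py.

```python
import torch
import torch.nn as nn

class SememeConcatLSTM(nn.Module):
    """Hypothetical sketch: average each word's sememe embeddings and
    concatenate the result to the word embedding before a standard LSTM."""

    def __init__(self, vocab_size, n_sememes, word_dim, sememe_dim, hidden_dim):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)
        # index 0 is reserved as padding for words with fewer sememes
        self.sememe_emb = nn.Embedding(n_sememes, sememe_dim, padding_idx=0)
        self.lstm = nn.LSTM(word_dim + sememe_dim, hidden_dim, batch_first=True)

    def forward(self, words, sememes):
        # words: (batch, seq_len); sememes: (batch, seq_len, max_sememes), 0-padded
        w = self.word_emb(words)                     # (B, T, word_dim)
        s = self.sememe_emb(sememes)                 # (B, T, S, sememe_dim)
        mask = (sememes != 0).unsqueeze(-1).float()  # zero out padding slots
        # mean over a word's sememes; clamp avoids division by zero
        s = (s * mask).sum(dim=2) / mask.sum(dim=2).clamp(min=1.0)
        return self.lstm(torch.cat([w, s], dim=-1))  # (output, (h_n, c_n))
```

A deeper variant would modify the LSTM equations themselves rather than the input; the input-level concatenation shown here is only the lowest-effort way to expose sememe knowledge to the recurrent model.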
Community shame: Not yet rated

📜 Similar Papers

In the same crypt – Computation & Language

🌅 Old Age

Attention Is All You Need

Ashish Vaswani, Noam Shazeer, ... (+6 more)

cs.CL ๐Ÿ› NeurIPS ๐Ÿ“š 166.0K cites 8 years ago