Improving Grammatical Error Correction via Pre-Training a Copy-Augmented Architecture with Unlabeled Data

March 01, 2019 · Entered Twilight · 🏛 North American Chapter of the Association for Computational Linguistics

🌅 TWILIGHT: Old Age
Predates the code-sharing era – a pioneer of its time

"Last commit was 6.0 years ago (≥5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: CONTRIBUTING.md, LICENSE, PATENTS, README.md, README_FAIRSEQ.md, align.sh, arch.jpg, config.sh, dicts, docs, eval_lm.py, examples, fairseq.gif, fairseq, fairseq_cli, fairseq_logo.png, gec_scripts, generate.py, generate.sh, interactive.py, interactive.sh, noise.sh, noise_data.py, preprocess.py, preprocess.sh, preprocess_noise_data.sh, pretrain.sh, score.py, scripts, setup.py, tests, train.py, train.sh

Authors: Wei Zhao, Liang Wang, Kewei Shen, Ruoyu Jia, Jingming Liu
arXiv ID: 1903.00138
Category: cs.CL: Computation & Language
Citations: 227
Venue: North American Chapter of the Association for Computational Linguistics
Repository: https://github.com/zhawe01/fairseq-gec ⭐ 251
Last Checked: 1 month ago
Abstract
Neural machine translation systems have become the state-of-the-art approach for the Grammatical Error Correction (GEC) task. In this paper, we propose a copy-augmented architecture for the GEC task that copies unchanged words from the source sentence to the target sentence. Since GEC suffers from not having enough labeled training data to achieve high accuracy, we pre-train the copy-augmented architecture with a denoising auto-encoder using the unlabeled One Billion Benchmark and compare the fully pre-trained model with a partially pre-trained model. This is the first time that copying words from the source context and fully pre-training a sequence-to-sequence model have been tried on the GEC task. Moreover, we add token-level and sentence-level multi-task learning for the GEC task. Evaluation results on the CoNLL-2014 test set show that our approach outperforms all recently published state-of-the-art results by a large margin. The code and pre-trained models are released at https://github.com/zhawe01/fairseq-gec.
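The core idea of a copy-augmented decoder is to mix two distributions at each decoding step: the usual softmax over the target vocabulary, and a "copy" distribution obtained by scattering attention weights onto the vocabulary ids of the source tokens. A minimal sketch of that mixing step, assuming a fixed balancing factor `alpha` (the paper's architecture learns this balance per step; the function and variable names here are illustrative, not from the released code):

```python
import numpy as np

def copy_augmented_distribution(p_gen, attn, src_token_ids, vocab_size, alpha):
    """Mix a generation distribution with a copy distribution.

    p_gen         : (vocab_size,) softmax over the target vocabulary
    attn          : (src_len,) attention weights over source tokens (sums to 1)
    src_token_ids : vocabulary ids of the source-sentence tokens
    alpha         : balancing factor in [0, 1]; larger values favor copying
    """
    p_copy = np.zeros(vocab_size)
    for weight, tok in zip(attn, src_token_ids):
        p_copy[tok] += weight  # scatter attention mass onto source vocab ids
    # Convex combination keeps the result a valid probability distribution.
    return (1.0 - alpha) * p_gen + alpha * p_copy

# Toy example: 5-word vocabulary, source sentence = tokens [2, 3]
p_gen = np.array([0.1, 0.2, 0.3, 0.2, 0.2])
attn = np.array([0.6, 0.4])
mixed = copy_augmented_distribution(p_gen, attn, [2, 3], 5, alpha=0.5)
```

Because most words in a GEC source sentence are already correct, the copy term lets the model reproduce them with high probability while the generation term handles the actual corrections.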
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt – Computation & Language

🌅 🌅 Old Age

Attention Is All You Need

Ashish Vaswani, Noam Shazeer, ... (+6 more)

cs.CL ๐Ÿ› NeurIPS ๐Ÿ“š 166.0K cites 8 years ago