Deep Encoder, Shallow Decoder: Reevaluating Non-autoregressive Machine Translation

June 18, 2020 · Entered Twilight · 🏛️ International Conference on Learning Representations

🌅 TWILIGHT: Old Age
Predates the code-sharing era – a pioneer of its time

"Last commit was 5.0 years ago (โ‰ฅ5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: CODE_OF_CONDUCT.md, CONTRIBUTING.md, LICENSE, README.md, docs, eval_lm.py, examples, fairseq.gif, fairseq, fairseq_cli, fairseq_logo.png, fb_sweep, generate.py, hubconf.py, interactive.py, pip-wheel-metadata, preprocess.py, pyproject.toml, score.py, scripts, setup.py, tests, train.py, validate.py

Authors: Jungo Kasai, Nikolaos Pappas, Hao Peng, James Cross, Noah A. Smith
arXiv ID: 2006.10369
Category: cs.CL: Computation & Language
Citations: 151
Venue: International Conference on Learning Representations
Repository: https://github.com/jungokasai/deep-shallow ⭐ 44
Last Checked: 1 month ago
Abstract
Much recent effort has been invested in non-autoregressive neural machine translation, which appears to be an efficient alternative to state-of-the-art autoregressive machine translation on modern GPUs. In contrast to the latter, where generation is sequential, the former allows generation to be parallelized across target token positions. Some of the latest non-autoregressive models have achieved impressive translation quality-speed tradeoffs compared to autoregressive baselines. In this work, we reexamine this tradeoff and argue that autoregressive baselines can be substantially sped up without loss in accuracy. Specifically, we study autoregressive models with encoders and decoders of varied depths. Our extensive experiments show that given a sufficiently deep encoder, a single-layer autoregressive decoder can substantially outperform strong non-autoregressive models with comparable inference speed. We show that the speed disadvantage for autoregressive baselines compared to non-autoregressive methods has been overestimated in three aspects: suboptimal layer allocation, insufficient speed measurement, and lack of knowledge distillation. Our results establish a new protocol for future research toward fast, accurate machine translation. Our code is available at https://github.com/jungokasai/deep-shallow.
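For concreteness, below is a minimal sketch of the layer allocation the abstract describes: a standard autoregressive Transformer whose encoder is deep while the decoder has a single layer. It uses plain PyTorch rather than the authors' fairseq-based repository, the class name DeepEncoderShallowDecoder is hypothetical, and every hyperparameter value (vocabulary size, model dimension, layer counts) is an illustrative assumption, not a reproduction of the paper's exact configuration.

```python
# Sketch only: a deep-encoder, shallow-decoder autoregressive Transformer.
# Not the authors' implementation; hyperparameters are assumptions.
import torch
import torch.nn as nn

class DeepEncoderShallowDecoder(nn.Module):
    def __init__(self, vocab_size=32000, d_model=512, nhead=8,
                 encoder_layers=12, decoder_layers=1, dim_ff=2048):
        super().__init__()
        # Shared source/target embedding for simplicity (assumed shared vocab).
        self.embed = nn.Embedding(vocab_size, d_model)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=nhead,
            num_encoder_layers=encoder_layers,   # deep encoder
            num_decoder_layers=decoder_layers,   # shallow (single-layer) decoder
            dim_feedforward=dim_ff, batch_first=True)
        self.project = nn.Linear(d_model, vocab_size)

    def forward(self, src_tokens, tgt_tokens):
        # The causal mask keeps generation autoregressive even though the
        # decoder stack is only one layer deep.
        tgt_mask = self.transformer.generate_square_subsequent_mask(tgt_tokens.size(1))
        out = self.transformer(self.embed(src_tokens), self.embed(tgt_tokens),
                               tgt_mask=tgt_mask)
        return self.project(out)   # (batch, target length, vocab_size)

model = DeepEncoderShallowDecoder()
src = torch.randint(0, 32000, (2, 20))   # (batch, source length)
tgt = torch.randint(0, 32000, (2, 18))   # (batch, target length)
logits = model(src, tgt)                  # (2, 18, 32000)
```

In the fairseq-based repository linked above, an equivalent allocation would typically be selected through the transformer model's --encoder-layers and --decoder-layers options; the snippet here is only an independent illustration of the idea.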
Community shame:
Not yet rated

📜 Similar Papers

In the same crypt – Computation & Language

🌅 Old Age

Attention Is All You Need

Ashish Vaswani, Noam Shazeer, ... (+6 more)

cs.CL ๐Ÿ› NeurIPS ๐Ÿ“š 166.0K cites 8 years ago