Continual Knowledge Distillation for Neural Machine Translation

December 18, 2022 · Entered Twilight · 🏛 Annual Meeting of the Association for Computational Linguistics

💤 TWILIGHT: Eternal Rest
Repo abandoned since publication

Repo contents: .gitignore, LICENSE, README.md, docs, eval_on_nist.sh, multi-bleu.perl, preprocess.sh, run.sh, run_mutual.sh, runnaive.sh, setup.py, subword-nmt, thumt, 在75w数据上训练模型.sh ("train the model on 750k data")

Authors: Yuanchi Zhang, Peng Li, Maosong Sun, Yang Liu
arXiv ID: 2212.09097
Category: cs.CL: Computation & Language
Citations: 7
Venue: Annual Meeting of the Association for Computational Linguistics
Repository: https://github.com/THUNLP-MT/CKD ⭐ 1
Last Checked: 1 month ago
Abstract
While many parallel corpora are not publicly accessible due to data copyright, data privacy, and competitive differentiation concerns, trained translation models are increasingly available on open platforms. In this work, we propose a method called continual knowledge distillation that takes advantage of existing translation models to improve a single model of interest. The basic idea is to sequentially transfer knowledge from each trained model to the distilled model. Extensive experiments on Chinese-English and German-English datasets show that our method achieves significant and consistent improvements over strong baselines under both homogeneous and heterogeneous trained-model settings and is robust to malicious models.
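The abstract only sketches the core idea, so below is a minimal, hypothetical PyTorch sketch of the "sequentially transfer knowledge from each trained model" step: for each available teacher in turn, the student is trained on a mix of the ordinary translation loss and a KL term toward the teacher's softened output distribution. The function names, the model call signature `model(src, tgt)`, and the fixed CE/KL mixing weight are illustrative assumptions and do not reproduce the paper's actual method or its robustness to malicious models.

```python
import torch
import torch.nn.functional as F


def distill_from_teacher(student, teacher, data_loader, optimizer,
                         alpha=0.5, temperature=2.0, pad_id=0):
    """One distillation phase: transfer knowledge from a single trained
    teacher into the student model of interest (hypothetical helper)."""
    teacher.eval()
    student.train()
    for src, tgt in data_loader:
        optimizer.zero_grad()
        # Assumed call signature: model(src, tgt) -> logits of shape (batch, len, vocab).
        student_logits = student(src, tgt)
        with torch.no_grad():
            teacher_logits = teacher(src, tgt)
        vocab = student_logits.size(-1)
        # Ordinary translation loss against the reference target.
        ce = F.cross_entropy(student_logits.view(-1, vocab),
                             tgt.view(-1), ignore_index=pad_id)
        # Distillation loss: match the teacher's softened output distribution.
        kd = F.kl_div(F.log_softmax(student_logits / temperature, dim=-1),
                      F.softmax(teacher_logits / temperature, dim=-1),
                      reduction="batchmean") * temperature ** 2
        loss = alpha * ce + (1.0 - alpha) * kd
        loss.backward()
        optimizer.step()


def continual_kd(student, teachers, data_loader, optimizer):
    """Sequentially distill from each available trained model into the student."""
    for teacher in teachers:
        distill_from_teacher(student, teacher, data_loader, optimizer)
```

Calling `continual_kd(student, [teacher_a, teacher_b], loader, opt)` would run one distillation pass per teacher in order; the paper's contribution lies in how knowledge is preserved and filtered across these phases, which this sketch does not attempt to model.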
Community shame:
Not yet rated

📜 Similar Papers

In the same crypt: Computation & Language

🌅 Old Age

Attention Is All You Need

Ashish Vaswani, Noam Shazeer, ... (+6 more)

cs.CL ๐Ÿ› NeurIPS ๐Ÿ“š 166.0K cites 8 years ago