Continual Knowledge Distillation for Neural Machine Translation
December 18, 2022 · Entered Twilight · Annual Meeting of the Association for Computational Linguistics
Repo contents: .gitignore, LICENSE, README.md, docs, eval_on_nist.sh, multi-bleu.perl, preprocess.sh, run.sh, run_mutual.sh, runnaive.sh, setup.py, subword-nmt, thumt, 在75w数据上训练模型.sh
Authors
Yuanchi Zhang, Peng Li, Maosong Sun, Yang Liu
arXiv ID
2212.09097
Category
cs.CL: Computation & Language
Citations
7
Venue
Annual Meeting of the Association for Computational Linguistics
Repository
https://github.com/THUNLP-MT/CKD
⭐ 1
Last Checked
1 month ago
Abstract
While many parallel corpora are not publicly accessible for data copyright, data privacy and competitive differentiation reasons, trained translation models are increasingly available on open platforms. In this work, we propose a method called continual knowledge distillation to take advantage of existing translation models to improve one model of interest. The basic idea is to sequentially transfer knowledge from each trained model to the distilled model. Extensive experiments on Chinese-English and German-English datasets show that our method achieves significant and consistent improvements over strong baselines under both homogeneous and heterogeneous trained model settings and is robust to malicious models.
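The abstract gives only the high-level idea, so below is a minimal, hypothetical PyTorch sketch of what "sequentially transfer knowledge from each trained model to the distilled model" could look like. This is not the paper's actual CKD algorithm (see the repository above for that); the model interface, loss weights, temperature, padding handling, and data loader are all assumptions made for illustration.

```python
# Illustrative sketch of sequential (continual) knowledge distillation for NMT.
# Assumptions: models take (src, tgt_in) and return per-token logits; the data
# loader yields (src, tgt_in, tgt_out) tensors; hyperparameters are placeholders.
import torch
import torch.nn.functional as F


def distillation_loss(student_logits, teacher_logits, targets,
                      alpha=0.5, temperature=2.0, pad_id=0):
    """Mix reference cross-entropy with a KL term toward the current teacher."""
    # Cross-entropy against the reference translation, ignoring padding tokens.
    ce = F.cross_entropy(
        student_logits.reshape(-1, student_logits.size(-1)),
        targets.reshape(-1),
        ignore_index=pad_id,
    )
    # KL divergence between temperature-softened teacher and student distributions
    # (padding positions are not masked here; a real implementation would mask them).
    t = temperature
    kl = F.kl_div(
        F.log_softmax(student_logits / t, dim=-1),
        F.softmax(teacher_logits / t, dim=-1),
        reduction="batchmean",
    ) * (t * t)
    return (1.0 - alpha) * ce + alpha * kl


def continual_distillation(student, teachers, data_loader, epochs_per_teacher=1):
    """Sequentially transfer knowledge from each trained teacher into one student."""
    optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)
    for teacher in teachers:                      # consume one trained model at a time
        teacher.eval()
        for _ in range(epochs_per_teacher):
            for src, tgt_in, tgt_out in data_loader:
                with torch.no_grad():
                    teacher_logits = teacher(src, tgt_in)
                student_logits = student(src, tgt_in)
                loss = distillation_loss(student_logits, teacher_logits, tgt_out)
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
    return student
```

The structural point the abstract emphasizes is the outer loop over teachers: each previously trained model is distilled from in turn, while only the single model of interest is ever updated.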
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt · Computation & Language
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
R.I.P.
👻
Ghosted
Language Models are Few-Shot Learners
R.I.P.
👻
Ghosted
RoBERTa: A Robustly Optimized BERT Pretraining Approach
R.I.P.
👻
Ghosted
BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension
R.I.P.
👻
Ghosted