A Multilingual Neural Machine Translation Model for Biomedical Data
August 06, 2020 · Entered Twilight · NLP4COVID@EMNLP
"Last commit was 5.0 years ago (β₯5 year threshold)"
Evidence collected by the PWNC Scanner
Repo contents: .gitattributes, LICENSE_NMT-Model.txt, README.md, benchmarks.md, model, modelcard.md, test_sets
Authors
Alexandre BΓ©rard, Zae Myung Kim, Vassilina Nikoulina, Eunjeong L. Park, Matthias GallΓ©
arXiv ID
2008.02878
Category
cs.CL: Computation & Language
Cross-listed
cs.LG
Citations
15
Venue
NLP4COVID@EMNLP
Repository
https://github.com/naver/covid19-nmt
Stars
⭐ 40
Last Checked
1 month ago
Abstract
We release a multilingual neural machine translation model, which can be used to translate text in the biomedical domain. The model can translate from 5 languages (French, German, Italian, Korean and Spanish) into English. It is trained with large amounts of generic and biomedical data, using domain tags. Our benchmarks show that it performs near state-of-the-art both on news (generic domain) and biomedical test sets, and that it outperforms the existing publicly released models. We believe that this release will help the large-scale multilingual analysis of the digital content of the COVID-19 crisis and of its effects on society, economy, and healthcare policies. We also release a test set of biomedical text for Korean-English. It consists of 758 sentences from official guidelines and recent papers, all about COVID-19.
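The abstract is all this page records about the method, so here is a rough usage sketch. It assumes the released checkpoint is a fairseq Transformer loadable through fairseq's hub interface, and that domain control works by prepending a tag token to the source sentence; the checkpoint name, tokenizer path, tag spelling, and the exact argument names (which vary across fairseq versions) are assumptions for illustration, not facts from the repository. Check the repo's README and modelcard for the real files and tag scheme.

```python
# Minimal sketch: translate French biomedical text to English with the
# released model, assuming a fairseq Transformer checkpoint.
# "checkpoint.pt", "spm.model", and "<medical>" are illustrative
# placeholders, not names taken from the repository.
from fairseq.models.transformer import TransformerModel

model = TransformerModel.from_pretrained(
    "model/",                               # the repo's `model` directory
    checkpoint_file="checkpoint.pt",        # placeholder checkpoint name
    bpe="sentencepiece",
    sentencepiece_model="model/spm.model",  # placeholder tokenizer path
)

# The paper steers output toward the biomedical domain with domain tags;
# prepending the tag to the source line is one common convention.
src = "<medical> Le patient présente une toux sèche et de la fièvre."
print(model.translate(src))  # expected: an English translation
```

Because the model is many-to-one (French, German, Italian, Korean, and Spanish into English), no source-language tag should be needed; the same call would apply to a Korean sentence from the released COVID-19 test set.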
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt · Computation & Language
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding · R.I.P. 👻 Ghosted
Language Models are Few-Shot Learners · R.I.P. 👻 Ghosted
RoBERTa: A Robustly Optimized BERT Pretraining Approach · R.I.P. 👻 Ghosted
BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension · R.I.P. 👻 Ghosted