Old Age
Automatic Correction of Human Translations
June 17, 2022 · Entered Twilight · North American Chapter of the Association for Computational Linguistics
Repo contents: .gitignore, README.md, assets, data, eval_gleu.sh, eval_m2.sh, eval_sentence_level.sh, gleu, mosestokenizer, requirements.txt, scripts
Authors
Jessy Lin, Geza Kovacs, Aditya Shastry, Joern Wuebker, John DeNero
arXiv ID
2206.08593
Category
cs.CL: Computation & Language
Cross-listed
cs.LG
Citations
3
Venue
North American Chapter of the Association for Computational Linguistics
Repository
https://github.com/lilt/tec
⭐ 19
Last Checked
1 month ago
Abstract
We introduce translation error correction (TEC), the task of automatically correcting human-generated translations. Imperfections in machine translations (MT) have long motivated systems for improving translations post-hoc with automatic post-editing. In contrast, little attention has been devoted to the problem of automatically correcting human translations, despite the intuition that humans make distinct errors that machines would be well-suited to assist with, from typos to inconsistencies in translation conventions. To investigate this, we build and release the Aced corpus with three TEC datasets. We show that human errors in TEC exhibit a more diverse range of errors and far fewer translation fluency errors than the MT errors in automatic post-editing datasets, suggesting the need for dedicated TEC models that are specialized to correct human errors. We show that pre-training instead on synthetic errors based on human errors improves TEC F-score by as much as 5.1 points. We conducted a human-in-the-loop user study with nine professional translation editors and found that the assistance of our TEC system led them to produce significantly higher quality revised translations.
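The key modeling idea in the abstract is pre-training on synthetic errors that mimic human ones. Purely as an illustration of that idea (this is not the paper's actual data pipeline; the function name, error types, and probabilities below are assumptions), clean reference translations can be corrupted with human-like edits to form (erroneous translation, correction) training pairs:

import random

def corrupt(tokens, p_drop=0.03, p_swap=0.03, p_typo=0.05, rng=random):
    """Introduce human-like errors: dropped words, transposed words, typos."""
    out = []
    i = 0
    while i < len(tokens):
        tok = tokens[i]
        r = rng.random()
        if r < p_drop:
            i += 1                       # drop a word (accidental omission)
            continue
        if r < p_drop + p_swap and i + 1 < len(tokens):
            out += [tokens[i + 1], tok]  # transpose two adjacent words
            i += 2
            continue
        if r < p_drop + p_swap + p_typo and len(tok) > 3:
            j = rng.randrange(len(tok) - 1)
            tok = tok[:j] + tok[j + 1] + tok[j] + tok[j + 2:]  # character-swap typo
        out.append(tok)
        i += 1
    return out

clean = "the translator reviewed the terminology guidelines carefully".split()
noisy = corrupt(clean)
print(" ".join(noisy), "->", " ".join(clean))  # (noisy input, clean target) pair

A TEC model pre-trained on such pairs would then be fine-tuned on real human-edited data like the Aced corpus; the specific error distribution used for pre-training is described in the paper, not here.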
Similar Papers
In the same crypt · Computation & Language
Old Age
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
R.I.P.
👻
Ghosted
Language Models are Few-Shot Learners
R.I.P.
👻
Ghosted
RoBERTa: A Robustly Optimized BERT Pretraining Approach
R.I.P.
👻
Ghosted
BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension
R.I.P.
👻
Ghosted