A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings
May 16, 2018 · Entered Twilight · Annual Meeting of the Association for Computational Linguistics
"Last commit was 6.0 years ago (โฅ5 year threshold)"
Evidence collected by the PWNC Scanner
Repo contents: .gitignore, LICENSE.txt, README.md, cupy_utils.py, embeddings.py, eval_analogy.py, eval_similarity.py, eval_translation.py, get_data.sh, map_embeddings.py, normalize_embeddings.py
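The file list suggests a split between preprocessing and mapping: normalize_embeddings.py prepares the monolingual embeddings before map_embeddings.py learns the cross-lingual mapping. A minimal NumPy sketch of the "unit, center, unit" preprocessing common to this line of work; the function name and exact pipeline order are illustrative assumptions, not the script's guaranteed behavior:

```python
import numpy as np

def preprocess(emb, eps=1e-8):
    """Unit-length normalize, mean-center, then re-normalize.

    Illustrative sketch of the usual "unit, center, unit" pipeline
    for embedding mapping; not necessarily what
    normalize_embeddings.py does verbatim.
    """
    emb = emb / (np.linalg.norm(emb, axis=1, keepdims=True) + eps)  # unit length
    emb = emb - emb.mean(axis=0, keepdims=True)                     # mean center
    emb = emb / (np.linalg.norm(emb, axis=1, keepdims=True) + eps)  # unit length again
    return emb
```

After this step every row has unit length, so dot products between the two spaces behave as cosine similarities, which the mapping stage relies on.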
Authors
Mikel Artetxe, Gorka Labaka, Eneko Agirre
arXiv ID
1805.06297
Category
cs.CL: Computation & Language
Cross-listed
cs.AI, cs.LG
Citations
613
Venue
Annual Meeting of the Association for Computational Linguistics
Repository
https://github.com/artetxem/vecmap
⭐ 653
Last Checked
1 month ago
Abstract
Recent work has managed to learn cross-lingual word embeddings without parallel data by mapping monolingual embeddings to a shared space through adversarial training. However, their evaluation has focused on favorable conditions, using comparable corpora or closely-related languages, and we show that they often fail in more realistic scenarios. This work proposes an alternative approach based on a fully unsupervised initialization that explicitly exploits the structural similarity of the embeddings, and a robust self-learning algorithm that iteratively improves this solution. Our method succeeds in all tested scenarios and obtains the best published results in standard datasets, even surpassing previous supervised systems. Our implementation is released as an open source project at https://github.com/artetxem/vecmap
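The "robust self-learning" the abstract describes alternates two steps: fit an orthogonal mapping to the current bilingual dictionary, then re-induce the dictionary by nearest-neighbor retrieval in the mapped space. A minimal sketch of one such iteration, assuming plain nearest-neighbor induction and omitting the paper's robustness tricks (stochastic dictionary induction, symmetric re-weighting, frequency-based vocabulary cutoffs); names are illustrative, not the released implementation:

```python
import numpy as np

def self_learning_step(x, z, dictionary):
    """One simplified self-learning iteration.

    x, z: unit-normalized source/target embedding matrices.
    dictionary: (n, 2) int array of (source, target) index pairs.
    """
    src, trg = dictionary[:, 0], dictionary[:, 1]
    # Orthogonal Procrustes: W = argmin_W ||x[src] @ W - z[trg]||_F
    # subject to W orthogonal, solved via SVD of x[src].T @ z[trg].
    u, _, vt = np.linalg.svd(x[src].T @ z[trg])
    w = u @ vt
    # Re-induce the dictionary: nearest target neighbor (cosine,
    # since rows are unit length) of every mapped source word.
    sims = (x @ w) @ z.T
    new_dict = np.stack([np.arange(len(x)), sims.argmax(axis=1)], axis=1)
    return w, new_dict
```

Iterating this step until the induced dictionary stops changing gives the self-learning loop; the fully unsupervised part of the paper is the seed dictionary, obtained by exploiting the structural similarity of the two embedding spaces rather than any parallel data.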
Similar Papers
In the same crypt · Computation & Language
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
R.I.P. · 👻 Ghosted
Language Models are Few-Shot Learners
R.I.P. · 👻 Ghosted
RoBERTa: A Robustly Optimized BERT Pretraining Approach
R.I.P. · 👻 Ghosted
BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension
R.I.P. · 👻 Ghosted