BERTIN: Efficient Pre-Training of a Spanish Language Model using Perplexity Sampling

July 14, 2022 · Declared Dead · 🏛 Proces. del Leng. Natural

💀 CAUSE OF DEATH: 404 Not Found
Code link is broken/dead
Authors: Javier de la Rosa, Eduardo G. Ponferrada, Paulo Villegas, Pablo Gonzalez de Prado Salas, Manu Romero, María Grandury
arXiv ID: 2207.06814
Category: cs.CL: Computation & Language
Cross-listed: cs.AI
Citations: 109
Venue: Proces. del Leng. Natural
Repository: https://huggingface.co/bertin-project
Last Checked: 1 month ago
Abstract
The pre-training of large language models usually requires massive amounts of resources, both in terms of computation and data. Frequently used web sources such as Common Crawl might contain enough noise to make this pre-training sub-optimal. In this work, we experiment with different sampling methods from the Spanish version of mC4, and present a novel data-centric technique which we name perplexity sampling that enables the pre-training of language models in roughly half the amount of steps and using one fifth of the data. The resulting models are comparable to the current state-of-the-art, and even achieve better results for certain tasks. Our work is proof of the versatility of Transformers, and paves the way for small teams to train their models on a limited budget. Our models are available at https://huggingface.co/bertin-project.
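The abstract only names the technique; the details lived in the paper and the now-dead repository. As a rough illustration, here is a minimal sketch of what perplexity sampling could look like: score each document against a language model fit on clean reference text, then subsample so that the middle of the perplexity distribution is favored over both tails. The character-unigram scorer, the quartile thresholds, and the 0.25 tail weight below are all assumptions for illustration, not the authors' exact recipe.

```python
import math
import random
from collections import Counter


def train_char_model(reference: str, alpha: float = 0.5) -> dict:
    """Character-unigram language model with add-alpha smoothing,
    fit on a clean reference text. The None key holds the
    probability mass reserved for unseen characters."""
    counts = Counter(reference)
    total = sum(counts.values()) + alpha * (len(counts) + 1)
    model = {ch: (n + alpha) / total for ch, n in counts.items()}
    model[None] = alpha / total  # probability for any unseen character
    return model


def perplexity(text: str, model: dict) -> float:
    """Perplexity of a document under the reference model; higher
    means less like the clean reference text."""
    if not text:
        return float("inf")
    unk = model[None]
    log_prob = sum(math.log(model.get(ch, unk)) for ch in text)
    return math.exp(-log_prob / len(text))


def keep_probability(ppl: float, q1: float, q3: float) -> float:
    """Assumed stepwise weighting: keep mid-perplexity documents with
    high probability and down-weight both tails, on the intuition that
    very low perplexity suggests repetitive boilerplate and very high
    perplexity suggests noise."""
    return 1.0 if q1 <= ppl <= q3 else 0.25


def perplexity_sample(docs: list, model: dict, seed: int = 0) -> list:
    """Subsample documents, biased toward the middle of the observed
    perplexity distribution."""
    rng = random.Random(seed)
    ppls = [perplexity(d, model) for d in docs]
    ordered = sorted(ppls)
    q1 = ordered[(len(ordered) - 1) // 4]
    q3 = ordered[3 * (len(ordered) - 1) // 4]
    return [d for d, p in zip(docs, ppls)
            if rng.random() < keep_probability(p, q1, q3)]


if __name__ == "__main__":
    reference = "el modelo de lenguaje se entrena con texto limpio"
    model = train_char_model(reference)
    corpus = [
        "texto repetido texto repetido texto repetido",  # boilerplate-like
        "el modelo aprende de datos filtrados por perplejidad",
        "zqxj 00!! ###### qqqq ~~ %%",                   # noise-like
    ]
    print(perplexity_sample(corpus, model))
```

In a real pipeline the scorer would be a proper language model trained on clean in-domain Spanish text, and the weighting could be a smooth function of perplexity (e.g. a Gaussian around a target value) rather than a step.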

📜 Similar Papers

In the same crypt – Computation & Language

🌅 Old Age

Attention Is All You Need

Ashish Vaswani, Noam Shazeer, ... (+6 more)

cs.CL 🏛 NeurIPS 📚 166.0K cites · 8 years ago

Died the same way – 💀 404 Not Found