Towards Crowdsourced Training of Large Neural Networks using Decentralized Mixture-of-Experts

February 10, 2020 · Entered Twilight · 🏛 Neural Information Processing Systems

🌅 TWILIGHT: Old Age
Predates the code-sharing era: a pioneer of its time

"Last commit was 5.0 years ago (β‰₯5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: .gitignore, README.md, experiments, lib, requirements.txt, scheme.png, scheme_pad.png

Authors: Max Ryabinin, Anton Gusev
arXiv ID: 2002.04013
Category: cs.DC (Distributed Computing)
Cross-listed: cs.LG, stat.ML
Citations: 61
Venue: Neural Information Processing Systems
Repository: https://github.com/mryab/learning-at-home (⭐ 56)
Last checked: 1 month ago
Abstract
Many recent breakthroughs in deep learning were achieved by training increasingly larger models on massive datasets. However, training such models can be prohibitively expensive. For instance, the cluster used to train GPT-3 costs over $250 million. As a result, most researchers cannot afford to train state-of-the-art models and contribute to their development. Hypothetically, a researcher could crowdsource the training of large neural networks with thousands of regular PCs provided by volunteers. The raw computing power of a hundred thousand $2,500 desktops dwarfs that of a $250M server pod, but one cannot utilize that power efficiently with conventional distributed training methods. In this work, we propose Learning@home: a novel neural network training paradigm designed to handle large numbers of poorly connected participants. We analyze the performance, reliability, and architectural constraints of this paradigm and compare it against existing distributed training techniques.
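The paradigm in the title builds on the mixture-of-experts idea: a gating function routes each input to a small subset of many expert sub-networks, so the model's capacity can be spread across many machines. Below is a minimal, single-machine NumPy sketch of top-k gated mixture-of-experts routing. It is an illustration only, not the authors' implementation: the class name, parameters, and linear experts are hypothetical choices for this example, and in Learning@home itself the experts live on remote volunteer nodes located through a distributed hash table.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over the last axis."""
    z = x - x.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

class MixtureOfExperts:
    """Toy top-k gated mixture-of-experts layer (hypothetical example).

    In a decentralized setting each expert would be hosted by a
    different volunteer node; here all experts are plain local
    weight matrices, purely for illustration.
    """

    def __init__(self, dim, num_experts, k=2, seed=0):
        rng = np.random.default_rng(seed)
        self.k = k
        # Gating network: scores every expert for a given input.
        self.gate = rng.normal(scale=0.02, size=(dim, num_experts))
        # Experts: simple linear maps dim -> dim.
        self.experts = rng.normal(scale=0.02, size=(num_experts, dim, dim))

    def forward(self, x):
        # x: (batch, dim). Score all experts, keep only the top-k per input.
        scores = x @ self.gate                            # (batch, num_experts)
        topk = np.argsort(scores, axis=-1)[:, -self.k:]   # (batch, k)
        out = np.zeros_like(x)
        for i in range(x.shape[0]):
            sel = topk[i]
            # Renormalize gate scores over the selected experts only.
            weights = softmax(scores[i, sel])
            for w, e in zip(weights, sel):
                # In the decentralized setting, this matmul would be a
                # network call to whichever peer hosts expert `e`.
                out[i] += w * (x[i] @ self.experts[e])
        return out

moe = MixtureOfExperts(dim=16, num_experts=8, k=2)
x = np.random.default_rng(1).normal(size=(4, 16))
print(moe.forward(x).shape)  # (4, 16)
```

Because only k of the num_experts experts run for any given input, total capacity can grow with the number of participants while per-input compute (and, in the decentralized case, per-input network traffic) stays roughly constant.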

📜 Similar Papers

In the same crypt: Distributed Computing