torchgpipe: On-the-fly Pipeline Parallelism for Training Giant Models

April 21, 2020 · Entered Twilight · 🏛 arXiv.org

🌅 TWILIGHT: Old Age
Predates the code-sharing era – a pioneer of its time

"Last commit was 5.0 years ago (≥5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: .github, .gitignore, .readthedocs.yml, .travis.yml, CONTRIBUTING.md, LICENSE, NOTICE, README.ko.md, README.md, benchmarks, docs, setup.cfg, setup.py, stubs, tests, torchgpipe, torchgpipe_balancing.py

Authors: Chiheon Kim, Heungsub Lee, Myungryong Jeong, Woonhyuk Baek, Boogeon Yoon, Ildoo Kim, Sungbin Lim, Sungwoong Kim
arXiv ID: 2004.09910
Category: cs.DC (Distributed Computing)
Cross-listed: cs.LG
Citations: 60
Venue: arXiv.org
Repository: https://github.com/kakaobrain/torchgpipe ⭐ 864
Last Checked: 1 month ago
Abstract
We design and implement a ready-to-use library in PyTorch for performing micro-batch pipeline parallelism with checkpointing, as proposed by GPipe (Huang et al., 2019). In particular, we develop a set of design components to enable pipeline-parallel gradient computation in PyTorch's define-by-run and eager execution environment. We show that each component is necessary to fully benefit from pipeline parallelism in such an environment, and demonstrate the efficiency of the library by applying it to various network architectures, including AmoebaNet-D and U-Net. Our library is available at https://github.com/kakaobrain/torchgpipe.
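The core idea the abstract refers to (from GPipe) is to split each mini-batch into micro-batches and stream them through the model's partitions so the devices work concurrently rather than one after another. The sketch below illustrates that clock-cycle schedule in plain Python; it is not torchgpipe's actual API, and the names `split`, `pipeline`, and the stage functions are purely illustrative.

```python
# Plain-Python sketch of GPipe-style micro-batch pipelining.
# All names here are illustrative; this is not torchgpipe's API.

def split(batch, chunks):
    """Split a mini-batch (a list) into up to `chunks` micro-batches."""
    size = (len(batch) + chunks - 1) // chunks
    return [batch[i:i + size] for i in range(0, len(batch), size)]

def pipeline(stages, batch, chunks):
    """Run micro-batches through `stages` on a clock-cycle schedule:
    at tick t, stage j processes micro-batch i = t - j, so each
    micro-batch reaches stage j only after finishing stage j - 1.
    Returns the outputs and the (stage, micro-batch) pairs per tick."""
    micro = split(batch, chunks)
    schedule = []
    for t in range(len(micro) + len(stages) - 1):
        tick = []
        for j, stage in enumerate(stages):
            i = t - j
            if 0 <= i < len(micro):
                micro[i] = [stage(x) for x in micro[i]]
                tick.append((j, i))
        schedule.append(tick)
    outputs = [y for m in micro for y in m]
    return outputs, schedule

# Two "stages" standing in for model partitions on two devices:
stages = [lambda x: x + 1, lambda x: x * 2]
outputs, schedule = pipeline(stages, [1, 2, 3, 4], chunks=2)
print(outputs)   # [4, 6, 8, 10]
print(schedule)  # [[(0, 0)], [(0, 1), (1, 0)], [(1, 1)]]
```

With k micro-batches and n stages, the pipelined pass takes n + k - 1 ticks instead of running each stage on the full batch serially; the checkpointing the abstract mentions additionally trades memory for recomputation during the backward pass, which this sketch omits.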
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt – Distributed Computing