Seq1F1B: Efficient Sequence-Level Pipeline Parallelism for Large Language Model Training

June 05, 2024 · Entered Twilight · 🏛 North American Chapter of the Association for Computational Linguistics

💀 TWILIGHT: Eternal Rest
Repo abandoned since publication

Repo contents: .coveragerc, .github, .gitignore, .gitlab-ci.yml, CONTRIBUTING.md, LICENSE, README.md, docs, examples, exp2.sh, images, megatron, picture, pretrain_bert.py, pretrain_gpt.py, pretrain_gpt_core.py, pretrain_ict.py, pretrain_retro.py, pretrain_t5.py, pretrain_vision_classify.py, pretrain_vision_dino.py, pretrain_vision_inpaint.py, pyproject.toml, setup.py, tasks, tests, tools

Authors: Ao Sun, Weilin Zhao, Xu Han, Cheng Yang, Xinrong Zhang, Zhiyuan Liu, Chuan Shi, Maosong Sun
arXiv ID: 2406.03488
Category: cs.DC (Distributed Computing)
Citations: 13
Venue: North American Chapter of the Association for Computational Linguistics
Repository: https://github.com/MayDomine/Seq1F1B.git ⭐ 19
Last Checked: 1 month ago
Abstract
The emergence of large language models (LLMs) relies heavily on distributed training strategies, among which pipeline parallelism plays a crucial role. As LLMs' training sequence lengths extend to 32k or even 128k, current pipeline parallel methods face severe bottlenecks, including high memory footprints and substantial pipeline bubbles, greatly hindering model scalability and training throughput. To enhance memory efficiency and training throughput, in this work we introduce Seq1F1B, an efficient sequence-level one-forward-one-backward (1F1B) pipeline scheduling method tailored for training LLMs on long sequences. Seq1F1B decomposes batch-level schedulable units into finer sequence-level units, reducing bubble size and memory footprint. Since Seq1F1B may produce slight extra bubbles if sequences are split evenly, we design a computation-wise strategy to partition input sequences and mitigate this side effect. Compared to competitive pipeline baselines such as Megatron's 1F1B pipeline parallelism, our method achieves higher training throughput with a smaller memory footprint. Notably, Seq1F1B efficiently trains an LLM with 30B parameters on sequences up to 64k using 64 NVIDIA A100 GPUs without recomputation strategies, a feat unachievable with existing methods. Our source code is based on Megatron-LM and is now available at: https://github.com/MayDomine/Seq1F1B.git.
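The "computation-wise" partitioning the abstract mentions addresses a subtlety of causal attention: if a sequence is split into equally sized chunks, later chunks cost more, because each of their tokens attends to a longer prefix, and that imbalance reintroduces pipeline bubbles. The sketch below illustrates one way to balance such a split; the cost model and the greedy boundary search are illustrative assumptions (flops_per_token and attn_coeff are hypothetical coefficients), not the routine published with the paper.

```python
def chunk_cost(start: int, end: int, flops_per_token: float, attn_coeff: float) -> float:
    """Approximate forward cost of token chunk [start, end) under causal attention.

    Each token pays a fixed cost (projections, MLP) plus an attention cost
    proportional to the number of tokens that precede it in the sequence.
    """
    length = end - start
    # sum of positions p for p in [start, end) == length * (start + end - 1) / 2
    attn_work = attn_coeff * length * (start + end - 1) / 2
    return flops_per_token * length + attn_work


def balanced_split(seq_len: int, num_chunks: int,
                   flops_per_token: float = 1.0, attn_coeff: float = 0.01) -> list[int]:
    """Greedily place chunk boundaries so each chunk does roughly equal compute.

    An even split (seq_len // num_chunks tokens per chunk) would make later
    chunks slower, since their tokens attend to longer prefixes; balancing by
    compute makes earlier chunks longer instead.
    """
    target = chunk_cost(0, seq_len, flops_per_token, attn_coeff) / num_chunks
    bounds, start = [0], 0
    for remaining in range(num_chunks - 1, 0, -1):
        end = start + 1
        max_end = seq_len - remaining  # leave at least 1 token per remaining chunk
        while end < max_end and chunk_cost(start, end, flops_per_token, attn_coeff) < target:
            end += 1
        bounds.append(end)
        start = end
    bounds.append(seq_len)
    return bounds


if __name__ == "__main__":
    # e.g. a 32k sequence split into 4 sequence-level schedulable chunks
    print(balanced_split(32768, 4))
```

Running balanced_split(32768, 4) yields earlier chunks noticeably longer than later ones, as expected under causal attention. In the schedule itself, forwards of a micro-batch proceed over chunks left to right, while backwards run in reverse chunk order (gradients for a chunk's keys and values also flow from every later chunk), which is what lets the 1F1B pattern operate on these finer units.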
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt · Distributed Computing