Effective Approaches to Batch Parallelization for Dynamic Neural Network Architectures

July 08, 2017 · Entered Twilight · 🐛 arXiv.org

🌅 TWILIGHT: Old Age
Predates the code-sharing era – a pioneer of its time

"Last commit was 8.0 years ago (โ‰ฅ5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: ClevrBatcher.py, MOE.py, Net.py, Preprocess.py, README.md, images, lib, model, preprocess, svim.sh, vis

Authors: Joseph Suarez, Clare Zhu
arXiv ID: 1707.02402
Category: cs.CV (Computer Vision)
Citations: 0
Venue: arXiv.org
Repository: https://github.com/jsuarez5341/Efficient-Dynamic-Batching (⭐ 40)
Last checked: 2 months ago
Abstract
We present a simple dynamic batching approach applicable to a large class of dynamic architectures that consistently yields speedups of over 10x. We provide performance bounds for the case where the architecture is not known a priori, and a stronger bound in the special case where the architecture is a predetermined balanced tree. We evaluate our approach on Johnson et al.'s recent visual question answering (VQA) result on the CLEVR dataset, Inferring and Executing Programs (IEP). We also evaluate on sparsely gated mixture-of-experts (MoE) layers and achieve speedups of up to 1000x over the naive implementation.
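The core idea the abstract describes can be illustrated with a minimal sketch: when each sample carries its own program (architecture), group samples by program so each distinct program executes once on a stacked batch rather than once per sample. The names `dynamic_batch` and `run_module` below are illustrative assumptions, not identifiers from the paper's repository.

```python
from collections import defaultdict

def dynamic_batch(samples, run_module):
    """Batch samples that share the same program (architecture).

    `samples` is a list of (program, input) pairs; `run_module(program, xs)`
    applies one program to a list of inputs in a single batched call.
    Both names are hypothetical stand-ins for the paper's actual code.
    """
    # program -> [(original_index, input), ...]
    groups = defaultdict(list)
    for i, (program, x) in enumerate(samples):
        groups[tuple(program)].append((i, x))

    outputs = [None] * len(samples)
    for program, members in groups.items():
        idxs, xs = zip(*members)
        # One batched call per distinct program instead of one per sample.
        batched = run_module(list(program), list(xs))
        for i, y in zip(idxs, batched):
            outputs[i] = y
    return outputs
```

With N samples but only K distinct programs, the number of module invocations drops from N to K, which is where the reported speedups come from when many samples share an architecture.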
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt – Computer Vision