Effective Approaches to Batch Parallelization for Dynamic Neural Network Architectures
July 08, 2017 · Entered Twilight · arXiv.org
"Last commit was 8.0 years ago (≥5 year threshold)"
Evidence collected by the PWNC Scanner
Repo contents: ClevrBatcher.py, MOE.py, Net.py, Preprocess.py, README.md, images, lib, model, preprocess, svim.sh, vis
Authors
Joseph Suarez, Clare Zhu
arXiv ID
1707.02402
Category
cs.CV: Computer Vision
Citations
0
Venue
arXiv.org
Repository
https://github.com/jsuarez5341/Efficient-Dynamic-Batching
⭐ 40
Last Checked
2 months ago
Abstract
We present a simple dynamic batching approach applicable to a large class of dynamic architectures that consistently yields speedups of over 10x. We provide performance bounds when the architecture is not known a priori and a stronger bound in the special case where the architecture is a predetermined balanced tree. We evaluate our approach on Johnson et al.'s recent visual question answering (VQA) result on the CLEVR dataset, obtained by Inferring and Executing Programs (IEP). We also evaluate on sparsely gated mixture-of-experts layers and achieve speedups of up to 1000x over the naive implementation.
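The general idea behind this style of dynamic batching can be sketched as follows. This is a minimal illustration, not the authors' implementation: the op names, toy modules, and helper functions below are invented for the example. Each sample carries its own "program" (a sequence of operations), and instead of executing programs one sample at a time, each step groups the samples that invoke the same operation and applies that operation once to the whole group:

```python
import numpy as np

# Toy "modules": each op maps a batch of feature vectors to feature vectors.
# (Hypothetical ops for illustration only.)
OPS = {
    "double": lambda x: 2.0 * x,
    "negate": lambda x: -x,
    "shift":  lambda x: x + 1.0,
}

def run_naive(programs, feats):
    """Execute each sample's program one sample at a time (no batching)."""
    out = []
    for prog, x in zip(programs, feats):
        for op in prog:
            x = OPS[op](x)
        out.append(x)
    return np.stack(out)

def run_batched(programs, feats):
    """Dynamic batching: at each program step, bucket samples by the op
    they need and apply that op once per bucket (vectorized)."""
    feats = feats.copy()
    max_len = max(len(p) for p in programs)
    for step in range(max_len):
        # Bucket sample indices by the op they invoke at this step.
        buckets = {}
        for i, prog in enumerate(programs):
            if step < len(prog):
                buckets.setdefault(prog[step], []).append(i)
        # One vectorized call per distinct op instead of one per sample.
        for op, idxs in buckets.items():
            feats[idxs] = OPS[op](feats[idxs])
    return feats

programs = [["double", "shift"], ["negate"], ["double", "shift"]]
feats = np.ones((3, 4))
assert np.allclose(run_naive(programs, feats), run_batched(programs, feats))
```

The speedup comes from replacing many per-sample module calls with a few large batched calls, which is where GPU utilization is won; the bucketing logic itself is cheap bookkeeping.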
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt · Computer Vision
Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks
R.I.P. 👻 Ghosted
You Only Look Once: Unified, Real-Time Object Detection
Old Age
SSD: Single Shot MultiBox Detector
Old Age
Squeeze-and-Excitation Networks
R.I.P. 👻 Ghosted