Accelerating Deep Convolutional Networks using low-precision and sparsity

October 02, 2016 · Declared Dead · 🏛 IEEE International Conference on Acoustics, Speech, and Signal Processing

⏳ CAUSE OF DEATH: Coming Soon™
Promised but never delivered

"Paper promises code 'coming soon'"

Evidence collected by the PWNC Scanner

Authors: Ganesh Venkatesh, Eriko Nurvitadhi, Debbie Marr
arXiv ID: 1610.00324
Category: cs.LG (Machine Learning)
Cross-listed: cs.NE
Citations: 135
Venue: IEEE International Conference on Acoustics, Speech, and Signal Processing
Last checked: 1 month ago
Abstract
We explore techniques to significantly improve the compute efficiency and performance of Deep Convolution Networks without impacting their accuracy. To improve the compute efficiency, we focus on achieving high accuracy with extremely low-precision (2-bit) weight networks, and to accelerate the execution time, we aggressively skip operations on zero-values. We achieve the highest reported accuracy of 76.6% Top-1/93% Top-5 on the Imagenet object classification challenge with low-precision network (footnote in the paper: "github release of the source code coming soon") while reducing the compute requirement by ~3x compared to a full-precision network that achieves similar accuracy. Furthermore, to fully exploit the benefits of our low-precision networks, we build a deep learning accelerator core, dLAC, that can achieve up to 1 TFLOP/mm^2 equivalent for single-precision floating-point operations (~2 TFLOP/mm^2 for half-precision).
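Since the promised GitHub release never appeared, there is no reference implementation to link. As a rough illustration only, here is a minimal NumPy sketch of the kind of 2-bit ternary weight quantization the abstract describes: weights are snapped to {-1, 0, +1} plus a per-tensor scale. The threshold rule, the `threshold_factor` hyperparameter, and the function name are our assumptions, not the authors' method.

```python
import numpy as np

def ternarize(weights: np.ndarray, threshold_factor: float = 0.7):
    """Quantize full-precision weights to 2-bit ternary values {-1, 0, +1}
    with a per-tensor scale (a sketch in the style of ternary-weight
    networks; the paper's exact scheme was never released).

    threshold_factor is a hypothetical hyperparameter: weights whose
    magnitude falls below threshold_factor * mean(|w|) are snapped to
    zero, which is also what creates the sparsity the paper exploits.
    """
    delta = threshold_factor * np.mean(np.abs(weights))  # zeroing threshold
    ternary = np.zeros_like(weights)
    ternary[weights > delta] = 1.0
    ternary[weights < -delta] = -1.0
    mask = ternary != 0
    # Scale that minimizes L2 error over the surviving (nonzero) weights.
    scale = np.abs(weights[mask]).mean() if mask.any() else 0.0
    return ternary, scale

w = np.random.randn(64, 3, 3, 3)  # e.g. one conv layer's weights
q, s = ternarize(w)
print(f"sparsity: {np.mean(q == 0):.2%}, scale: {s:.4f}")
```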
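The second claim, skipping operations on zero-values, is what the dLAC core would do in hardware. A software analogue under the same assumptions (ternary weights, per-tensor scale) looks like the sketch below; note that with 2-bit weights the "multiply" degenerates to an add or subtract, so skipping zeros removes most of the remaining work.

```python
import numpy as np

def sparse_dot(activations, ternary_weights, scale):
    """Dot product that skips every position where the 2-bit weight is
    zero — a software analogue of the accelerator's zero-skipping. With
    ternary weights the multiply is just an add/subtract of the activation.
    """
    acc = 0.0
    for a, w in zip(activations, ternary_weights):
        if w == 0:
            continue  # the accelerator skips this lane entirely
        acc += a if w > 0 else -a
    return scale * acc

x = np.array([0.5, -1.2, 0.3, 2.0])
wq = np.array([1.0, 0.0, -1.0, 0.0])  # 2-bit weights: {-1, 0, +1}
print(sparse_dot(x, wq, scale=0.8))   # 0.8 * (0.5 - 0.3) = 0.16
```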
Community shame: Not yet rated

📜 Similar Papers

In the same crypt — Machine Learning

Died the same way — ⏳ Coming Soon™