Analog CMOS-based Resistive Processing Unit for Deep Neural Network Training
June 20, 2017 · Declared Dead · Midwest Symposium on Circuits and Systems
Evidence collected by the PWNC Scanner: "No code URL or promise found in abstract"
Authors
Seyoung Kim, Tayfun Gokmen, Hyung-Min Lee, Wilfried E. Haensch
arXiv ID
1706.06620
Category
cs.ET: Emerging Technologies
Cross-listed
cs.LG
Citations
49
Venue
Midwest Symposium on Circuits and Systems
Last Checked
1 month ago
Abstract
Recently we have shown that an architecture based on resistive processing unit (RPU) devices has the potential to achieve significant acceleration of deep neural network (DNN) training compared with today's software-based DNN implementations running on CPUs/GPUs. However, currently available device candidates based on non-volatile memory technologies do not satisfy all the requirements needed to realize the RPU concept. Here, we propose an analog CMOS-based RPU design (CMOS RPU) that can store and process data locally and can be operated in a massively parallel manner. We analyze various properties of the CMOS RPU to evaluate its functionality and feasibility for accelerating DNN training.
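The abstract's claim that an RPU can "store and process data locally" in a "massively parallel manner" refers to the standard RPU training primitive: a crossbar array holds the weight matrix and performs a forward multiply, a transpose (backward) multiply, and a local rank-one outer-product weight update, each in constant depth. A minimal NumPy sketch of that primitive, written from the general RPU literature rather than from this paper's circuit details (all names and sizes here are illustrative assumptions):

```python
import numpy as np

# Sketch (illustrative, not the paper's implementation) of the three RPU
# crossbar operations: forward multiply, transpose multiply, and a local
# rank-one outer-product weight update.

rng = np.random.default_rng(0)
n_in, n_out = 4, 3
W = rng.standard_normal((n_out, n_in)) * 0.1  # array conductances (weights)

def forward(x):
    # Crossbar computes all output dot products in parallel.
    return W @ x

def backward(delta):
    # Same array read in the transpose direction for backpropagation.
    return W.T @ delta

def update(x, delta, lr=0.01):
    # Local, fully parallel update: each cell sees only its own row/column
    # signals, so the whole matrix updates at once as lr * outer(delta, x).
    global W
    W += lr * np.outer(delta, x)

x = rng.standard_normal(n_in)
target = np.zeros(n_out)
y = forward(x)
delta = target - y   # error signal for this layer
update(x, delta)     # weights move to reduce the error on x
```

The key point the abstract is gesturing at: because the update is a local outer product, it needs no weight read-out or external memory traffic, which is where the claimed acceleration over CPU/GPU training comes from.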
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt – Emerging Technologies
In-memory hyperdimensional computing (👻 Ghosted)
Magnetic skyrmion-based synaptic devices (👻 Ghosted)
Memristors -- from In-memory computing, Deep Learning Acceleration, Spiking Neural Networks, to the Future of Neuromorphic and Bio-inspired Computing (👻 Ghosted)
DNA-Based Storage: Trends and Methods (👻 Ghosted)
Neuro-memristive Circuits for Edge Computing: A review (👻 Ghosted)
Died the same way – 👻 Ghosted
Language Models are Few-Shot Learners
PyTorch: An Imperative Style, High-Performance Deep Learning Library
XGBoost: A Scalable Tree Boosting System