Fast and Energy-Efficient CNN Inference on IoT Devices

November 22, 2016 · Entered Twilight · 🏛 arXiv.org

🌅 TWILIGHT: Old Age
Predates the code-sharing era: a pioneer of its time

"Last commit was 8.0 years ago (β‰₯5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: .gitignore, .idea, Intermed_Results, Platforms, README.md, SqueezeNet, app, build.gradle, check.m, checkResults.m, data, gradle.properties, gradle, gradlew, gradlew.bat, loader.m, settings.gradle

Authors: Mohammad Motamedi, Daniel Fong, Soheil Ghiasi
arXiv ID: 1611.07151
Category: cs.DC (Distributed Computing)
Cross-listed: cs.LG
Citations: 20
Venue: arXiv.org
Repository: https://github.com/mtmd/Mobile_ConvNet (⭐ 52)
Last Checked: 1 month ago
Abstract
Convolutional Neural Networks (CNNs) exhibit remarkable performance in various machine learning tasks. As sensor-equipped Internet of Things (IoT) devices permeate every aspect of modern life, it is increasingly important to run CNN inference, a computationally intensive application, on resource-constrained devices. We present a technique for fast and energy-efficient CNN inference on mobile SoC platforms, which are projected to be a major player in the IoT space. We propose techniques for efficient parallelization of CNN inference targeting mobile GPUs, and explore the underlying tradeoffs. Experiments running SqueezeNet on three different mobile devices confirm the effectiveness of our approach. For further study, please refer to the project repository available on our GitHub page: https://github.com/mtmd/Mobile_ConvNet
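The abstract centers on parallelizing CNN inference for mobile GPUs. As a rough, hypothetical illustration of the general idea only (not the authors' implementation, which lives in the Mobile_ConvNet repository above), the Java sketch below computes a direct convolution with each output channel dispatched to its own worker task; the same per-channel independence is what a mobile GPU kernel would exploit.

```java
import java.util.concurrent.*;
import java.util.stream.IntStream;

/**
 * Minimal sketch: direct convolution with output channels partitioned across
 * worker threads, a common way to parallelize CNN inference. Illustration
 * only; class and method names here are made up for this example.
 */
public class ParallelConv {

    // input: [inCh][h][w], weights: [outCh][inCh][k][k], output: [outCh][h-k+1][w-k+1]
    static float[][][] conv(float[][][] input, float[][][][] weights, int threads)
            throws InterruptedException {
        int inCh = input.length, h = input[0].length, w = input[0][0].length;
        int outCh = weights.length, k = weights[0][0].length;
        int oh = h - k + 1, ow = w - k + 1;
        float[][][] out = new float[outCh][oh][ow];

        ExecutorService pool = Executors.newFixedThreadPool(threads);
        // Each output channel is independent, so channels can be computed in parallel.
        for (int oc = 0; oc < outCh; oc++) {
            final int o = oc;
            pool.execute(() -> {
                for (int y = 0; y < oh; y++)
                    for (int x = 0; x < ow; x++) {
                        float acc = 0f;
                        for (int ic = 0; ic < inCh; ic++)
                            for (int ky = 0; ky < k; ky++)
                                for (int kx = 0; kx < k; kx++)
                                    acc += input[ic][y + ky][x + kx] * weights[o][ic][ky][kx];
                        out[o][y][x] = Math.max(0f, acc); // fused ReLU (an assumption here)
                    }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        return out;
    }

    public static void main(String[] args) throws InterruptedException {
        // Tiny random-free example: 3 input channels, 8 output channels, 3x3 kernels.
        float[][][] x = new float[3][16][16];
        float[][][][] wgt = new float[8][3][3][3];
        IntStream.range(0, 3).forEach(c -> x[c][8][8] = 1f);
        IntStream.range(0, 8).forEach(o -> wgt[o][0][1][1] = 0.5f);
        float[][][] y = conv(x, wgt, Runtime.getRuntime().availableProcessors());
        System.out.println("output shape: " + y.length + "x" + y[0].length + "x" + y[0][0].length);
    }
}
```

On an actual mobile SoC the per-channel (or per-tile) work items would be mapped onto the GPU rather than CPU threads, which is where the speed/energy tradeoffs the paper explores come in.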
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt: Distributed Computing