Hello Edge: Keyword Spotting on Microcontrollers

November 20, 2017 · Entered Twilight · arXiv.org

🌅 TWILIGHT: Old Age
Predates the code-sharing era; a pioneer of its time

"Last commit was 7.0 years ago (≥5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: .gitignore, Deployment, LICENSE, Pretrained_models, README.md, fold_batchnorm.py, freeze.py, input_data.py, label_wav.py, models.py, quant_models.py, quant_test.py, silence.wav, test.py, test_pb.py, train.py, train_commands.txt

Authors: Yundong Zhang, Naveen Suda, Liangzhen Lai, Vikas Chandra
arXiv ID: 1711.07128
Category: cs.SD (Sound)
Cross-listed: cs.CL, cs.LG, cs.NE, eess.AS
Citations: 476
Venue: arXiv.org
Repository: https://github.com/ARM-software/ML-KWS-for-MCU ⭐ 1226
Last Checked: 1 month ago
Abstract
Keyword spotting (KWS) is a critical component for enabling speech-based user interactions on smart devices. It requires real-time response and high accuracy for a good user experience. Recently, neural networks have become an attractive choice for KWS architectures because of their superior accuracy compared to traditional speech processing algorithms. Due to its always-on nature, the KWS application has a highly constrained power budget and typically runs on tiny microcontrollers with limited memory and compute capability. The design of a neural network architecture for KWS must consider these constraints. In this work, we perform neural network architecture evaluation and exploration for running KWS on resource-constrained microcontrollers. We train various neural network architectures for keyword spotting published in the literature to compare their accuracy and memory/compute requirements. We show that it is possible to optimize these neural network architectures to fit within the memory and compute constraints of microcontrollers without sacrificing accuracy. We further explore the depthwise separable convolutional neural network (DS-CNN) and compare it against other neural network architectures. DS-CNN achieves an accuracy of 95.4%, which is ~10% higher than the DNN model with a similar number of parameters.
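The parameter savings that let the DS-CNN outperform a similarly sized DNN come from factoring a standard convolution into a per-channel (depthwise) spatial filter followed by a 1x1 (pointwise) channel mixer. The sketch below is illustrative only; the function name, shapes, and layer sizes are assumptions for demonstration, not code from the ML-KWS-for-MCU repository.

```python
import numpy as np

def depthwise_separable_conv(x, dw_kernels, pw_kernels):
    """One depthwise separable convolution (stride 1, 'valid' padding).

    x          : (H, W, C_in) input feature map
    dw_kernels : (k, k, C_in) one spatial filter per input channel
    pw_kernels : (C_in, C_out) 1x1 pointwise channel-mixing weights
    Returns a (H-k+1, W-k+1, C_out) output feature map.
    """
    H, W, C_in = x.shape
    k = dw_kernels.shape[0]
    Ho, Wo = H - k + 1, W - k + 1
    # Depthwise stage: each input channel is filtered independently.
    dw_out = np.zeros((Ho, Wo, C_in))
    for c in range(C_in):
        for i in range(Ho):
            for j in range(Wo):
                dw_out[i, j, c] = np.sum(x[i:i+k, j:j+k, c] * dw_kernels[:, :, c])
    # Pointwise stage: a 1x1 convolution mixes information across channels.
    return dw_out @ pw_kernels

# Parameter-count comparison for one hypothetical layer (biases ignored):
k, C_in, C_out = 3, 64, 64
standard_params = k * k * C_in * C_out          # full conv: 36864 weights
separable_params = k * k * C_in + C_in * C_out  # depthwise + pointwise: 4672 weights
```

With these example sizes the separable layer uses roughly 8x fewer weights and multiply-accumulates than a standard convolution, which is the budget headroom that matters on a microcontroller with tens to hundreds of kilobytes of memory.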
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt: Sound