Learning Sparse & Ternary Neural Networks with Entropy-Constrained Trained Ternarization (EC2T)

April 02, 2020 · Entered Twilight · 🏛 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)

🌅 TWILIGHT: Old Age
Predates the code-sharing era — a pioneer of its time

"Last commit was 5.0 years ago (≥5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: LICENSE, README.md, environment.yml, frozen_assignment_retraining, imgs, masterthesis.pdf, model, model_compound_scaling, model_quantization, model_scoring, requirements.txt, run_compound_scaling.py, run_frozen_retraining.py, run_quantization.py, run_scoring.py

Authors: Arturo Marban, Daniel Becking, Simon Wiedemann, Wojciech Samek
arXiv ID: 2004.01077
Category: cs.LG: Machine Learning
Cross-listed: cs.IT, stat.ML
Citations: 14
Venue: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
Repository: https://github.com/d-becking/efficientCNNs
⭐ 6
Last Checked: 1 month ago
Abstract
Deep neural networks (DNNs) have shown remarkable success in a variety of machine learning applications. The capacity of these models (i.e., their number of parameters) endows them with expressive power and allows them to reach the desired performance. In recent years, there has been increasing interest in deploying DNNs on resource-constrained devices (e.g., mobile devices) with limited energy, memory, and computational budgets. To address this problem, we propose Entropy-Constrained Trained Ternarization (EC2T), a general framework for creating sparse and ternary neural networks that are efficient in terms of storage (e.g., at most two binary masks and two full-precision values are required to store a weight matrix) and computation (e.g., MAC operations are reduced to a few accumulations plus two multiplications). The approach consists of two steps. First, a super-network is created by scaling the dimensions of a pre-trained model (i.e., its width and depth). Subsequently, this super-network is simultaneously pruned (using an entropy constraint) and quantized (that is, ternary values are assigned layer-wise) during training, resulting in a sparse and ternary network representation. We validate the proposed approach on the CIFAR-10, CIFAR-100, and ImageNet datasets, showing its effectiveness in image classification tasks.
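The storage and compute claims in the abstract can be made concrete with a small sketch. This is an illustrative reconstruction, not code from the EC2T repository: the names `ternarize` and `ternary_dot` and the magnitude-threshold assignment rule are assumptions for demonstration. It shows how a ternary weight tensor reduces to two binary masks plus two full-precision scalars, and how a dot product with such weights needs only accumulations plus two multiplications.

```python
import numpy as np

def ternarize(w, threshold):
    """Map weights to {-w_n, 0, +w_p}: two binary masks + two scalars.
    (Illustrative rule: magnitude thresholding; EC2T itself learns the
    assignment under an entropy constraint during training.)"""
    pos_mask = w > threshold
    neg_mask = w < -threshold
    # Centroids: mean magnitude of the surviving positive/negative weights.
    w_p = w[pos_mask].mean() if pos_mask.any() else 0.0
    w_n = -w[neg_mask].mean() if neg_mask.any() else 0.0
    return pos_mask, neg_mask, w_p, w_n

def ternary_dot(x, pos_mask, neg_mask, w_p, w_n):
    """Dot product with ternary weights: accumulate the inputs selected
    by each mask, then apply exactly two multiplications."""
    return w_p * x[pos_mask].sum() - w_n * x[neg_mask].sum()

rng = np.random.default_rng(0)
w = rng.normal(size=64).astype(np.float32)
x = rng.normal(size=64).astype(np.float32)

pos, neg, w_p, w_n = ternarize(w, threshold=0.7)
w_ternary = w_p * pos - w_n * neg      # reconstructed ternary weights
assert np.unique(w_ternary).size <= 3  # only {-w_n, 0, +w_p} remain
assert np.isclose(ternary_dot(x, pos, neg, w_p, w_n), w_ternary @ x)
```

Note the storage cost: each weight needs at most two bits (the two masks), and the zero entries (pruned by the entropy constraint) make both masks sparse.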
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt — Machine Learning