Fast Adjustable Threshold For Uniform Neural Network Quantization (Winning solution of LPIRC-II)
December 19, 2018 · Entered Twilight · International Work-Conference on Artificial and Natural Neural Networks
"Last commit was 7.0 years ago (≥5 year threshold)"
Evidence collected by the PWNC Scanner
Repo contents: Prepare MNasNet weights.ipynb, README.md, Train Thresholds.ipynb, prepare_weights.py, requirements.txt, scripts, settings_config
Authors
Alexander Goncharenko, Andrey Denisov, Sergey Alyamkin, Evgeny Terentev
arXiv ID
1812.07872
Category
cs.LG: Machine Learning
Cross-listed
stat.ML
Citations
21
Venue
International Work-Conference on Artificial and Natural Neural Networks
Repository
https://github.com/agoncharenko1992/FAT-fast_adjustable_threshold
⭐ 19
Last Checked
1 month ago
Abstract
Quantization is a necessary step in porting neural networks to mobile devices: it accelerates inference and reduces memory consumption and model size. It can be performed without fine-tuning, using a calibration procedure (calculation of the parameters necessary for quantization), or the network can be trained with quantization from scratch. Training with quantization from scratch on labeled data is a long and resource-consuming procedure, while quantizing a network without fine-tuning leads to an accuracy drop because of outliers that appear during calibration. In this article we significantly simplify the quantization procedure by introducing trained scale factors for the quantization thresholds. This speeds up quantization with fine-tuning to at most 8 epochs and reduces the requirements on the set of training images. To our knowledge, the proposed method produced the first publicly available quantized version of MNasNet without significant accuracy reduction: 74.8% vs. 75.3% for the original full-precision network. The model and code are ready for use and available at: https://github.com/agoncharenko1992/FAT-fast_adjustable_threshold.
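The abstract's core idea can be illustrated with a minimal sketch: symmetric uniform quantization where a naive calibration threshold (here, the max absolute activation) is adjusted by a scale factor. In the paper this factor is trained; below it is a fixed illustrative value, and the function name, data, and factor are all assumptions, not the authors' implementation.

```python
import numpy as np

def quantize_uniform(x, threshold, bits=8):
    """Symmetric uniform quantization of x into signed `bits`-bit levels,
    clipping at +/- threshold, then dequantizing for comparison."""
    qmax = 2 ** (bits - 1) - 1              # e.g. 127 for 8 bits
    scale = threshold / qmax                # step size of the uniform grid
    q = np.clip(np.round(x / scale), -qmax, qmax)
    return q * scale                        # dequantized values

# Calibration picks an initial threshold (max absolute value); an outlier
# inflates it and wastes resolution on the bulk of the values.
x = np.array([0.1, -0.5, 0.9, 3.0])         # 3.0 acts as an outlier
t_calib = np.abs(x).max()                   # naive calibration threshold
alpha = 0.35                                # illustrative (not trained) scale factor
x_adj = quantize_uniform(x, alpha * t_calib)

# Mean error on the inliers shrinks when the threshold is scaled down,
# at the cost of clipping the outlier.
err_naive = np.abs(quantize_uniform(x[:3], t_calib) - x[:3]).mean()
err_adj = np.abs(x_adj[:3] - x[:3]).mean()
```

The trade-off shown here (finer resolution for typical values vs. clipping rare outliers) is what makes the threshold worth optimizing rather than fixing at calibration time.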
Similar Papers
In the same crypt – Machine Learning
XGBoost: A Scalable Tree Boosting System (R.I.P. · 👻 Ghosted)
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift (R.I.P. · 👻 Ghosted)
Semi-Supervised Classification with Graph Convolutional Networks (R.I.P. · 👻 Ghosted)
Proximal Policy Optimization Algorithms (R.I.P. · 👻 Ghosted)