Fast Adjustable Threshold For Uniform Neural Network Quantization (Winning solution of LPIRC-II)

December 19, 2018 · Entered Twilight · 🏛 International Work-Conference on Artificial and Natural Neural Networks

🌅 TWILIGHT: Old Age
Predates the code-sharing era – a pioneer of its time

"Last commit was 7.0 years ago (โ‰ฅ5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: Prepare MNasNet weights.ipynb, README.md, Train Thresholds.ipynb, prepare_weights.py, requirements.txt, scripts, settings_config

Authors: Alexander Goncharenko, Andrey Denisov, Sergey Alyamkin, Evgeny Terentev
arXiv ID: 1812.07872
Category: cs.LG (Machine Learning), cross-listed stat.ML
Citations: 21
Venue: International Work-Conference on Artificial and Natural Neural Networks
Repository: https://github.com/agoncharenko1992/FAT-fast_adjustable_threshold (⭐ 19)
Last checked: 1 month ago
Abstract
Neural network quantization is a necessary step for porting neural networks to mobile devices. Quantization accelerates inference and reduces memory consumption and model size. It can be performed without fine-tuning using a calibration procedure (calculation of the parameters necessary for quantization), or the network can be trained with quantization from scratch. Training with quantization from scratch on labeled data is a long and resource-consuming procedure. Quantizing a network without fine-tuning leads to an accuracy drop because of outliers that appear during calibration. In this article we propose to simplify the quantization procedure significantly by introducing trained scale factors for the quantization thresholds. This speeds up quantization with fine-tuning to at most 8 epochs and reduces the requirements on the set of training images. To our knowledge, the proposed method yielded the first publicly available quantized version of MNAS without significant accuracy reduction: 74.8% vs. 75.3% for the original full-precision network. The model and code are ready for use and available at: https://github.com/agoncharenko1992/FAT-fast_adjustable_threshold.
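The core idea in the abstract is that the clipping threshold used for uniform quantization is not taken directly from calibration statistics (where outliers inflate it), but is multiplied by a trainable scale factor. A minimal NumPy sketch of that idea follows; it is not the authors' implementation, and the names (`quantize_uniform`, `alpha`, `t_cal`) are illustrative assumptions, not identifiers from the repository.

```python
import numpy as np

def quantize_uniform(x, threshold, bits=8):
    """Symmetric uniform quantization of x into a signed `bits`-bit grid.

    Values are clipped to [-threshold, threshold] and snapped to the
    nearest representable level; the dequantized result is returned so
    the rounding error can be inspected directly.
    """
    qmax = 2 ** (bits - 1) - 1          # e.g. 127 for 8-bit
    scale = threshold / qmax            # step between adjacent levels
    q = np.clip(np.round(x / scale), -qmax, qmax)
    return q * scale

# Calibration picks an initial threshold from observed activations,
# e.g. the maximum absolute value -- which is sensitive to outliers:
x = np.array([-0.9, -0.1, 0.0, 0.2, 0.5, 4.0])  # 4.0 is an outlier
t_cal = np.abs(x).max()                          # calibrated threshold

# A trained scale factor alpha < 1 (illustrative value here; in the
# paper's scheme it would be learned by fine-tuning) tightens the
# threshold so the bulk of the distribution uses more of the grid,
# at the cost of clipping the outlier:
alpha = 0.25
x_q = quantize_uniform(x, alpha * t_cal)
```

With the tightened threshold, the small values are represented with finer resolution, while the outlier saturates at the clipping point, which is exactly the trade-off the trained scale factor is meant to optimize.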
Community shame:
Not yet rated
