Post-Training 4-bit Quantization on Embedding Tables
November 05, 2019 · Entered Twilight · arXiv.org
"No code URL or promise found in abstract"
"Code repo scraped from project page (backfill)"
Evidence collected by the PWNC Scanner
Repo contents: .gitignore, Gemfile, _config.yml, _layouts, _sass, acceptedpapers.md, add_paper_links.py, assets, cfp.md, index.md, jekyll-theme-architect.gemspec, schedule.md, script, talks.md
Authors: Hui Guan, Andrey Malevich, Jiyan Yang, Jongsoo Park, Hector Yuen
arXiv ID: 1911.02079
Category: cs.LG (Machine Learning)
Cross-listed: cs.IR, stat.ML
Citations: 45
Venue: arXiv.org
Repository: https://github.com/LearningSys/neurips19
Last Checked: 22 days ago
Abstract
Continuous representations have been widely adopted in recommender systems, where a large number of entities are represented using embedding vectors. As the cardinality of the entities increases, the embedding components can easily contain millions of parameters and become the bottleneck in both storage and inference due to large memory consumption. This work focuses on post-training 4-bit quantization of the continuous embeddings. We propose row-wise uniform quantization with greedy search and codebook-based quantization, which consistently outperform state-of-the-art quantization approaches in reducing accuracy degradation. We deploy our uniform quantization technique on a production model at Facebook and demonstrate that it can reduce the model size to only 13.89% of the single-precision version while model quality stays neutral.
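To make the abstract's row-wise scheme concrete, here is a minimal sketch: each embedding row gets its own scale and bias, and a greedy search over shrinking clipping ranges trades dynamic range against quantization resolution. The function names, the 2% shrink schedule, the L2 error criterion, and storing one 4-bit code per uint8 (rather than packing two per byte) are illustrative assumptions for this sketch, not the paper's exact algorithm.

```python
import numpy as np

def quantize_row_uniform(row, num_bits=4):
    """Uniformly quantize one embedding row so that row ~ codes * scale + bias."""
    levels = 2 ** num_bits - 1                     # 15 levels for 4-bit
    lo, hi = row.min(), row.max()
    scale = (hi - lo) / levels if hi > lo else 1.0
    codes = np.clip(np.round((row - lo) / scale), 0, levels).astype(np.uint8)
    return codes, scale, lo

def dequantize_row(codes, scale, bias):
    return codes.astype(np.float32) * scale + bias

def quantize_row_greedy(row, num_bits=4, shrink_steps=20):
    """Greedily shrink the clipping range and keep the setting with the
    smallest L2 reconstruction error (a common post-training calibration
    heuristic; the schedule below is an assumption of this sketch)."""
    best = None
    lo0, hi0 = row.min(), row.max()
    mid = 0.5 * (lo0 + hi0)
    for step in range(shrink_steps):
        frac = 1.0 - 0.02 * step                   # shrink range 2% per step
        half = 0.5 * (hi0 - lo0) * frac
        clipped = np.clip(row, mid - half, mid + half)
        codes, scale, bias = quantize_row_uniform(clipped, num_bits)
        err = np.sum((dequantize_row(codes, scale, bias) - row) ** 2)
        if best is None or err < best[0]:
            best = (err, codes, scale, bias)
    return best[1], best[2], best[3]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    row = rng.standard_normal(72).astype(np.float32)  # one embedding row
    codes, scale, bias = quantize_row_greedy(row)
    err = np.abs(dequantize_row(codes, scale, bias) - row).max()
    print(f"max abs reconstruction error: {err:.4f}")
```

On storage: the 4-bit codes alone take 4/32 = 12.5% of an fp32 table, and the per-row scale and bias add a few bytes per row, which is broadly in line with the 13.89% model size reported in the abstract.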
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt · Machine Learning
XGBoost: A Scalable Tree Boosting System · R.I.P. 👻 Ghosted
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift · R.I.P. 👻 Ghosted
Semi-Supervised Classification with Graph Convolutional Networks · R.I.P. 👻 Ghosted
Proximal Policy Optimization Algorithms · R.I.P. 👻 Ghosted