Post-Training 4-bit Quantization on Embedding Tables

November 05, 2019 · Entered Twilight · 🏛 arXiv.org

🌅 TWILIGHT: Old Age
Predates the code-sharing era: a pioneer of its time

"No code URL or promise found in abstract"
"Code repo scraped from project page (backfill)"

Evidence collected by the PWNC Scanner

Repo contents: .gitignore, Gemfile, _config.yml, _layouts, _sass, acceptedpapers.md, add_paper_links.py, assets, cfp.md, index.md, jekyll-theme-architect.gemspec, schedule.md, script, talks.md

Authors: Hui Guan, Andrey Malevich, Jiyan Yang, Jongsoo Park, Hector Yuen
arXiv ID: 1911.02079
Category: cs.LG (Machine Learning)
Cross-listed: cs.IR, stat.ML
Citations: 45
Venue: arXiv.org
Repository: https://github.com/LearningSys/neurips19
Last Checked: 22 days ago
Abstract
Continuous representations have been widely adopted in recommender systems where a large number of entities are represented using embedding vectors. As the cardinality of the entities increases, the embedding components can easily contain millions of parameters and become the bottleneck in both storage and inference due to large memory consumption. This work focuses on post-training 4-bit quantization on the continuous embeddings. We propose row-wise uniform quantization with greedy search and codebook-based quantization that consistently outperforms state-of-the-art quantization approaches on reducing accuracy degradation. We deploy our uniform quantization technique on a production model in Facebook and demonstrate that it can reduce the model size to only 13.89% of the single-precision version while the model quality stays neutral.
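To make the row-wise uniform quantization idea from the abstract concrete, here is a minimal sketch in NumPy. It uses plain per-row min/max scaling to 16 levels (4 bits) with a per-row offset and scale kept in float; the paper's greedy search over the clipping range and its codebook-based variant are not reproduced here, and all function names are hypothetical.

```python
import numpy as np

def quantize_row_uniform_4bit(row):
    """Uniformly quantize one embedding row to 4 bits (16 levels).

    Sketch only: per-row min/max scaling, not the paper's greedy
    range search. Returns the integer codes plus the per-row
    offset and scale needed to dequantize.
    """
    lo, hi = float(row.min()), float(row.max())
    scale = (hi - lo) / 15.0 if hi > lo else 1.0
    codes = np.clip(np.round((row - lo) / scale), 0, 15).astype(np.uint8)
    return codes, lo, scale

def dequantize_row(codes, lo, scale):
    """Map 4-bit codes back to approximate float values."""
    return codes.astype(np.float32) * scale + lo

# Quantize a toy embedding table row by row and measure the error.
rng = np.random.default_rng(0)
table = rng.standard_normal((4, 8)).astype(np.float32)
recon = np.stack(
    [dequantize_row(*quantize_row_uniform_4bit(r)) for r in table]
)
max_err = np.abs(table - recon).max()
```

Because each row stores its own float offset and scale alongside the 4-bit codes, the compressed size is slightly above the raw 4/32 = 12.5% ratio, which is consistent with the ~13.89% figure reported in the abstract for a production model.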
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt: Machine Learning