Bayesian Sparsification of Gated Recurrent Neural Networks

December 12, 2018 · Entered Twilight · arXiv.org

🌅 TWILIGHT: Old Age
Predates the code-sharing era: a pioneer of its time

"Last commit was 7.0 years ago (โ‰ฅ5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: Code, Data, Experiments, LICENSE, Posters, README.md

Authors: Ekaterina Lobacheva, Nadezhda Chirkova, Dmitry Vetrov
arXiv ID: 1812.05692
Category: cs.LG (Machine Learning)
Cross-listed: cs.CL, stat.ML
Citations: 2
Venue: arXiv.org
Repository: https://github.com/tipt0p/SparseBayesianRNN
Stars: ⭐ 16
Last checked: 2 months ago
Abstract
Bayesian methods have been successfully applied to sparsify the weights of neural networks and to remove structural units, e.g. neurons, from them. We apply and further develop this approach for gated recurrent architectures. Specifically, in addition to sparsifying individual weights and neurons, we propose to sparsify the preactivations of gates and the information flow in LSTM. This makes some gates and information-flow components constant, speeds up the forward pass, and improves compression. Moreover, the resulting structure of gate sparsity is interpretable and depends on the task. Code is available on GitHub: https://github.com/tipt0p/SparseBayesianRNN
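
The claimed speedup comes from gate preactivation components that the Bayesian procedure prunes away: at inference they behave as constants, so the corresponding rows of the LSTM weight matrices never need to be multiplied. The following minimal NumPy sketch illustrates that idea only; the function name, the boolean mask, and the constant-preactivation vector are illustrative assumptions, not the authors' implementation (see the linked repository for the original code).

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step_sparsified(x, h, c, W, U, b, active_mask, const_preact):
    # One LSTM step after sparsification of gate preactivations.
    # Components with active_mask == False are frozen to const_preact,
    # so the matching rows of W and U are skipped entirely.
    # Shapes: x (n_in,), h and c (n_hid,), W (4*n_hid, n_in),
    # U (4*n_hid, n_hid), b, active_mask, const_preact (4*n_hid,).
    preact = const_preact.copy()
    idx = np.where(active_mask)[0]
    preact[idx] = W[idx] @ x + U[idx] @ h + b[idx]
    i, f, g, o = np.split(preact, 4)          # input, forget, cell, output
    c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h_new = sigmoid(o) * np.tanh(c_new)
    return h_new, c_new

# Toy usage: pretend sparsification kept ~30% of the gate preactivations.
n_in, n_hid = 8, 16
rng = np.random.default_rng(0)
W = rng.normal(size=(4 * n_hid, n_in))
U = rng.normal(size=(4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)
active_mask = rng.random(4 * n_hid) < 0.3
const_preact = np.where(active_mask, 0.0, rng.normal(size=4 * n_hid))
h, c = np.zeros(n_hid), np.zeros(n_hid)
h, c = lstm_step_sparsified(rng.normal(size=n_in), h, c, W, U, b, active_mask, const_preact)

The fewer rows survive in active_mask, the less matrix arithmetic the step performs, which is the mechanism behind the compression and forward-pass speedup described in the abstract.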
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt: Machine Learning