R.I.P.
👻
Ghosted
Bayesian Sparsification of Gated Recurrent Neural Networks
December 12, 2018 · Entered Twilight · arXiv.org
"Last commit was 7.0 years ago (โฅ5 year threshold)"
Evidence collected by the PWNC Scanner
Repo contents: Code, Data, Experiments, LICENSE, Posters, README.md
Authors
Ekaterina Lobacheva, Nadezhda Chirkova, Dmitry Vetrov
arXiv ID
1812.05692
Category
cs.LG: Machine Learning
Cross-listed
cs.CL, stat.ML
Citations
2
Venue
arXiv.org
Repository
https://github.com/tipt0p/SparseBayesianRNN
⭐ 16
Last Checked
2 months ago
Abstract
Bayesian methods have been successfully applied to sparsify the weights of neural networks and to remove structural units from them, e.g., neurons. We apply and further develop this approach for gated recurrent architectures. Specifically, in addition to sparsifying individual weights and neurons, we propose to sparsify the preactivations of gates and the information flow in LSTM. This makes some gates and information-flow components constant, speeds up the forward pass, and improves compression. Moreover, the resulting structure of gate sparsity is interpretable and depends on the task. Code is available on GitHub: https://github.com/tipt0p/SparseBayesianRNN
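The sparsification described in the abstract builds on sparse variational dropout (Molchanov et al., 2017), where each weight gets a factorized Gaussian posterior and weights whose inferred dropout rate is high are pruned. Below is a minimal PyTorch-style sketch of that building block; it is not the authors' released code (their actual implementation lives in the repository linked above), and the class name `SparseVDLinear`, the threshold of 3.0, and the initialization details are all illustrative assumptions. In the paper's LSTM setting, layers like this would parameterize the gate preactivations, with additional group variables used to zero out whole neurons and gate components.

```python
# A minimal sketch (NOT the authors' code) of a sparse-variational-dropout
# linear layer; names and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseVDLinear(nn.Module):
    """Linear layer with a fully factorized Gaussian posterior over weights.

    Weights whose log-alpha (log dropout-rate parameter) exceeds a threshold
    are treated as zero at test time, which sparsifies the layer.
    """
    def __init__(self, in_features, out_features, threshold=3.0):
        super().__init__()
        self.threshold = threshold
        self.mu = nn.Parameter(torch.empty(out_features, in_features))
        self.log_sigma2 = nn.Parameter(
            torch.full((out_features, in_features), -10.0))
        self.bias = nn.Parameter(torch.zeros(out_features))
        nn.init.xavier_uniform_(self.mu)

    @property
    def log_alpha(self):
        # log alpha = log sigma^2 - log mu^2, clamped for numerical stability.
        return torch.clamp(
            self.log_sigma2 - torch.log(self.mu ** 2 + 1e-8), -10.0, 10.0)

    def forward(self, x):
        if self.training:
            # Local reparameterization: sample the preactivation directly.
            mean = F.linear(x, self.mu, self.bias)
            var = F.linear(x ** 2, torch.exp(self.log_sigma2))
            return mean + torch.sqrt(var + 1e-8) * torch.randn_like(mean)
        # At test time, prune weights with high dropout rate.
        mask = (self.log_alpha < self.threshold).float()
        return F.linear(x, self.mu * mask, self.bias)

    def kl(self):
        # Approximate KL to the log-uniform prior (Molchanov et al., 2017).
        k1, k2, k3 = 0.63576, 1.87320, 1.48695
        la = self.log_alpha
        neg_kl = k1 * torch.sigmoid(k2 + k3 * la) - 0.5 * F.softplus(-la) - k1
        return -neg_kl.sum()

# Usage sketch: the KL term is added to the task loss, e.g.
#   layer = SparseVDLinear(emb_dim, 4 * hidden_size)  # LSTM gate preactivations
#   loss = task_loss + kl_weight * layer.kl()
```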
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt – Machine Learning
XGBoost: A Scalable Tree Boosting System
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
Semi-Supervised Classification with Graph Convolutional Networks
Proximal Policy Optimization Algorithms