Comparing Fixed and Adaptive Computation Time for Recurrent Neural Networks
March 21, 2018 · Entered Twilight · International Conference on Learning Representations
"No code URL or promise found in abstract"
"Derived repo from GitHub Pages (backfill)"
Evidence collected by the PWNC Scanner
Repo contents: .gitignore, LICENSE, README.md, Thesis.pdf, fojo-2018-iclrw-repeatrnn.pdf, src
Authors
Daniel Fojo, Víctor Campos, Xavier Giro-i-Nieto
arXiv ID
1803.08165
Category
cs.NE: Neural and Evolutionary Computing
Cross-listed
cs.LG
Citations
4
Venue
International Conference on Learning Representations
Repository
https://github.com/imatge-upc/danifojo-2018-repeatrnn
★ 35
Last Checked
12 days ago
Abstract
Adaptive Computation Time for Recurrent Neural Networks (ACT) is one of the most promising architectures for variable computation. ACT adapts to the input sequence by looking at each sample more than once, and learns how many times it should do so. In this paper, we compare ACT to Repeat-RNN, a novel architecture based on repeating each sample a fixed number of times. We found the surprising result that Repeat-RNN performs as well as ACT on the selected tasks. Source code in TensorFlow and PyTorch is publicly available at https://imatge-upc.github.io/danifojo-2018-repeatrnn/
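The core idea of Repeat-RNN, as the abstract describes it, is simply to apply the recurrent update to each input sample a fixed number of times instead of learning that number as ACT does. A minimal NumPy sketch of that idea follows; this is a hypothetical illustration with a plain tanh recurrence and made-up function names, not the paper's actual LSTM/GRU implementation in TensorFlow or PyTorch.

```python
import numpy as np

def repeat_rnn_step(h, x, W_h, W_x, b, repeats):
    """One Repeat-RNN step: apply the recurrent update to the SAME
    input x a fixed number of times (the 'repeat' hyperparameter)."""
    for _ in range(repeats):
        h = np.tanh(W_h @ h + W_x @ x + b)
    return h

def run_repeat_rnn(inputs, hidden_size, repeats, seed=0):
    """Process a sequence, looking at each sample `repeats` times.
    Weights are random placeholders for illustration only."""
    rng = np.random.default_rng(seed)
    input_size = inputs.shape[1]
    W_h = rng.normal(scale=0.1, size=(hidden_size, hidden_size))
    W_x = rng.normal(scale=0.1, size=(hidden_size, input_size))
    b = np.zeros(hidden_size)
    h = np.zeros(hidden_size)
    for x in inputs:                      # one sample per time step
        h = repeat_rnn_step(h, x, W_h, W_x, b, repeats)
    return h

# Usage: a length-5 sequence of 3-dim inputs, each seen twice.
h = run_repeat_rnn(np.ones((5, 3)), hidden_size=4, repeats=2)
```

With `repeats=1` this reduces to an ordinary RNN; the paper's comparison point is that a fixed `repeats > 1` can match ACT's learned, input-dependent pondering on the studied tasks.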
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt · Neural and Evolutionary Computing
Progressive Growing of GANs for Improved Quality, Stability, and Variation
👻 Ghosted
Learning both Weights and Connections for Efficient Neural Networks
👻 Ghosted
LSTM: A Search Space Odyssey
👻 Ghosted
A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks
👻 Ghosted