Adaptive Transformers in RL
April 08, 2020 · Entered Twilight · arXiv.org
"Last commit was 5.0 years ago (โฅ5 year threshold)"
Evidence collected by the PWNC Scanner
Repo contents: .gitignore, Implementations, Model, README.md, StableTransformersReplication, Transformer-XLCode, adaptive_span2, dqn.py, old_monobeast_test.py, old_transformer_xl.py, replayBuffer.py, requirements.txt, tester.py, torchbeast, train.py, transformerDqn.py
Authors
Shakti Kumar, Jerrod Parker, Panteha Naderian
arXiv ID
2004.03761
Category
cs.LG: Machine Learning
Cross-listed
cs.AI, cs.NE
Citations
17
Venue
arXiv.org
Repository
https://github.com/jerrodparker20/adaptive-transformers-in-rl
⭐ 136
Last Checked
1 month ago
Abstract
Recent developments in Transformers have opened interesting new areas of research in partially observable reinforcement learning tasks. Results from late 2019 showed that Transformers are able to outperform LSTMs on both memory-intensive and reactive tasks. In this work we first partially replicate the results shown in Stabilizing Transformers in RL on both reactive and memory-based environments. We then show a performance improvement coupled with reduced computation when adding adaptive attention span to this Stable Transformer on a challenging DMLab30 environment. The code for all our experiments and models is available at https://github.com/jerrodparker20/adaptive-transformers-in-rl.
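The adaptive attention span the abstract refers to (Sukhbaatar et al., 2019) lets each attention head learn how far back it needs to look, so reactive heads pay for only a short context while memory heads keep a long one. The paper's own implementation lives in the repo (see adaptive_span2); the PyTorch sketch below is only a minimal illustration of the soft-masking idea, and the module name, tensor shapes, sigmoid parameterisation, and single-query simplification are assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class AdaptiveSpanMask(nn.Module):
    """Soft attention-span mask in the style of Sukhbaatar et al. (2019).

    Each head owns a learnable span z. Attention weights for keys more
    than z steps in the past are ramped smoothly to zero over a ramp of
    width R, so the effective context length is learned per head.
    """

    def __init__(self, n_heads: int, max_span: int, ramp: int = 32):
        super().__init__()
        self.max_span = max_span
        self.ramp = ramp
        # one learnable span parameter per head, squashed to (0, 1) below
        self.z = nn.Parameter(torch.zeros(n_heads, 1))

    def current_span(self) -> torch.Tensor:
        # span in tokens, shape (n_heads, 1)
        return self.max_span * torch.sigmoid(self.z)

    def forward(self, attn: torch.Tensor) -> torch.Tensor:
        # attn: (batch, n_heads, key_len) softmax weights for the current
        # timestep's query; keys ordered oldest -> newest, the single-query
        # case an RL agent hits when acting step by step
        key_len = attn.size(-1)
        distance = torch.arange(
            key_len - 1, -1, -1, device=attn.device, dtype=attn.dtype
        )
        # m(x) = clamp((z + R - x) / R, 0, 1): 1 inside the span,
        # a linear ramp of width R, then exactly 0 beyond it
        mask = torch.clamp(
            (self.current_span() + self.ramp - distance) / self.ramp, 0.0, 1.0
        )
        masked = attn * mask  # broadcasts to (batch, n_heads, key_len)
        # renormalise so each head's weights still sum to 1
        return masked / (masked.sum(dim=-1, keepdim=True) + 1e-8)

    def span_penalty(self) -> torch.Tensor:
        # L1 penalty on the spans; adding this to the loss is what makes
        # heads shrink their context when long memory is not needed
        return self.current_span().mean()
```

In training, something like loss = rl_loss + lam * mask.span_penalty() (with a small, hypothetical coefficient lam) is what drives spans down; shrunken spans let the cache of old hidden states be truncated, which is the source of the reduced computation the abstract mentions.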
Similar Papers
In the same crypt · Machine Learning
XGBoost: A Scalable Tree Boosting System · 👻 Ghosted
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift · 👻 Ghosted
Semi-Supervised Classification with Graph Convolutional Networks · 👻 Ghosted
Proximal Policy Optimization Algorithms · 👻 Ghosted