R.I.P. 👻 Ghosted
Anti-Exploration by Random Network Distillation
January 31, 2023 · Entered Twilight · International Conference on Machine Learning
Repo contents: .gitignore, Dockerfile, LICENSE, README.md, configs, offline_sac, requirements.txt
Authors
Alexander Nikulin, Vladislav Kurenkov, Denis Tarasov, Sergey Kolesnikov
arXiv ID
2301.13616
Category
cs.LG: Machine Learning
Cross-listed
cs.AI, cs.NE
Citations
48
Venue
International Conference on Machine Learning
Repository
https://github.com/tinkoff-ai/sac-rnd
⭐ 56
Last Checked
1 month ago
Abstract
Despite the success of Random Network Distillation (RND) in various domains, it was shown as not discriminative enough to be used as an uncertainty estimator for penalizing out-of-distribution actions in offline reinforcement learning. In this paper, we revisit these results and show that, with a naive choice of conditioning for the RND prior, it becomes infeasible for the actor to effectively minimize the anti-exploration bonus and discriminativity is not an issue. We show that this limitation can be avoided with conditioning based on Feature-wise Linear Modulation (FiLM), resulting in a simple and efficient ensemble-free algorithm based on Soft Actor-Critic. We evaluate it on the D4RL benchmark, showing that it is capable of achieving performance comparable to ensemble-based methods and outperforming ensemble-free approaches by a wide margin.
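To make the abstract's central idea concrete, here is a minimal PyTorch sketch of an RND anti-exploration bonus with FiLM conditioning on the action. This is an illustrative reconstruction, not the repository's implementation: the class and function names (FiLMBlock, RNDNetwork, anti_exploration_bonus) and layer sizes are hypothetical, and the paper's argument concerns conditioning the prior network; the predictor is given the same layout here only for brevity.

```python
import torch
import torch.nn as nn

class FiLMBlock(nn.Module):
    """Feature-wise Linear Modulation: state features are scaled and
    shifted by coefficients produced from the action (hypothetical layout)."""
    def __init__(self, state_dim, action_dim, hidden_dim):
        super().__init__()
        self.linear = nn.Linear(state_dim, hidden_dim)
        # One linear layer emits both the scale (gamma) and shift (beta).
        self.film = nn.Linear(action_dim, 2 * hidden_dim)

    def forward(self, state, action):
        h = self.linear(state)
        gamma, beta = self.film(action).chunk(2, dim=-1)
        return torch.relu(gamma * h + beta)

class RNDNetwork(nn.Module):
    """Shared architecture for the frozen random prior and the trained
    predictor; both embed a (state, action) pair into a small vector."""
    def __init__(self, state_dim, action_dim, hidden_dim=256, embed_dim=32):
        super().__init__()
        self.film = FiLMBlock(state_dim, action_dim, hidden_dim)
        self.head = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, embed_dim),
        )

    def forward(self, state, action):
        return self.head(self.film(state, action))

def anti_exploration_bonus(prior, predictor, state, action):
    """RND bonus: squared prediction error of the trained predictor
    against the frozen, randomly initialized prior. It stays small on
    in-distribution (state, action) pairs the predictor was fitted on
    and grows on out-of-distribution ones."""
    with torch.no_grad():
        target = prior(state, action)  # prior is never updated
    return (predictor(state, action) - target).pow(2).sum(-1)
```

In an SAC-style setup, the predictor would be fit by minimizing this error on dataset transitions, while the bonus is used as a penalty that the actor must minimize, discouraging out-of-distribution actions as the abstract describes.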
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt · Machine Learning
XGBoost: A Scalable Tree Boosting System
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
Semi-Supervised Classification with Graph Convolutional Networks
Proximal Policy Optimization Algorithms