Truly Deterministic Policy Optimization

May 30, 2022 · Entered Twilight · 🏛 Neural Information Processing Systems

💀 TWILIGHT: Eternal Rest
Repo abandoned since publication

Repo contents: .gitignore, README.md, agents, bench, cleg, envs, opt, requirements.txt, train.py, train.sh, xpo

Authors: Ehsan Saleh, Saba Ghaffari, Timothy Bretl, Matthew West
arXiv ID: 2205.15379
Category: cs.AI (Artificial Intelligence)
Cross-listed: cs.LG, cs.RO, eess.SY
Citations: 3
Venue: Neural Information Processing Systems
Repository: https://github.com/ehsansaleh/code_tdpo (⭐ 5)
Last checked: 1 month ago
Abstract
In this paper, we present a policy gradient method that avoids exploratory noise injection and performs policy search over the deterministic landscape. By avoiding noise injection, all sources of estimation variance can be eliminated in systems with deterministic dynamics (up to the initial state distribution). Since deterministic policy regularization is impossible with traditional non-metric measures such as the KL divergence, we derive a Wasserstein-based quadratic model for our purposes. We state conditions on the system model under which a monotonic policy improvement guarantee can be established, propose a surrogate function for policy gradient estimation, and show that exact advantage estimates can be computed when both the state transition model and the policy are deterministic. Finally, we describe two novel robotic control environments -- one with non-local rewards in the frequency domain and the other with a long horizon (8000 time-steps) -- for which our policy gradient method (TDPO) significantly outperforms existing methods (PPO, TRPO, DDPG, and TD3). Our implementation, with all the experimental settings, is available at https://github.com/ehsansaleh/code_tdpo.
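
As a hedged aside (an illustration consistent with the abstract, not taken from the paper's own derivation): a deterministic policy's action distribution at each state is a Dirac delta, which is why KL-based regularization degenerates while a Wasserstein trust region stays finite and usable:

```latex
% Between two Dirac deltas, KL is degenerate (0 or +infinity), while the
% 2-Wasserstein distance reduces to the plain distance between the actions.
\mathrm{KL}\!\left(\delta_{\mu_\theta(s)} \,\middle\|\, \delta_{\mu_{\theta'}(s)}\right)
  = \begin{cases} 0 & \text{if } \mu_\theta(s) = \mu_{\theta'}(s) \\ +\infty & \text{otherwise} \end{cases}
\qquad
W_2\!\left(\delta_{\mu_\theta(s)}, \delta_{\mu_{\theta'}(s)}\right)
  = \bigl\lVert \mu_\theta(s) - \mu_{\theta'}(s) \bigr\rVert_2
```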
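
Likewise, the "exact advantage" claim has a simple mechanical reading: with deterministic dynamics and a deterministic policy, a single rollout per branch yields the true discounted return, so A(s, a) = Q(s, a) - V(s) needs no Monte-Carlo averaging. A minimal sketch follows; the `sim.step(s, a) -> (s', r, done)` interface, `env.clone()`, and the `policy` callable are hypothetical stand-ins, not the repo's actual API:

```python
def exact_advantage(env, policy, state, action, gamma=0.99, horizon=1000):
    """Exact A(s, a) = Q(s, a) - V(s) under deterministic dynamics and policy.

    With no stochasticity beyond the (fixed) initial state, one rollout per
    branch returns the true discounted return rather than an estimate.
    """
    def rollout_return(sim, first_action):
        total, discount, s, a = 0.0, 1.0, state, first_action
        for _ in range(horizon):
            s, reward, done = sim.step(s, a)  # deterministic transition
            total += discount * reward
            if done:
                break
            discount *= gamma
            a = policy(s)  # deterministic action selection
        return total

    # Q(s, a): take `action` first, then follow the policy thereafter.
    q_sa = rollout_return(env.clone(), action)
    # V(s): follow the policy from the very first step.
    v_s = rollout_return(env.clone(), policy(state))
    return q_sa - v_s
```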
📜 Similar Papers

In the same crypt · Artificial Intelligence