R.I.P.
👻
Ghosted
Truly Deterministic Policy Optimization
May 30, 2022 · Entered Twilight · Neural Information Processing Systems
Repo contents: .gitignore, README.md, agents, bench, cleg, envs, opt, requirements.txt, train.py, train.sh, xpo
Authors
Ehsan Saleh, Saba Ghaffari, Timothy Bretl, Matthew West
arXiv ID
2205.15379
Category
cs.AI: Artificial Intelligence
Cross-listed
cs.LG, cs.RO, eess.SY
Citations
3
Venue
Neural Information Processing Systems
Repository
https://github.com/ehsansaleh/code_tdpo
⭐ 5
Last Checked
1 month ago
Abstract
In this paper, we present a policy gradient method that avoids exploratory noise injection and performs policy search over the deterministic landscape. By avoiding noise injection, all sources of estimation variance can be eliminated in systems with deterministic dynamics (up to the initial state distribution). Since deterministic policy regularization is impossible using traditional non-metric measures such as the KL divergence, we derive a Wasserstein-based quadratic model for our purposes. We state conditions on the system model under which it is possible to establish a monotonic policy improvement guarantee, propose a surrogate function for policy gradient estimation, and show that it is possible to compute exact advantage estimates if both the state transition model and the policy are deterministic. Finally, we describe two novel robotic control environments -- one with non-local rewards in the frequency domain and the other with a long horizon (8000 time-steps) -- for which our policy gradient method (TDPO) significantly outperforms existing methods (PPO, TRPO, DDPG, and TD3). Our implementation with all the experimental settings is available at https://github.com/ehsansaleh/code_tdpo.
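The abstract's key observation -- that deterministic dynamics plus a deterministic policy make exact, variance-free advantage estimation possible -- can be illustrated with a toy sketch. The linear dynamics, quadratic reward, and linear policy below are illustrative assumptions for demonstration only; they are not the paper's environments or method.

```python
# Hedged sketch: with deterministic dynamics and a deterministic policy,
# a single rollout yields the exact discounted return (no sampling noise),
# so advantages can be computed exactly from two rollouts.
# All dynamics/reward/policy choices here are toy assumptions.

def step(s, a):
    """Deterministic toy dynamics (illustrative assumption)."""
    return 0.9 * s + a

def reward(s, a):
    """Quadratic cost as reward (illustrative assumption)."""
    return -(s ** 2) - 0.1 * (a ** 2)

def policy(s, theta):
    """Deterministic linear policy (illustrative assumption)."""
    return theta * s

def exact_return(s, theta, gamma=0.99, horizon=100):
    """Exact discounted return: everything is deterministic,
    so one rollout suffices -- no Monte Carlo averaging needed."""
    ret = 0.0
    for t in range(horizon):
        a = policy(s, theta)
        ret += (gamma ** t) * reward(s, a)
        s = step(s, a)
    return ret

def exact_advantage(s, a, theta, gamma=0.99, horizon=100):
    """A(s, a) = Q(s, a) - V(s), with each term computed exactly."""
    v_s = exact_return(s, theta, gamma, horizon)
    q_sa = reward(s, a) + gamma * exact_return(step(s, a), theta, gamma, horizon)
    return q_sa - v_s

# The on-policy action has (numerically) zero advantage, as expected.
s0, theta = 1.0, -0.5
print(exact_advantage(s0, policy(s0, theta), theta))  # ~0 up to horizon truncation
```

In a stochastic setting both `exact_return` calls would need many sampled rollouts; here each is a single pass, which is the variance elimination the abstract refers to.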
Similar Papers
In the same crypt · Artificial Intelligence
Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI
Addressing Function Approximation Error in Actor-Critic Methods
Explanation in Artificial Intelligence: Insights from the Social Sciences
Think you have Solved Question Answering? Try ARC, the AI2 Reasoning Challenge