Neural Dynamic Policies for End-to-End Sensorimotor Learning

December 04, 2020 · Entered Twilight · 🏛 Neural Information Processing Systems

🌅 TWILIGHT: Old Age
Predates the code-sharing era: a pioneer of its time

"No code URL or promise found in abstract"
"Derived repo from GitHub Pages (backfill)"

Evidence collected by the PWNC Scanner

Repo contents: .gitignore, LICENSE, README.md, arguments.py, dmp, dnc, images, main_il.py, main_rl.py, metaworld, ndp.yaml, pytorch-a2c-ppo-acktr-gail, requirements.txt, run_rl.sh, vis_ndp_policy.py, vis_policy.sh

Authors: Shikhar Bahl, Mustafa Mukadam, Abhinav Gupta, Deepak Pathak
arXiv ID: 2012.02788
Category: cs.LG (Machine Learning)
Cross-listed: cs.AI, cs.CV, cs.RO, stat.ML
Citations: 90
Venue: Neural Information Processing Systems
Repository: https://github.com/shikharbahl/neural-dynamic-policies ⭐ 76
Last Checked: 7 days ago
Abstract
The current dominant paradigm in sensorimotor control, whether imitation or reinforcement learning, is to train policies directly in raw action spaces such as torque, joint angle, or end-effector position. This forces the agent to make decisions individually at each timestep in training, and hence, limits the scalability to continuous, high-dimensional, and long-horizon tasks. In contrast, research in classical robotics has, for a long time, exploited dynamical systems as a policy representation to learn robot behaviors via demonstrations. These techniques, however, lack the flexibility and generalizability provided by deep learning or reinforcement learning and have remained under-explored in such settings. In this work, we begin to close this gap and embed the structure of a dynamical system into deep neural network-based policies by reparameterizing action spaces via second-order differential equations. We propose Neural Dynamic Policies (NDPs) that make predictions in trajectory distribution space as opposed to prior policy learning methods where actions represent the raw control space. The embedded structure allows end-to-end policy learning for both reinforcement and imitation learning setups. We show that NDPs outperform the prior state-of-the-art in terms of either efficiency or performance across several robotic control tasks for both imitation and reinforcement learning setups. Project video and code are available at https://shikharbahl.github.io/neural-dynamic-policies/
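The abstract's core idea, reparameterizing the action space through a second-order differential equation, follows the classical dynamic movement primitive (DMP) formulation that NDPs embed into the policy network. A minimal sketch of integrating such a system is below; this is an illustrative toy rollout, not the authors' implementation, and the function name, basis-function parameterization, and default gains are assumptions chosen for clarity:

```python
import math

def rollout_second_order(y0, g, weights, alpha=25.0, beta=6.25,
                         alpha_x=1.0, dt=0.01, steps=100):
    """Euler-integrate a DMP-style second-order system (illustrative sketch).

    ydd = alpha * (beta * (g - y) - yd) + forcing(x)

    In an NDP-like setup, a network would predict the goal g and the
    forcing-term weights; here they are plain arguments. The forcing term
    is a weighted sum of radial basis functions of a phase variable x
    that decays from 1 toward 0, so the system converges to g.
    """
    n = len(weights)
    centers = [math.exp(-alpha_x * i / max(n - 1, 1)) for i in range(n)]
    width = n ** 1.5  # assumed common width for all basis functions
    y, yd, x = y0, 0.0, 1.0
    traj = []
    for _ in range(steps):
        psi = [math.exp(-width * (x - c) ** 2) for c in centers]
        forcing = (x * (g - y0)
                   * sum(p * w for p, w in zip(psi, weights))
                   / (sum(psi) + 1e-10))
        ydd = alpha * (beta * (g - y) - yd) + forcing
        yd += ydd * dt
        y += yd * dt
        x += -alpha_x * x * dt  # phase decays exponentially
        traj.append(y)
    return traj

# With zero forcing weights the trajectory settles at the goal g.
traj = rollout_second_order(y0=0.0, g=1.0, weights=[0.0] * 10)
```

With `beta = alpha / 4` the unforced system is critically damped, so the rollout approaches the goal without overshoot; the learned forcing term is what shapes the trajectory along the way.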
Community shame:
Not yet rated

📜 Similar Papers

In the same crypt: Machine Learning