Latent Space Roadmap for Visual Action Planning of Deformable and Rigid Object Manipulation

March 19, 2020 · Entered Twilight · 🏛 IEEE/RSJ International Conference on Intelligent Robots and Systems

🌅 TWILIGHT: Old Age
Predates the code-sharing era: a pioneer of its time

"No code URL or promise found in abstract"
"Derived repo from GitHub Pages (backfill)"

Evidence collected by the PWNC Scanner

Repo contents: .gitignore, L1_hub.html, LICENSE.txt, README.md, README_web.txt, assets, elements.html, execution_videos_fold_01.html, execution_videos_fold_02.html, execution_videos_fold_03.html, execution_videos_fold_04.html, execution_videos_fold_05.html, files, generic.html, images, index.html, videos

Authors: Martina Lippi, Petra Poklukar, Michael C. Welle, Anastasiia Varava, Hang Yin, Alessandro Marino, Danica Kragic
arXiv ID: 2003.08974
Category: cs.RO (Robotics)
Cross-listed: cs.LG
Citations: 60
Venue: IEEE/RSJ International Conference on Intelligent Robots and Systems
Repository: https://github.com/visual-action-planning/lsr
Last checked: 7 days ago
Abstract
We present a framework for visual action planning of complex manipulation tasks with high-dimensional state spaces such as manipulation of deformable objects. Planning is performed in a low-dimensional latent state space that embeds images. We define and implement a Latent Space Roadmap (LSR) which is a graph-based structure that globally captures the latent system dynamics. Our framework consists of two main components: a Visual Foresight Module (VFM) that generates a visual plan as a sequence of images, and an Action Proposal Network (APN) that predicts the actions between them. We show the effectiveness of the method on a simulated box stacking task as well as a T-shirt folding task performed with a real robot.
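The abstract describes a two-stage pipeline: images are embedded into a low-dimensional latent space, a graph (the Latent Space Roadmap) is searched for a path of latent states, and an action network fills in the actions between consecutive states. A minimal sketch of the graph-search step, assuming a toy 2-D latent space with a hand-made roadmap (all node names, coordinates, and edges here are illustrative, not the paper's actual implementation):

```python
import math
from collections import deque

# Hypothetical latent codes for roadmap nodes. In the paper, a Visual
# Foresight Module embeds images into such low-dimensional codes; these
# 2-D points are made up for illustration.
nodes = {
    "start_region": (0.0, 0.0),
    "mid_region":   (1.0, 0.5),
    "goal_region":  (2.0, 1.0),
}

# Roadmap edges: connect regions between which a valid transition
# (an observed action) exists.
edges = {
    "start_region": ["mid_region"],
    "mid_region":   ["start_region", "goal_region"],
    "goal_region":  ["mid_region"],
}

def nearest_node(z):
    """Snap a latent code to its closest roadmap node (Euclidean distance)."""
    return min(nodes, key=lambda n: math.dist(nodes[n], z))

def plan(z_start, z_goal):
    """BFS over the roadmap; returns a sequence of latent regions."""
    s, g = nearest_node(z_start), nearest_node(z_goal)
    queue, parent = deque([s]), {s: None}
    while queue:
        u = queue.popleft()
        if u == g:
            break
        for v in edges[u]:
            if v not in parent:
                parent[v] = u
                queue.append(v)
    path, n = [], g
    while n is not None:
        path.append(n)
        n = parent[n]
    return path[::-1]

print(plan((0.1, -0.1), (1.9, 1.1)))
# -> ['start_region', 'mid_region', 'goal_region']
```

In the full framework, each consecutive pair in this path would be decoded back into images (the visual plan) and passed to the Action Proposal Network to predict the connecting action.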
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt: Robotics