Learning to Estimate 3D Hand Pose from Single RGB Images

May 03, 2017 · Entered Twilight · 🏛 IEEE International Conference on Computer Vision

🌅 TWILIGHT: Old Age
Predates the code-sharing era: a pioneer of its time

"No code URL or promise found in abstract"
"Code repo scraped from project page (backfill)"

Evidence collected by the PWNC Scanner

Repo contents: LICENSE, README.md, create_binary_db.py, data, eval2d.py, eval2d_gt_cropped.py, eval3d.py, eval_full.py, nets, run.py, teaser.png, training_handsegnet.py, training_lifting.py, training_posenet.py, utils

Authors: Christian Zimmermann, Thomas Brox
arXiv ID: 1705.01389
Category: cs.CV (Computer Vision)
Citations: 777
Venue: IEEE International Conference on Computer Vision
Repository: https://github.com/lmb-freiburg/hand3d ⭐ 814
Last Checked: 6 days ago
Abstract
Low-cost consumer depth cameras and deep learning have enabled reasonable 3D hand pose estimation from single depth images. In this paper, we present an approach that estimates 3D hand pose from regular RGB images. This task has far more ambiguities due to the missing depth information. To this end, we propose a deep network that learns a network-implicit 3D articulation prior. Together with detected keypoints in the images, this network yields good estimates of the 3D pose. We introduce a large scale 3D hand pose dataset based on synthetic hand models for training the involved networks. Experiments on a variety of test sets, including one on sign language recognition, demonstrate the feasibility of 3D hand pose estimation on single color images.
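The abstract describes a two-stage idea: detect 2D keypoints in the image, then lift them to 3D with a learned articulation prior. As a minimal sketch of the first stage only (this is a hypothetical illustration, not the authors' code; the function name and shapes are assumptions), 2D keypoints can be read off predicted score maps by taking the per-channel argmax:

```python
import numpy as np

def keypoints_from_scoremaps(scoremaps):
    """Return a (K, 2) array of (row, col) keypoint locations, taken
    as the argmax of each of the K score maps of shape (H, W, K)."""
    h, w, k = scoremaps.shape
    flat = scoremaps.reshape(h * w, k)    # flatten spatial dimensions
    idx = np.argmax(flat, axis=0)         # best flat index per keypoint
    rows, cols = np.unravel_index(idx, (h, w))
    return np.stack([rows, cols], axis=1)
```

The actual method then regresses canonical 3D coordinates from these detections; that lifting network is learned from the synthetic dataset and is not reproduced here.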
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt: Computer Vision