A Rotation Invariant Latent Factor Model for Moveme Discovery from Static Poses

September 23, 2016 · Entered Twilight · 🏛 Industrial Conference on Data Mining

🌅 TWILIGHT: Old Age
Predates the code-sharing era: a pioneer of its time

"No code URL or promise found in abstract"
"Code repo scraped from project page (backfill)"

Evidence collected by the PWNC Scanner

Repo contents: .gitignore, LICENSE, README.md, code, inputs

Authors: Matteo Ruggero Ronchi, Joon Sik Kim, Yisong Yue
arXiv ID: 1609.07495
Category: cs.CV (Computer Vision)
Cross-listed: cs.LG
Citations: 4
Venue: Industrial Conference on Data Mining
Repository: https://github.com/matteorr/rotation_invariant_movemes
Stars: ⭐ 2
Last Checked: 1 month ago
Abstract
We tackle the problem of learning a rotation invariant latent factor model when the training data consists of lower-dimensional projections of the original feature space. The main goal is the discovery of a set of 3-D basis poses that can characterize the manifold of primitive human motions, or movemes, from a training set of 2-D projected poses obtained from still images taken at various camera angles. The proposed technique for basis discovery is data-driven rather than hand-designed. The learned representation is rotation invariant, and can reconstruct any training instance from multiple viewing angles. We apply our method to modeling human poses in sports (via the Leeds Sports Dataset), and demonstrate the effectiveness of the learned bases in a range of applications such as activity classification, inference of dynamics from a single frame, and synthetic representation of movements.
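The core idea in the abstract, that each observed 2-D pose is a camera projection of a rotated linear combination of learned 3-D basis poses, can be sketched as follows. This is a minimal illustration under assumed conventions (orthographic projection, rotation about the vertical axis); the function name, array shapes, and parameters are hypothetical and not taken from the authors' repository.

```python
import numpy as np

def reconstruct_2d_pose(bases, coeffs, yaw):
    """Hypothetical sketch of the generative model described in the
    abstract: a 2-D pose is the projection of a rotated linear
    combination of 3-D basis poses (movemes).

    bases:  (K, J, 3) array of K learned 3-D basis poses over J joints
    coeffs: (K,) mixing coefficients for one training instance
    yaw:    camera angle in radians about the vertical (y) axis
    """
    # 3-D pose as a weighted sum of the basis poses -> (J, 3)
    pose_3d = np.tensordot(coeffs, bases, axes=1)
    # rotate about the vertical axis by the camera yaw
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, 0.0, s],
                  [0.0, 1.0, 0.0],
                  [-s, 0.0, c]])
    rotated = pose_3d @ R.T
    # orthographic projection: keep the image-plane (x, y) coordinates
    return rotated[:, :2]
```

At yaw 0 the rotation is the identity, so the reconstruction is simply the 3-D mixture with the depth coordinate dropped; varying the yaw lets the same learned bases explain poses photographed from different camera angles, which is the rotation-invariance property the abstract highlights.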
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt: Computer Vision