A Rotation Invariant Latent Factor Model for Moveme Discovery from Static Poses
Abstract
We tackle the problem of learning a rotation invariant latent factor model when the training data is comprised of lower-dimensional projections of the original feature space. The main goal is the discovery of a set of 3-D basis poses that can characterize the manifold of primitive human motions, or movemes, from a training set of 2-D projected poses obtained from still images taken at various camera angles. The proposed technique for basis discovery is data-driven rather than hand-designed. The learned representation is rotation invariant, and can reconstruct any training instance from multiple viewing angles. We apply our method to modeling human poses in sports (via the Leeds Sports Dataset), and demonstrate the effectiveness of the learned bases in a range of applications such as activity classification, inference of dynamics from a single frame, and synthetic representation of movements.
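As a rough illustration of the modeling idea summarized above, and not the authors' implementation, the sketch below fits a small set of 3-D basis poses to synthetic 2-D projections of poses seen under unknown camera rotations about the vertical axis. The joint count, number of bases, orthographic camera model, and choice of optimizer are all illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the paper's code): learn 3-D basis poses
# from 2-D projections, with per-pose coefficients and unknown camera angles
# fitted jointly by least squares.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
J, K, N = 13, 2, 20          # joints, basis poses, training poses (illustrative)

# Synthetic ground truth: 3-D poses lie on a K-dimensional linear manifold.
B_true = rng.normal(size=(K, J, 3))              # true 3-D basis poses
c_true = rng.normal(size=(N, K))                 # per-pose coefficients
theta_true = rng.uniform(0.0, 2.0 * np.pi, N)    # unknown camera angles

def rotate_y(P, theta):
    """Rotate a (J, 3) point cloud about the vertical (y) axis."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
    return P @ R.T

def project(P):
    """Orthographic projection of a (J, 3) pose onto the image (x, y) plane."""
    return P[:, :2]

# Observed 2-D poses: rotate each reconstructed 3-D pose, then project it.
X2d = np.stack([
    project(rotate_y(np.tensordot(c_true[i], B_true, axes=1), theta_true[i]))
    for i in range(N)
])

def unpack(w):
    """Split the flat parameter vector into bases, coefficients, and angles."""
    B = w[:K * J * 3].reshape(K, J, 3)
    C = w[K * J * 3:K * J * 3 + N * K].reshape(N, K)
    th = w[K * J * 3 + N * K:]
    return B, C, th

def loss(w):
    """Mean squared 2-D reprojection error plus a small ridge on the bases."""
    B, C, th = unpack(w)
    err = 0.0
    for i in range(N):
        P3d = np.tensordot(C[i], B, axes=1)      # (J, 3) reconstruction
        err += np.sum((project(rotate_y(P3d, th[i])) - X2d[i]) ** 2)
    return err / N + 1e-3 * np.sum(B ** 2)

w0 = rng.normal(scale=0.1, size=K * J * 3 + N * K + N)
res = minimize(loss, w0, method="L-BFGS-B", options={"maxiter": 200})
print("final reconstruction loss: %.4f" % loss(res.x))
```

The point of the toy setup is that the rotation and projection sit inside the reconstruction loss, so a single set of 3-D bases must explain poses observed from any viewing angle; the fit is of course only determined up to the usual rotation and scaling ambiguities.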
Additional Information
© 2016 IEEE. Date Added to IEEE Xplore: 02 February 2017.
Attached Files
Submitted - 1609.07495.pdf
Files
| Name | Size |
|---|---|
| 1609.07495.pdf (md5:ff7c183d74f17211f23b306c778339ab) | 3.6 MB |
Additional details
- Eprint ID: 78272
- Resolver ID: CaltechAUTHORS:20170616-103109457
- Created: 2017-06-16 (from EPrint's datestamp field)
- Updated: 2021-11-15 (from EPrint's last_modified field)
- Series Name: IEEE International Conference on Data Mining