BKinD-3D: Self-Supervised 3D Keypoint Discovery from Multi-View Videos
Abstract
Quantifying motion in 3D is important for studying the behavior of humans and other animals, but manual pose annotations are expensive and time-consuming to obtain. Self-supervised keypoint discovery is a promising strategy for estimating 3D poses without annotations. However, current keypoint discovery approaches commonly process single 2D views and do not operate in 3D space. We propose a new method to perform self-supervised keypoint discovery in 3D from multi-view videos of behaving agents, without any keypoint or bounding box supervision in 2D or 3D. Our method uses an encoder-decoder architecture with a 3D volumetric heatmap, trained to reconstruct spatiotemporal differences across multiple views, along with joint length constraints on a learned 3D skeleton of the subject. In this way, we discover keypoints without manual supervision in videos of humans and rats, demonstrating the potential of 3D keypoint discovery for studying behavior.
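The abstract mentions two components that can be illustrated concretely: converting a 3D volumetric heatmap into keypoint coordinates, and constraining joint lengths on a learned skeleton. Below is a minimal PyTorch-style sketch, not the authors' implementation: a differentiable soft-argmax over a per-keypoint 3D heatmap volume, and a loss that penalizes changes in skeleton edge lengths between frames. All names, tensor shapes, and the edge list are illustrative assumptions; the encoder-decoder and the multi-view spatiotemporal-difference reconstruction loss are omitted.

```python
# Minimal sketch (assumed, not the paper's code) of two ideas from the abstract:
# (1) a differentiable soft-argmax that turns a 3D volumetric heatmap into
#     keypoint coordinates, and (2) a joint length constraint that keeps
#     distances between discovered keypoints consistent across frames.
import torch
import torch.nn.functional as F

def soft_argmax_3d(volume):
    """volume: (B, K, D, H, W) raw scores, one 3D heatmap per keypoint.
    Returns (B, K, 3) coordinates in normalized [-1, 1] volume space."""
    B, K, D, H, W = volume.shape
    probs = F.softmax(volume.view(B, K, -1), dim=-1).view(B, K, D, H, W)
    # Normalized coordinate grids along each volume axis.
    zs = torch.linspace(-1, 1, D, device=volume.device)
    ys = torch.linspace(-1, 1, H, device=volume.device)
    xs = torch.linspace(-1, 1, W, device=volume.device)
    # Expected coordinate = sum over the volume of probability * coordinate.
    z = (probs.sum(dim=(3, 4)) * zs).sum(dim=-1)   # (B, K)
    y = (probs.sum(dim=(2, 4)) * ys).sum(dim=-1)
    x = (probs.sum(dim=(2, 3)) * xs).sum(dim=-1)
    return torch.stack([x, y, z], dim=-1)          # (B, K, 3)

def joint_length_loss(kpts_t, kpts_t1, edges):
    """Penalize changes in bone length between two time steps.
    kpts_t, kpts_t1: (B, K, 3) keypoints at times t and t+1.
    edges: list of (i, j) keypoint index pairs forming a skeleton."""
    loss = 0.0
    for i, j in edges:
        len_t = (kpts_t[:, i] - kpts_t[:, j]).norm(dim=-1)
        len_t1 = (kpts_t1[:, i] - kpts_t1[:, j]).norm(dim=-1)
        loss = loss + (len_t - len_t1).abs().mean()
    return loss / len(edges)
```

In a full pipeline of this kind, the soft-argmax would be applied to the decoder's volumetric output at each frame, and the joint length term would be added to the reconstruction objective as a regularizer that encourages a rigid, anatomically plausible skeleton.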
Additional Information
This work is generously supported by the Amazon AI4Science Fellowship (to JJS), NIH NINDS (R01NS102333 to JCT), and the Air Force Office of Scientific Research (AFOSR FA9550-19-1-0386 to BWB).
Files
Name | Size
---|---
Submitted - 2212.07401.pdf (md5:be9868bd8539a5c73a30824f223119e1) | 4.5 MB
Additional details
- Eprint ID: 118408
- Resolver ID: CaltechAUTHORS:20221219-204745839
- Funding: Amazon AI4Science Fellowship; NIH NINDS (R01NS102333); Air Force Office of Scientific Research (AFOSR) (FA9550-19-1-0386)
- Created: 2022-12-20 (from EPrint's datestamp field)
- Updated: 2023-06-02 (from EPrint's last_modified field)