3D Shape Reconstruction from Vision and Touch
Abstract
When a toddler is presented with a new toy, their instinctual behaviour is to pick it up and inspect it with their hands and eyes in tandem, clearly searching over its surface to properly understand what they are playing with. At any instant, touch provides high-fidelity localized information while vision provides complementary global context. However, in 3D shape reconstruction, the complementary fusion of visual and haptic modalities remains largely unexplored. In this paper, we study this problem and present an effective chart-based approach to multi-modal shape understanding which encourages a similar fusion of vision and touch information. To do so, we introduce a dataset of simulated touch and vision signals from the interaction between a robotic hand and a large array of 3D objects. Our results show that (1) leveraging both vision and touch signals consistently improves single-modality baselines; (2) our approach outperforms alternative modality fusion methods and strongly benefits from the proposed chart-based structure; (3) the reconstruction quality increases with the number of grasps provided; and (4) the touch information not only enhances the reconstruction at the touch site but also extrapolates to its local neighborhood.
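The abstract describes the approach only at a high level. Purely as an illustration of the general idea of chart-based vision-and-touch fusion, the sketch below shows one hypothetical way per-chart vertex positions, projected vision features, and local touch features could be concatenated and refined by a shared network. The class name, feature dimensions, and fusion scheme are assumptions for illustration, not the authors' implementation.

```python
# A minimal, hypothetical sketch of chart-based fusion: each surface chart carries a
# small grid of vertices, and per-vertex features from vision (global image context)
# and touch (local contact geometry) are concatenated before a shared MLP predicts
# vertex position updates. All names and sizes here are illustrative assumptions.
import torch
import torch.nn as nn


class ChartFusionRefiner(nn.Module):
    def __init__(self, vision_dim=64, touch_dim=16, hidden_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(vision_dim + touch_dim + 3, hidden_dim),  # +3 for current vertex xyz
            nn.ReLU(),
            nn.Linear(hidden_dim, 3),                            # predicted xyz offset
        )

    def forward(self, vertices, vision_feats, touch_feats):
        # vertices:     (num_charts, verts_per_chart, 3)
        # vision_feats: (num_charts, verts_per_chart, vision_dim) image features
        #               projected onto each vertex
        # touch_feats:  (num_charts, verts_per_chart, touch_dim), zero for charts
        #               with no associated touch reading
        fused = torch.cat([vertices, vision_feats, touch_feats], dim=-1)
        return vertices + self.mlp(fused)  # refined vertex positions


if __name__ == "__main__":
    refiner = ChartFusionRefiner()
    verts = torch.randn(10, 25, 3)          # 10 charts, 5x5 vertex grids
    vision = torch.randn(10, 25, 64)
    touch = torch.zeros(10, 25, 16)
    touch[:4] = torch.randn(4, 25, 16)      # only 4 charts touched in this grasp
    print(refiner(verts, vision, touch).shape)  # torch.Size([10, 25, 3])
```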
Additional Information
We would like to acknowledge the NSERC Canadian Robotics Network, the Natural Sciences and Engineering Research Council, and the Fonds de recherche du Québec – Nature et Technologies for their funding support, as granted to the McGill University authors. We would also like to thank Scott Fujimoto and Shaoxiong Wang for their helpful feedback.
Files
Name | Size
---|---
Accepted Version - 2007.03778.pdf (md5:4bd4621f1e9e8b641b09680b10e413d0) | 7.3 MB
Additional details
- Eprint ID: 118420
- Resolver ID: CaltechAUTHORS:20221219-204806086
- Funders: Natural Sciences and Engineering Research Council of Canada (NSERC); Fonds de recherche du Québec - Nature et technologies (FRQNT)
- Created: 2022-12-20 (from EPrint's datestamp field)
- Updated: 2023-06-02 (from EPrint's last_modified field)