Differentiable Stereopsis: Meshes from multiple views using differentiable rendering
Abstract
We propose Differentiable Stereopsis, a multi-view stereo approach that reconstructs shape and texture from few input views and noisy cameras. We pair traditional stereopsis and modern differentiable rendering to build an end-to-end model which predicts textured 3D meshes of objects with varying topologies and shape. We frame stereopsis as an optimization problem and simultaneously update shape and cameras via simple gradient descent. We run an extensive quantitative analysis and compare to traditional multi-view stereo techniques and state-of-the-art learning based methods. We show compelling reconstructions on challenging real-world scenes and for an abundance of object types with complex shape, topology and texture. Project webpage: https://shubham-goel.github.io/ds/
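The abstract describes framing stereopsis as an optimization problem in which shape and cameras are updated simultaneously by gradient descent. A minimal toy sketch of that optimization pattern is below. This is not the paper's pipeline (there is no mesh, texture, or differentiable renderer here); it only illustrates how a single reconstruction loss can drive gradients into both shape and camera parameters at once. The point set, camera translation, and projection function are all hypothetical stand-ins.

```python
import torch

# Toy sketch: jointly optimize "shape" (a point set) and a "camera"
# (a translation) by gradient descent on a reprojection loss.
# Hypothetical setup, not the method from the paper.

torch.manual_seed(0)

# Ground-truth shape and camera used only to synthesize targets.
true_points = torch.rand(8, 3)                 # points with z in [0, 1)
true_cam_t = torch.tensor([0.1, -0.2, 2.0])    # camera translation, z > 0

def project(points, cam_t):
    # Simple pinhole-style projection after applying the camera translation.
    p = points + cam_t
    return p[:, :2] / p[:, 2:3]

target = project(true_points, true_cam_t)

# Initialize shape and camera with noise, then optimize both jointly:
# a single optimizer holds both parameter tensors.
points = (true_points + 0.3 * torch.randn(8, 3)).requires_grad_(True)
cam_t = (true_cam_t + torch.tensor([0.05, 0.05, 0.1])).requires_grad_(True)

opt = torch.optim.Adam([points, cam_t], lr=1e-2)
for step in range(500):
    opt.zero_grad()
    loss = ((project(points, cam_t) - target) ** 2).mean()
    loss.backward()   # gradients flow to both shape and camera
    opt.step()

print(f"final reprojection loss: {loss.item():.6f}")
```

In the paper's setting, `project` would be replaced by a differentiable renderer and `points` by a textured mesh, but the optimization loop has the same structure: one loss, one gradient step, both shape and cameras updated together.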
Additional Information
Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0).

Attached Files
Accepted Version - 2110.05472.pdf
Files
Name | Size |
---|---|
2110.05472.pdf (md5:3c841bdd090cffbdc24a0af811547882) | 21.2 MB |
Additional details
- Eprint ID
- 118414
- Resolver ID
- CaltechAUTHORS:20221219-204755957
- Created
- 2022-12-20 (from EPrint's datestamp field)
- Updated
- 2023-06-02 (from EPrint's last_modified field)