Published December 20, 2022 | Accepted Version
Report | Open Access

Learning 3D Object Shape and Layout without 3D Supervision

Abstract

A 3D scene consists of a set of objects, each with a shape and a layout giving its position in space. Understanding 3D scenes from 2D images is an important goal, with applications in robotics and graphics. While there have been recent advances in predicting 3D shape and layout from a single image, most approaches rely on 3D ground truth for training, which is expensive to collect at scale. We overcome this limitation and propose a method that learns to predict 3D shape and layout for objects without any ground-truth shape or layout information: instead, we rely on multi-view images with 2D supervision, which can more easily be collected at scale. Through extensive experiments on 3D Warehouse, Hypersim, and ScanNet, we demonstrate that our approach scales to large datasets of realistic images and compares favorably to methods relying on 3D ground truth. On Hypersim and ScanNet, where reliable 3D ground truth is not available, our approach outperforms supervised approaches trained on smaller and less diverse datasets.
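The core idea of training with multi-view 2D supervision can be illustrated with a toy sketch: project a predicted 3D shape into each calibrated view and penalize points that fall outside that view's 2D object mask, so no 3D ground truth is ever consulted. This is a hypothetical, simplified illustration of the general principle, not the paper's actual implementation (the `project` and `silhouette_loss` functions below are invented for this example; the method in the paper uses differentiable rendering of meshes, not point projection).

```python
import numpy as np

def project(points, K, R, t):
    """Project Nx3 world points to pixels with a pinhole camera (K, R, t)."""
    cam = points @ R.T + t          # world frame -> camera frame
    uv = cam @ K.T                  # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]   # perspective divide

def silhouette_loss(points, masks, cameras):
    """Average fraction of projected points landing outside each view's mask.

    `masks` are HxW boolean object silhouettes -- the only supervision.
    No 3D ground-truth shape or layout is used anywhere.
    """
    loss = 0.0
    for mask, (K, R, t) in zip(masks, cameras):
        uv = np.round(project(points, K, R, t)).astype(int)
        h, w = mask.shape
        inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & \
                 (uv[:, 1] >= 0) & (uv[:, 1] < h)
        if inside.any():
            hits = mask[uv[inside, 1], uv[inside, 0]]
            loss += 1.0 - hits.mean()   # points outside the silhouette
        else:
            loss += 1.0                 # everything projected off-screen
    return loss / len(masks)
```

In a real system this loss would be made differentiable (e.g. via soft rasterization) and minimized over the parameters of a network predicting shape and layout from a single image.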

Additional Information

License: Attribution 4.0 International (CC BY 4.0).

Attached Files

Accepted Version - 2206.07028.pdf

Files

2206.07028.pdf (19.9 MB)
md5:6d38ca74f4b2614cfcea4ecb2848130d

Additional details

Created: August 20, 2023
Modified: October 24, 2023