Published December 21, 2022 | Accepted Version
Report Open

SynSin: End-to-end View Synthesis from a Single Image

Abstract

Single image view synthesis allows for the generation of new views of a scene given a single input image. This is challenging, as it requires comprehensively understanding the 3D scene from a single image. As a result, current methods typically use multiple images, train on ground-truth depth, or are limited to synthetic data. We propose a novel end-to-end model for this task; it is trained on real images without any ground-truth 3D information. To this end, we introduce a novel differentiable point cloud renderer that is used to transform a latent 3D point cloud of features into the target view. The projected features are decoded by our refinement network to inpaint missing regions and generate a realistic output image. The 3D component inside our generative model allows for interpretable manipulation of the latent feature space at test time, e.g. we can animate trajectories from a single image. Unlike prior work, we can generate high resolution images and generalise to other input resolutions. We outperform baselines and prior work on the Matterport, Replica, and RealEstate10K datasets.
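To make the projection step concrete: the core idea of transforming a point cloud of features into a target view can be sketched as projecting each 3D point through a pinhole camera and splatting its feature vector onto the image plane. The sketch below is a toy, non-differentiable stand-in for the paper's differentiable renderer (the actual method uses soft, differentiable splatting); the function name, signature, and nearest-pixel z-buffering are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def project_point_features(points, feats, K, R, t, hw):
    """Project a 3D point cloud with per-point features into a target view.

    Toy stand-in for a differentiable point cloud renderer: each point is
    transformed into the target camera frame, projected with a pinhole
    model, and its feature vector splatted onto the nearest pixel, with
    the closest point winning via a z-buffer.

    points: (N, 3) world coordinates
    feats:  (N, C) per-point feature vectors
    K:      (3, 3) camera intrinsics
    R, t:   (3, 3) rotation and (3,) translation of the target view
    hw:     (H, W) output feature-map size
    """
    H, W = hw
    C = feats.shape[1]
    cam = points @ R.T + t                  # world -> target camera frame
    z = cam[:, 2]
    valid = z > 1e-6                        # keep points in front of the camera
    uvw = cam[valid] @ K.T                  # pinhole projection
    uv = (uvw[:, :2] / uvw[:, 2:3]).round().astype(int)
    fmap = np.zeros((H, W, C))
    zbuf = np.full((H, W), np.inf)
    for (u, v), depth, f in zip(uv, z[valid], feats[valid]):
        if 0 <= u < W and 0 <= v < H and depth < zbuf[v, u]:
            zbuf[v, u] = depth              # closer point overwrites farther one
            fmap[v, u] = f
    return fmap
```

The resulting feature map would then be handed to a refinement network to inpaint the holes left by occlusion and disocclusion; in the actual model the hard nearest-pixel splat is replaced by a soft, differentiable one so gradients can flow back to the depth and feature predictors.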

Additional Information

The authors thank Johannes Kopf for sharing code, Manolis Savva and Erik Wijmans for help with the Habitat dataset, and Sebastien Ehrhardt, Oliver Groth, and Weidi Xie for feedback on paper drafts.

Attached Files

Accepted Version - 1912.08804.pdf

Files

1912.08804.pdf (9.3 MB; md5:d2c802a36f0e070bd481a4d3e4017fb5)

Additional details

Created: August 19, 2023
Modified: October 24, 2023