Published September 1, 2019
Journal Article

Visual Dynamics: Stochastic Future Generation via Layered Cross Convolutional Networks

Abstract

We study the problem of synthesizing a number of likely future frames from a single input image. In contrast to traditional methods that have tackled this problem in a deterministic or non-parametric way, we propose to model future frames in a probabilistic manner. Our probabilistic model makes it possible for us to sample and synthesize many possible future frames from a single input image. To synthesize realistic movement of objects, we propose a novel network structure, namely a Cross Convolutional Network; this network encodes image and motion information as feature maps and convolutional kernels, respectively. In experiments, our model performs well on synthetic data, such as 2D shapes and animated game sprites, and on real-world video frames. We present analyses of the learned network representations, showing that the network implicitly learns a compact encoding of object appearance and motion. We also demonstrate a few of its applications, including visual analogy-making and video extrapolation.
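The core mechanism described above, applying motion-dependent convolutional kernels to image-dependent feature maps, can be illustrated with a short sketch. The code below is a minimal illustration in PyTorch, not the authors' layered architecture: the encoder sizes, the kernel size, and the latent motion dimension are all illustrative assumptions, and the single cross-convolution shown here stands in for the paper's multi-scale design.

```python
# A minimal sketch of the cross-convolution idea, not the paper's exact
# network: an image encoder yields feature maps, a motion encoder turns a
# sampled motion code into per-sample convolutional kernels, and the
# kernels are convolved with the feature maps. All sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossConv(nn.Module):
    def __init__(self, channels=32, ksize=5, motion_dim=16):
        super().__init__()
        self.channels, self.ksize = channels, ksize
        # Image encoder: RGB frame -> feature maps (illustrative depth).
        self.image_enc = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )
        # Motion encoder: latent motion code -> one k x k kernel per channel.
        self.kernel_enc = nn.Linear(motion_dim, channels * ksize * ksize)

    def forward(self, image, motion_code):
        b = image.size(0)
        feats = self.image_enc(image)            # (B, C, H, W)
        kernels = self.kernel_enc(motion_code)   # (B, C*k*k)
        kernels = kernels.view(b * self.channels, 1, self.ksize, self.ksize)
        # Apply each sample's own kernels to its own feature maps via a
        # grouped convolution over the flattened batch*channel axis.
        feats = feats.view(1, b * self.channels, *feats.shape[2:])
        out = F.conv2d(feats, kernels, padding=self.ksize // 2,
                       groups=b * self.channels)
        return out.view(b, self.channels, *out.shape[2:])


# Usage: sampling different motion codes for the same frame yields
# different motion-transformed feature maps, matching the probabilistic,
# many-futures-from-one-image setting the abstract describes.
layer = CrossConv()
frame = torch.randn(2, 3, 64, 64)
z = torch.randn(2, 16)
print(layer(frame, z).shape)  # torch.Size([2, 32, 64, 64])
```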

Additional Information

© 2018 IEEE. Manuscript received 7 Sept. 2017; revised 16 June 2018; accepted 26 June 2018. Date of publication 9 July 2018; date of current version 13 Aug. 2019. We thank Zhijian Liu and Yining Wang for helpful discussions and the anonymous reviewers for constructive comments. This work was supported by NSF Robust Intelligence 1212849, NSF Big Data 1447476, ONR MURI 6923196, Adobe, Shell Research, and a hardware donation from Nvidia. T. Xue and J. Wu contributed equally to this work.

Additional details

Created: August 22, 2023
Modified: October 20, 2023