Visual Dynamics: Stochastic Future Generation via Layered Cross Convolutional Networks
Abstract
We study the problem of synthesizing a number of likely future frames from a single input image. In contrast to traditional methods that have tackled this problem in a deterministic or non-parametric way, we propose to model future frames in a probabilistic manner. Our probabilistic model makes it possible for us to sample and synthesize many possible future frames from a single input image. To synthesize realistic movement of objects, we propose a novel network structure, namely a Cross Convolutional Network; this network encodes image and motion information as feature maps and convolutional kernels, respectively. In experiments, our model performs well on synthetic data, such as 2D shapes and animated game sprites, and on real-world video frames. We present analyses of the learned network representations, showing that the network implicitly learns a compact encoding of object appearance and motion. We also demonstrate a few of its applications, including visual analogy-making and video extrapolation.
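To make the cross-convolution idea concrete — image content encoded as per-channel feature maps, motion encoded as per-channel convolutional kernels that are convolved with those maps — below is a minimal NumPy sketch. The function name, shapes, and padding choice are our own illustrative assumptions; the paper's actual layer operates inside a larger encoder-decoder network and on multi-scale features.

```python
import numpy as np

def cross_convolve(feature_maps, kernels):
    """Convolve each image feature-map channel with its own motion kernel.

    feature_maps: (C, H, W) array, the image encoding.
    kernels:      (C, k, k) array, one sample-specific kernel per channel
                  (in the paper these are predicted from a latent motion code).
    Returns a (C, H, W) array, using 'same' zero padding.
    """
    C, H, W = feature_maps.shape
    k = kernels.shape[-1]
    pad = k // 2
    padded = np.pad(feature_maps, ((0, 0), (pad, pad), (pad, pad)))
    out = np.zeros_like(feature_maps)
    for c in range(C):           # each channel gets its own kernel,
        for i in range(H):       # unlike a standard conv layer whose
            for j in range(W):   # kernels are shared across all inputs
                patch = padded[c, i:i + k, j:j + k]
                out[c, i, j] = np.sum(patch * kernels[c])
    return out
```

With an identity kernel (a single 1 at the center), each channel passes through unchanged; a kernel with its peak shifted off-center translates the channel, which is how per-channel kernels can express object motion.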
Additional Information
© 2018 IEEE. Manuscript received 7 Sept. 2017; revised 16 June 2018; accepted 26 June 2018. Date of publication 9 July 2018; date of current version 13 Aug. 2019. We thank Zhijian Liu and Yining Wang for helpful discussions, and the anonymous reviewers for constructive comments. This work was supported by NSF Robust Intelligence 1212849, NSF Big Data 1447476, ONR MURI 6923196, Adobe, Shell Research, and a hardware donation from Nvidia. T. Xue and J. Wu contributed equally to this work.
Additional details
- Eprint ID: 94505
- Resolver ID: CaltechAUTHORS:20190405-140148834
- NSF: IIS-1212849
- NSF: IIS-1447476
- Office of Naval Research (ONR): 6923196
- Adobe
- Shell Research
- Nvidia
- Created: 2019-04-05 (from EPrint's datestamp field)
- Updated: 2021-11-16 (from EPrint's last_modified field)
- Caltech groups: Astronomy Department