Our goal in this work is to generate realistic videos given just one initial frame as input. Existing unsupervised approaches to this task do not consider the fact that a video typically shows a 3D environment, which should remain coherent from frame to frame even as the camera and objects move. We address this by developing a model that first estimates the latent 3D structure of the scene, including the segmentation of any moving objects. It then predicts future frames by simulating the object and camera dynamics, and rendering the resulting views. Importantly, it is trained end-to-end using only the unsupervised objective of predicting future frames, without any 3D information or segmentation annotations. Experiments on two challenging datasets of natural videos show that our model can estimate 3D structure and motion segmentation from a single frame, and hence generate plausible and varied predictions.
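The pipeline described above can be summarised as: infer latent 3D structure and a moving-object segmentation from one frame, simulate dynamics forward, and render each resulting view. The following is a minimal illustrative sketch of that control flow only; all function names, placeholder values, and shapes are hypothetical stand-ins, not the actual learned model.

```python
def estimate_scene(frame):
    """Stand-in for the encoder: infer per-pixel depth and a moving-object
    segmentation from a single RGB frame (constant placeholders here)."""
    h, w = len(frame), len(frame[0])
    depth = [[1.0] * w for _ in range(h)]        # latent 3D structure
    segmentation = [[0] * w for _ in range(h)]   # 0 = static background
    return depth, segmentation

def simulate_dynamics(depth, segmentation, camera_speed=0.1):
    """Stand-in for the dynamics model: here, a uniform depth shift mimics
    the camera moving away from the scene."""
    new_depth = [[d + camera_speed for d in row] for row in depth]
    return new_depth, segmentation

def render(depth, segmentation):
    """Stand-in for the renderer: map the simulated 3D state back to an
    image (toy inverse-depth shading)."""
    return [[1.0 / d for d in row] for row in depth]

def predict_video(frame, num_frames=3):
    """Roll the estimated scene state forward, rendering one frame per step."""
    depth, seg = estimate_scene(frame)
    frames = []
    for _ in range(num_frames):
        depth, seg = simulate_dynamics(depth, seg)
        frames.append(render(depth, seg))
    return frames
```

In the real model each of these stages is a learned module trained jointly through the frame-prediction loss; the sketch only shows how the stages compose.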


Here we show animated versions of the results figures from the paper. More coming soon!

Generation results on WAYMO (Figure 3)
Generation results on RE10K (Figure 4)
Citation

  title={Unsupervised Video Prediction from a Single Frame by Estimating {3D} Dynamic Scene Structure},
  author={Henderson, Paul and Lampert, Christoph H. and Bickel, Bernd},