Published December 18, 2015 | public
Book Section - Chapter

Time-Varying Surface Reconstruction of an Actor's Performance

Abstract

We propose a fully automatic time-varying surface reconstruction of an actor's performance captured from a production stage through omnidirectional video. The resulting mesh and its texture can then be edited directly in post-production. Our method makes no assumptions about the costumes or accessories present in the recording. We take as input a raw sequence of volumetric static poses reconstructed from video sequences acquired in a multi-viewpoint chroma-key studio. The first frame is chosen as the reference mesh. An iterative approach is applied throughout the sequence to induce a deformation of the reference mesh for each input frame. First, a pseudo-rigid transformation adjusts the pose to match the input visual hull as closely as possible. Then, local deformation is added to reconstruct fine details. We provide examples of actors' performances inserted into virtual scenes, including dynamic interaction with the environment.
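The two-stage deformation described above (a global pseudo-rigid alignment followed by a local per-vertex refinement) can be sketched as follows. This is not the authors' implementation: it is a minimal illustration assuming vertex correspondences between the reference mesh and a target frame are already known, using the standard Kabsch algorithm for the rigid stage and a simple blended displacement for the local stage (the function names `rigid_align` and `deform_frame` are hypothetical).

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid (rotation + translation) alignment of src onto dst
    via the Kabsch algorithm; point correspondences are assumed given."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

def deform_frame(ref_vertices, target_vertices, blend=0.5):
    """One iteration: global pseudo-rigid pose adjustment, then a local
    refinement pulling each aligned vertex toward its target position."""
    R, t = rigid_align(ref_vertices, target_vertices)
    aligned = ref_vertices @ R.T + t
    # Local deformation: blend residual displacements to recover fine detail.
    return aligned + blend * (target_vertices - aligned)
```

In the paper's setting the target geometry is a visual hull rather than a corresponded point set, so a practical version would first establish correspondences (e.g. by closest-point search) and regularize the local deformation; the sketch only conveys the order of the two stages.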

Additional Information

© 2015 Springer International Publishing. M. Desbrun is partially funded by the National Science Foundation (CCF-1011944 grant), and gratefully acknowledges being hosted by the TITANE team in the context of an INRIA International Chair. We would like to thank our partner XD Productions. This work has been carried out thanks to the support of the RECOVER3D project, funded by the Investissements d'Avenir program. Some of the captured performance data were provided courtesy of the Max-Planck-Center for Visual Computing and Communication (MPI Informatik/Stanford) and Morpheo research team of INRIA.

Additional details

Created:
August 20, 2023
Modified:
January 13, 2024