Face2Face: Real-time Face Capture and Reenactment of RGB Videos

From Justus Thies, Michael Zollhöfer, Marc Stamminger, Christian Theobalt and Matthias Nießner:

We present a novel approach for real-time facial reenactment of a monocular target video sequence (e.g., a YouTube video). The source sequence is also a monocular video stream, captured live with a commodity webcam. Our goal is to animate the facial expressions of the target video by a source actor and re-render the manipulated output video in a photo-realistic fashion. To this end, we first address the under-constrained problem of facial identity recovery from monocular video by non-rigid model-based bundling. At run time, we track facial expressions of both source and target video using a dense photometric consistency measure. Reenactment is then achieved by fast and efficient deformation transfer between source and target. The mouth interior that best matches the re-targeted expression is retrieved from the target sequence and warped to produce an accurate fit. Finally, we convincingly re-render the synthesized target face on top of the corresponding video stream such that it seamlessly blends with the real-world illumination... (full paper)
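To give a rough sense of the "dense photometric consistency measure" mentioned above: the idea is to compare a synthetic rendering of the parametric face model against the actual video frame, pixel by pixel, and adjust the model parameters until the two agree. The sketch below is a minimal, hypothetical illustration of such a data term; the function names, parameter layout, and the `render_face` callable are placeholders, not the authors' actual implementation.

```python
import numpy as np

def photometric_energy(params, input_frame, render_face, face_mask):
    """Sum of squared per-pixel color differences over the face region.

    params      -- stacked identity/expression/pose/illumination parameters (illustrative)
    input_frame -- H x W x 3 RGB frame from the video
    render_face -- callable that rasterizes the parametric face model to an H x W x 3 image
    face_mask   -- H x W boolean mask of pixels covered by the rendered model
    """
    synth = render_face(params)              # synthetic face image for the current parameters
    diff = (synth - input_frame)[face_mask]  # residuals on face pixels only, shape (N, 3)
    return np.sum(diff ** 2)
```

In practice such a data term would be combined with statistical regularizers on the model parameters and minimized with a fast iterative solver every frame; the paper describes the authors' actual energy formulation and GPU-based optimization in detail.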

 
