Intelligent Systems

FaceVR: Real-Time Gaze-Aware Facial Reenactment in Virtual Reality

2018

Article



We propose FaceVR, a novel image-based method that enables video teleconferencing in VR based on self-reenactment. State-of-the-art face tracking methods in the VR context focus on animating rigged 3D avatars; while they achieve good tracking performance, the results look cartoonish rather than realistic. In contrast to these model-based approaches, FaceVR enables VR teleconferencing using an image-based technique that produces nearly photo-realistic output. The key component of FaceVR is a robust algorithm for real-time facial motion capture of an actor wearing a head-mounted display (HMD), combined with a new data-driven approach for eye tracking from monocular videos. Based on reenactment of a prerecorded stereo video of the person without the HMD, FaceVR incorporates photo-realistic re-rendering in real time, allowing artificial modification of face and eye appearance. For instance, we can alter facial expressions or change gaze directions in the prerecorded target video. In a live setup, we apply these newly introduced algorithmic components.

Author(s): Thies, J. and Zollhöfer, M. and Stamminger, M. and Theobalt, C. and Nießner, M.
Journal: ACM Transactions on Graphics (TOG)
Year: 2018

Department(s): Neural Capture and Synthesis
Bibtex Type: Article (article)

URL: https://justusthies.github.io/posts/facevr/

Links: Paper
Video

BibTex

@article{thies2018facevr,
  title = {FaceVR: Real-Time Gaze-Aware Facial Reenactment in Virtual Reality},
  author = {Thies, J. and Zollh{\"o}fer, M. and Stamminger, M. and Theobalt, C. and Nie{\ss}ner, M.},
  journal = {ACM Transactions on Graphics (TOG)},
  year = {2018},
  url = {https://justusthies.github.io/posts/facevr/}
}