
DeepVoxels: Learning Persistent 3D Feature Embeddings



In this work, we address the lack of 3D understanding of generative neural networks by introducing a persistent 3D feature embedding for view synthesis. To this end, we propose DeepVoxels, a learned representation that encodes the view-dependent appearance of a 3D object without having to explicitly model its geometry. At its core, our approach is based on a Cartesian 3D grid of persistent embedded features that learn to make use of the underlying 3D scene structure. Our approach thus combines insights from 3D geometric computer vision with recent advances in learning image-to-image mappings based on adversarial loss functions. DeepVoxels is supervised, without requiring a 3D reconstruction of the scene, using a 2D re-rendering loss, and enforces perspective and multi-view geometry in a principled manner. We apply our persistent 3D scene representation to the problem of novel view synthesis, demonstrating high-quality results for a variety of challenging objects.
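
The following is a minimal conceptual sketch of the idea described above, not the authors' implementation: a persistent, learnable 3D feature grid is trilinearly resampled into each camera's view frustum and decoded to an image by a 2D rendering network, supervised only by a 2D re-rendering loss. It assumes PyTorch; the class names, the crude mean over the depth dimension (the paper uses a learned, occlusion-aware reduction), and the dummy tensors are illustrative placeholders.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DeepVoxelsSketch(nn.Module):
    def __init__(self, feat_dim=32, grid_size=32):
        super().__init__()
        # Persistent 3D feature embedding: one learnable Cartesian grid
        # per scene, shared across all training views.
        self.feature_volume = nn.Parameter(
            torch.zeros(1, feat_dim, grid_size, grid_size, grid_size))
        # 2D rendering network mapping projected features to RGB.
        self.render_net = nn.Sequential(
            nn.Conv2d(feat_dim, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 1))

    def forward(self, sample_grid):
        # sample_grid: (1, D, H, W, 3) normalized 3D sampling coordinates
        # along each camera ray, computed from the known camera pose and
        # intrinsics (omitted here for brevity).
        view_feats = F.grid_sample(
            self.feature_volume, sample_grid, align_corners=True)
        # Collapse the depth dimension; a mean is a crude stand-in for the
        # paper's occlusion-aware reduction.
        feats_2d = view_feats.mean(dim=2)
        return self.render_net(feats_2d)

# Training-step sketch: only 2D supervision, no 3D reconstruction needed.
model = DeepVoxelsSketch()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
sample_grid = torch.rand(1, 16, 64, 64, 3) * 2 - 1   # dummy ray samples in [-1, 1]
target_image = torch.rand(1, 3, 64, 64)              # dummy posed training view
optimizer.zero_grad()
pred = model(sample_grid)
loss = F.l1_loss(pred, target_image)                 # 2D re-rendering loss
loss.backward()
optimizer.step()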

Author(s): Sitzmann, V. and Thies, J. and Heide, F. and Nießner, M. and Wetzstein, G. and Zollhöfer, M.
Book Title: Proc. Computer Vision and Pattern Recognition (CVPR), IEEE
Year: 2019

Department(s): Neural Capture and Synthesis
Bibtex Type: Conference Paper (inproceedings)

URL: https://justusthies.github.io/posts/deepvoxels/

Links: Paper, Video

BibTeX

@inproceedings{sitzmann2019deepvoxels,
  title = {DeepVoxels: Learning Persistent 3D Feature Embeddings},
  author = {Sitzmann, V. and Thies, J. and Heide, F. and Nie{\ss}ner, M. and Wetzstein, G. and Zollh{\"o}fer, M.},
  booktitle = {Proc. Computer Vision and Pattern Recognition (CVPR), IEEE},
  year = {2019},
  url = {https://justusthies.github.io/posts/deepvoxels/}
}