Emptying, Refurnishing, and Relighting Indoor Spaces

Edward Zhang¹     Michael F. Cohen²     Brian Curless¹

¹University of Washington     ²Facebook Inc.

SIGGRAPH Asia 2016

Abstract

Visualizing changes to indoor scenes is important for many applications. When looking for a new place to live, we want to see how the interior looks not with the current inhabitant's belongings, but with our own furniture. Before purchasing a new sofa, we want to visualize how it would look in our living room. In this paper, we present a system that takes an RGBD scan of an indoor scene and produces a model of the emptied room, comprising its light emitters, surface materials, and uncluttered geometry. Our system enables realistic rendering not only of the empty room under the original lighting conditions, but also under various scene edits, including adding furniture, changing the material properties of the walls, and relighting. Such edits enable many mixed reality applications in areas such as real estate, furniture retail, and interior design. Our system makes two novel technical contributions: a 3D radiometric calibration process that recovers the appearance of the scene in high dynamic range, and a global-illumination-aware inverse rendering framework that simultaneously recovers the reflectance properties of scene surfaces and the emission properties of several light source types, including generalized point and line lights.
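
As a rough illustration of the inverse rendering contribution, consider a purely diffuse scene discretized into patches. Writing the observed HDR radiosity as B_i = rho_i (sum_j F_ij B_j + sum_l D_il L_l), where F holds form factors from the scanned geometry and D the direct irradiance each unit-intensity candidate emitter delivers to each patch, the unknowns are the albedos rho and the light intensities L. The problem is bilinear in (rho, L), so a simple alternating scheme works on toy inputs. The sketch below is a minimal NumPy version under these assumptions; the names (inverse_render, F, D) are illustrative, and this is not the paper's actual solver, which handles richer light models and runs on Ceres Solver.

    import numpy as np
    from scipy.optimize import nnls

    def inverse_render(B, F, D, n_iters=50):
        """Toy diffuse inverse rendering (illustrative, not the paper's solver).

        B: (n,) observed HDR radiosity per patch
        F: (n, n) patch-to-patch form factors from the scanned geometry
        D: (n, m) direct irradiance per patch from unit-intensity lights
        Returns per-patch albedos rho (n,) and light intensities L (m,).
        """
        n, m = D.shape
        L = np.ones(m)  # initial guess for light intensities
        for _ in range(n_iters):
            # Irradiance at each patch: interreflected light (from the
            # observed radiosities) plus direct light at the current estimate.
            E = F @ B + D @ L
            # Closed-form albedo update, clamped to a physical range.
            rho = np.clip(B / np.maximum(E, 1e-9), 0.0, 1.0)
            # With rho fixed, B ~ rho * (F @ B + D @ L) is linear in L;
            # solve for L with non-negative least squares.
            A = rho[:, None] * D
            b = B - rho * (F @ B)
            L, _ = nnls(A, b)
        return rho, L

Note that because the observed radiosities B already encode interreflection, the term F @ B requires no iterative light transport solve; this is what makes a global-illumination-aware formulation tractable.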

Paper

Edward Zhang, Michael F. Cohen, and Brian Curless. 2016.
"Emptying, Refurnishing, and Relighting Indoor Spaces."
ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia 2016).
(PDF  |  BibTeX  |  ACM DL)

Data

Several of our radiometrically calibrated datasets can be viewed at https://sketchfab.com/kyzyx/collections/hdr-room-scans.
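
These datasets are captured as bracketed exposure stacks and fused into HDR before inverse rendering. The snippet below is a minimal single-image merge assuming linearized pixel values (merge_exposures is a hypothetical helper, not our released code); the paper's 3D radiometric calibration additionally estimates the camera response and aggregates observations over the full scan.

    import numpy as np

    def merge_exposures(images, exposure_times):
        """Merge a bracketed exposure stack into an HDR radiance map.

        A simplified, single-view stand-in for the paper's 3D radiometric
        calibration; assumes linearized pixel values in [0, 1].
        images: list of (H, W) arrays, one per exposure
        exposure_times: matching shutter times in seconds
        """
        imgs = np.stack(images)                           # (k, H, W)
        t = np.asarray(exposure_times, float)[:, None, None]
        # Hat weights: trust mid-range pixels, downweight clipped ones.
        w = 1.0 - np.abs(2.0 * imgs - 1.0)
        # With a linear response, radiance is pixel value over exposure time.
        radiance = imgs / t
        return (w * radiance).sum(0) / np.maximum(w.sum(0), 1e-9)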

Acknowledgements

We would like to thank Sameer Agarwal for his advice on optimization and Ceres Solver. We also thank Pratheba Selvaraju for her assistance in modeling the contents of the refurnished scenes, which include 3D models from CGTrader (users scopia, den_krasik, buchak72, belgrade_sim, peter_janov) and TurboSquid (user shop3ds). Several additional 3D models were obtained from the Stanford Computer Graphics Laboratory.

This work was supported by the NSF/Intel Visual and Experimental Computing Award #1538618, with additional support from Google, Microsoft, Pixar, and the University of Washington Animation Research Labs.