Visualizing changes to indoor scenes is important for many applications.
When looking for a new
place to live, we want to see how the interior looks not with the
current inhabitant's belongings, but with our own furniture.
Before purchasing a new sofa,
we want to visualize how it would look in our living room.
In this paper, we present a system that takes an RGBD scan of an indoor scene
and produces a scene model of the empty room, including light emitters, materials,
and the geometry of the non-cluttered room. Our system enables realistic
rendering of the empty room, not only under the original lighting conditions but
also under a variety of scene edits, including adding furniture, changing the
material properties of the walls, and relighting. These types of scene edits
enable many mixed reality applications in areas such as real estate, furniture
retail, and interior design. Our system is built on two novel technical
contributions: a 3D radiometric calibration process that recovers the appearance
of the scene in high dynamic range, and a global-illumination-aware inverse
rendering framework that simultaneously recovers reflectance properties of
scene surfaces and lighting properties for several light source types,
including generalized point and line lights.
Paper
Edward Zhang, Michael F. Cohen, and Brian Curless. 2016.
"Emptying, Refurnishing, and Relighting Indoor Spaces" ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia 2016)
(PDF | BibTex | ACM DL)
Data
The raw image data for several scans can be downloaded as well. Each zip file contains a set of the original autoexposed images (image????.png), the reconstructed room geometry (poisson.ply), and a camera file (camerafile.cam). The plaintext camera file has a short header followed by one line per image giving the camera orientation and the calibrated per-channel exposure for that image.
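The exact per-line field layout is not reproduced here, so the Python sketch below is purely hypothetical: it assumes each per-image line ends with the three calibrated R, G, B exposures, with the preceding values encoding the camera pose, and that a known number of header lines precedes the data. The function name read_camera_file and the header_lines parameter are illustrative, not part of the released data; consult the file's header for the actual layout.

def read_camera_file(path, header_lines=1):
    # Parse the plaintext camera file: skip an assumed number of header
    # lines, then read one whitespace-separated line per image.
    entries = []
    with open(path) as f:
        lines = f.read().splitlines()
    for line in lines[header_lines:]:
        if not line.strip():
            continue
        vals = [float(v) for v in line.split()]
        # ASSUMED layout: the last three values are the calibrated
        # R, G, B exposures; the remaining values encode the camera pose.
        entries.append({"pose": vals[:-3], "exposure_rgb": vals[-3:]})
    return entries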
Note that when using the exposures given in the camera file, be sure to apply them after linearizing the images (linearization in this paper was assumed to be a gamma transform with exponent 2.2).
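As an unofficial sketch of that order of operations, the following Python snippet (using NumPy and Pillow) linearizes an 8-bit autoexposed image with the gamma-2.2 transform and then applies the per-channel exposures. Treating a stored pixel value as radiance scaled by the exposure, and therefore dividing by the exposure, is an assumption of this sketch; verify the convention against the data.

import numpy as np
from PIL import Image

GAMMA = 2.2  # linearization exponent assumed by the paper

def image_to_radiance(png_path, exposure_rgb):
    # Load the 8-bit autoexposed image and scale to [0, 1].
    pixels = np.asarray(Image.open(png_path), dtype=np.float64) / 255.0
    # Linearize FIRST: undo the gamma transform of exponent 2.2.
    linear = pixels[..., :3] ** GAMMA
    # THEN apply the calibrated per-channel exposure. Dividing assumes the
    # stored value is radiance * exposure; this convention is an assumption.
    return linear / np.asarray(exposure_rgb, dtype=np.float64)

For example, image0000.png would be converted using the exposure triple from its corresponding line in camerafile.cam.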
Acknowledgments
We would like to thank Sameer Agarwal for his advice on optimization and Ceres
Solver. We also thank Pratheba Selvaraju for her assistance in
modeling the contents of refurnished scenes. These contents include 3D models
from CGTrader (users scopia, den_krasik, buchak72, belgrade_sim, peter_janov)
and TurboSquid (user shop3ds). Several additional 3D models were obtained
from the Stanford Computer Graphics Laboratory.
This work was supported by the NSF/Intel Visual and Experiential Computing Award
#1538618, with additional support from Google, Microsoft, Pixar, and the
University of Washington Animation Research Labs.