The Visual Turing Test for Scene Reconstruction
Qi Shan, Riley Adams, Brian Curless, Yasutaka Furukawa, and Steven M. Seitz
3DV13 paper [pdf (10M)] (won the Best Paper Award)
Supplementary file [pdf (9M)]
Supplementary video [YouTube] [360p 29M] [720p 124M]
Improved result for the Colosseum (June 2013 version) [YouTube]
Visual Turing Test images [Download 16M]
Abstract: We present the first large-scale system for capturing and rendering relightable scene reconstructions from massive unstructured photo collections taken from different viewpoints under varying illumination conditions. We combine photos from many sources, including Flickr-based ground-level imagery, oblique aerial views, and street view, to recover models that are significantly more complete and detailed than previously demonstrated. We demonstrate the ability to match both the viewpoint and illumination of arbitrary input photos, enabling a Visual Turing Test in which a photo and a rendering are viewed side by side and the observer must guess which is which. While we cannot yet fool human perception, the gap is closing.
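The evaluation described in the abstract is a two-alternative forced-choice protocol: each trial shows a photo/rendering pair in random left-right order and records whether the observer identifies the real photograph. Below is a minimal sketch of such a protocol, not the authors' actual study code; the file names and the random stand-in observer are hypothetical.

```python
import random

def visual_turing_trial(photo, rendering, observer):
    """One forced-choice trial: present the photo and the rendering in
    random left/right order, ask the observer which side is the real
    photograph, and return True if they answered correctly."""
    left, right = random.sample([photo, rendering], 2)
    side = observer(left, right)  # observer returns "l" or "r"
    chosen = left if side == "l" else right
    return chosen == photo

def fooling_rate(results):
    """Fraction of wrong answers. 0.5 means observers are at chance
    (renderings pass the test); 0.0 means they are never fooled."""
    return 1.0 - sum(results) / len(results)

# Stand-in observer that guesses at random; a real study would record
# human responses instead. File names are placeholders.
trials = [
    visual_turing_trial(f"photo_{i:03d}.jpg", f"render_{i:03d}.jpg",
                        lambda left, right: random.choice("lr"))
    for i in range(1000)
]
print(f"fooling rate: {fooling_rate(trials):.3f}")
```

With the random stand-in observer the fooling rate converges to 0.5 by construction; with human subjects, a rate approaching 0.5 would indicate renderings indistinguishable from photos.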
Our Large-Scale Models
Rendered Images (Right) vs. Ground Truth Images (Left)