Shape and Motion under Varying Illumination: 
Unifying Structure from Motion, Photometric Stereo, and Multi-view Stereo

Li Zhang, Brian Curless, Aaron Hertzmann, and Steven M. Seitz

Abstract

This paper presents an algorithm for computing optical flow, shape, motion, lighting, and albedo from an image sequence of a rigidly-moving Lambertian object under distant illumination. The problem is formulated in a manner that subsumes structure from motion, multi-view stereo, and photometric stereo as special cases. The algorithm utilizes both spatial and temporal intensity variation as cues: the former constrains flow and the latter constrains surface orientation; combining both cues enables dense reconstruction of both textured and texture-less surfaces. The algorithm works by iteratively estimating affine camera parameters, illumination, shape, and albedo in an alternating fashion. Results are demonstrated on videos of hand-held objects moving in front of a fixed light and camera.
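To make the alternating estimation concrete, below is a minimal sketch (not the authors' implementation) of just the photometric part of the loop: given intensities already registered across frames by flow, it alternates least-squares solves for per-frame distant lighting and per-point albedo-scaled normals under a Lambertian model I = albedo * (n . l). The function name, data layout, and the omission of flow, affine camera estimation, and surface integrability are all assumptions made for illustration.

```python
import numpy as np

def alternate_lighting_shape(I, n_iters=20):
    """Illustrative sketch of the lighting/shape alternation (hypothetical helper,
    not the paper's code).

    I : (F, P) array of intensities for F frames and P registered surface points.
    Returns (L, N) where L is (F, 3) per-frame lighting (direction * intensity)
    and N is (P, 3) albedo-scaled normals (albedo = ||N||, normal = N / ||N||).
    """
    F, P = I.shape
    rng = np.random.default_rng(0)
    N = rng.normal(size=(P, 3))                        # random initial normals
    N /= np.linalg.norm(N, axis=1, keepdims=True)
    for _ in range(n_iters):
        # Lighting step: for each frame f, solve I[f, :] ~= N @ l_f in least squares.
        L, *_ = np.linalg.lstsq(N, I.T, rcond=None)    # shape (3, F)
        L = L.T
        # Shape step: for each point p, solve I[:, p] ~= L @ n_p in least squares.
        N, *_ = np.linalg.lstsq(L, I, rcond=None)      # shape (3, P)
        N = N.T
    return L, N
```

This bilinear factorization is only determined up to a linear ambiguity (the generalized bas-relief family); in the full method the motion and integrability constraints the abstract mentions are what pin the solution down.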

Citation (bibTex)
Li Zhang, Brian Curless, Aaron Hertzmann, and Steven M. Seitz. Shape and Motion under Varying Illumination: Unifying Structure from Motion, Photometric Stereo, and Multi-view Stereo. In Proceedings of the 9th IEEE International Conference on Computer Vision (ICCV), Nice, France, October 2003. [Paper: PDF (1.7M), PS.GZ (3.0M); Poster: PDF (1.7M)]


Results 

Rotating Figurine 


Input sequence 
(AVI 1.9M)

The final surface consists of 20,453 vertices, 
rendered from novel viewpoints and under novel lighting. 
(gray-shaded AVI 2.9M, albedo-mapped AVI 2.3M)

Coarse-to-fine reconstruction of the figurine

Rotating Box


Input sequence 
(AVI 4.4M)

The final surface consists of 5,345 vertices, rendered from novel viewpoints and under novel lighting. (gray-shaded AVI 1.9M)



See my previous work on Spacetime Stereo!