Video Matting of Complex Scenes
Yung-Yu Chuang1
Aseem Agarwala1
Brian Curless1
David Salesin1,2
Richard Szeliski2
1University of Washington
2Microsoft Research
Abstract
This paper describes a new framework for video matting, the
process of pulling a high-quality alpha matte and foreground from a video
sequence. The framework builds upon techniques in natural image
matting, optical flow computation, and background estimation. User
interaction consists of specifying a garbage matte, if background
estimation is needed, and hand-drawing keyframe segmentations into
"foreground," "background," and "unknown" regions. The segmentations,
called trimaps, are interpolated across the video volume using
forward and backward optical flow. Competing flow estimates are
combined based on information about where flow is likely to be accurate.
A Bayesian matting technique uses the flowed trimaps to yield
high-quality mattes of moving foreground elements with complex
boundaries filmed by a moving camera. A novel technique for smoke
matte extraction is also demonstrated.
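To make the trimap-interpolation step concrete, here is a minimal sketch (not the authors' implementation) of warping a hand-drawn keyframe trimap to a nearby frame with a dense optical-flow field. The label encoding, the backward-flow convention, and the dilation radius are assumptions chosen for illustration.

import numpy as np
from scipy.ndimage import binary_dilation

FG, BG, UNKNOWN = 255, 0, 128   # illustrative trimap label encoding

def warp_trimap(trimap, flow, dilate_radius=5):
    """Warp a keyframe trimap to a neighboring frame using dense optical flow.

    trimap: (H, W) uint8 array of {FG, BG, UNKNOWN} labels at the keyframe.
    flow:   (H, W, 2) array giving, for each pixel of the target frame, the
            (dx, dy) displacement back to its match in the keyframe.
    """
    h, w = trimap.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Follow the flow back to the keyframe and copy the nearest label.
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    warped = trimap[src_y, src_x].copy()
    # Flow is imperfect near the boundary, so widen the unknown band;
    # the per-frame matting step resolves the widened region.
    unknown = binary_dilation(warped == UNKNOWN, iterations=dilate_radius)
    warped[unknown] = UNKNOWN
    return warped

In the paper's pipeline, trimaps warped forward from the previous keyframe and backward from the next keyframe are combined, preferring whichever flow estimate is judged more reliable, and the resulting per-frame trimaps drive Bayesian matting.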
Citation (BibTeX)
Yung-Yu Chuang, Aseem Agarwala, Brian Curless, David H. Salesin, and Richard Szeliski.
Video Matting of Complex Scenes.
ACM Transactions on Graphics (Proceedings of ACM SIGGRAPH 2002),
Vol. 21, No. 3, pages 243-248, July 2002
Paper
SIGGRAPH 2002 paper (3.0MB PDF)
Video in SIGGRAPH 2002 video proceedings
720x480 DivX AVI (52.9MB)
360x240 DivX AVI (39.2MB)
(Download the DivX codec from www.divx.com.)
Results
The results are best viewed in video form. Please watch the SIGGRAPH videos
linked above.
Background replacement
[Side-by-side video frames: Input | Composite]
Background editing
[Side-by-side video frames: Input | Composite]
Smoke
[Side-by-side video frames: Input | Composite]
Additional results (Indeo 5.10 AVI videos)
Contact: cyy -a-t- cs.washington.edu