Imagine taking a selfie video with your mobile phone and getting as
output a 3D model of your head (face and 3D hair strands) that can be
later used in VR, AR, and other domains. State-of-the-art hair
reconstruction methods either use a single photo (compromising 3D
quality) or use multiple views but require manual user interaction
(manual hair segmentation and capture from fixed camera views spanning
a full 360 degrees). In this paper, we present a system that creates a
reconstruction from any video (even a selfie video), completely
automatically. We do not require specific views, since capturing -90
degree, 90 degree, and full back views is not feasible in a selfie
capture.
At the core of our system, in addition to the automation components,
hair strands are estimated and deformed in 3D (rather than in 2D as in
the state of the art), enabling superior results. We present
qualitative, quantitative, and Mechanical Turk human studies that
support the proposed system, and show results on a diverse set of
videos (8 different celebrity videos and 9 selfie mobile videos,
spanning age, gender, hair length, type, and styling).
Paper PDF, to appear in SIGGRAPH Asia 2018!
@ARTICLE{2018arXiv180904765L,
  author = {{Liang}, S. and {Huang}, X. and {Meng}, X. and {Chen}, K. and {Shapiro}, L.~G. and {Kemelmacher-Shlizerman}, I.},
  title = "{Video to Fully Automatic 3D Hair Model}",
  journal = {ArXiv e-prints},
  archivePrefix = "arXiv",
  eprint = {1809.04765},
  primaryClass = "cs.CV",
  keywords = {Computer Science - Computer Vision and Pattern Recognition, Computer Science - Graphics},
  year = 2018,
  month = sep,
  adsurl = {http://adsabs.harvard.edu/abs/2018arXiv180904765L},
  adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}