We address the problem of single-photo age progression and regression: predicting how a person might look in the future, or how they looked in the past. Most existing aging methods are limited to changing texture, overlooking the transformations in head shape that occur during human aging and growth. This restricts previous methods to aging adults into slightly older adults; applied to photos of children, they do not produce quality results. We propose a novel multi-domain image-to-image generative adversarial network architecture whose learned latent space models a continuous, bi-directional aging process. The network is trained on the FFHQ dataset, which we labeled for age, gender, and semantic segmentation. Fixed age classes are used as anchors to approximate continuous age transformation. Our framework can predict a full head portrait for ages 0--70 from a single photo, modifying both the texture and shape of the head. We demonstrate results on a wide variety of photos and datasets, and show significant improvement over the state of the art.
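To make the anchor idea concrete, below is a minimal sketch (not the authors' implementation) of how fixed age classes could approximate a continuous age transformation: each class contributes a learned latent anchor, and an arbitrary target age is mapped to a convex combination of the two nearest anchors. The class centers, latent dimension, and all names here are illustrative assumptions.

# Sketch of anchor-based continuous age encoding. Assumed, not the paper's code:
# age class centers, latent dimension, and module/variable names are hypothetical.
import torch
import torch.nn as nn

AGE_CLASS_CENTERS = [3, 9, 17, 30, 45, 60]  # assumed centers of the fixed age classes (years)

class AgeAnchorEncoder(nn.Module):
    def __init__(self, latent_dim: int = 50):
        super().__init__()
        # One learned latent anchor per fixed age class.
        self.anchors = nn.Parameter(torch.randn(len(AGE_CLASS_CENTERS), latent_dim))

    def forward(self, target_age: float) -> torch.Tensor:
        centers = AGE_CLASS_CENTERS
        # Clamp ages outside the modeled range to the nearest anchor.
        if target_age <= centers[0]:
            return self.anchors[0]
        if target_age >= centers[-1]:
            return self.anchors[-1]
        # Find the two anchors bracketing the target age and blend linearly,
        # yielding a continuous latent age code between the fixed classes.
        for i in range(len(centers) - 1):
            lo, hi = centers[i], centers[i + 1]
            if lo <= target_age <= hi:
                t = (target_age - lo) / (hi - lo)
                return (1 - t) * self.anchors[i] + t * self.anchors[i + 1]

encoder = AgeAnchorEncoder()
z_age = encoder(24.0)  # latent code for an "age 24" target, to condition a generator

In a full pipeline, such a latent age code would condition the generator alongside the identity features extracted from the input photo; the linear blend is one simple choice, and smoother interpolation schemes are equally plausible.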
@inproceedings{orel2020lifespan,
  title={Lifespan Age Transformation Synthesis},
  author={Or-El, Roy and Sengupta, Soumyadip and Fried, Ohad and Shechtman, Eli and Kemelmacher-Shlizerman, Ira},
  booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
  year={2020}
}
Acknowledgements
We wish to thank Xuan Luo and Aaron Wetzler for their valuable discussions and
advice, and Thevina Dokka for her help in building the FFHQ-Aging dataset.
This work was supported in part by Futurewei Technologies. Ohad Fried was supported
by the Brown Institute for Media Innovation.