Modeling Stylized Character Expressions via Deep Learning
 

Deepali Aneja, Alex Colburn, Gary Faigin, Linda G. Shapiro, Barbara Mones

Results from our combined approach (DeepExpr and Geometry). The leftmost image in each row is the query image, and all characters are shown portraying the top match for the same expression: anger, fear, joy, disgust, neutral, sadness, and surprise (top to bottom).
 

Paper

Deepali Aneja, Alex Colburn, Gary Faigin, Linda G. Shapiro, and Barbara Mones. "Modeling Stylized Character Expressions via Deep Learning." Asian Conference on Computer Vision (ACCV), Springer, 2016. [PDF] [Supplementary PDF]

 

Facial Expression Research Group Database (FERG-DB)

Link to the database page

 

Abstract

We propose DeepExpr, a novel expression transfer approach from humans to multiple stylized characters. We first train two convolutional neural networks to recognize the expressions of humans and stylized characters independently. We then use a transfer learning technique to learn the mapping from humans to characters and create a shared embedding feature space. This embedding also enables both human expression-based and character expression-based image retrieval, and we use our perceptual model to retrieve character expressions corresponding to human expressions. We evaluate our method on a set of retrieval tasks on our collected dataset of stylized character expressions. We also show that the ranking order predicted by the proposed features is highly correlated with the ranking orders provided by a facial expression expert and by Mechanical Turk experiments.
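
To make the pipeline concrete, the PyTorch-style sketch below illustrates the general idea: train an expression classifier on human faces, initialize a second classifier for stylized characters from it and fine-tune, and then rank character images by similarity to a human query in the resulting embedding space. This is a minimal sketch under stated assumptions, not the authors' implementation: the network architecture, embedding size, grayscale inputs, the data loaders human_loader and char_loader, and all hyperparameters are illustrative placeholders.

import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_EXPRESSIONS = 7   # anger, disgust, fear, joy, neutral, sadness, surprise
EMB_DIM = 128         # assumed embedding size; not specified on this page

class ExprCNN(nn.Module):
    """Small expression classifier; its penultimate layer serves as the embedding."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(       # grayscale input assumed (1 channel)
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.embed = nn.Linear(128, EMB_DIM)
        self.classify = nn.Linear(EMB_DIM, NUM_EXPRESSIONS)

    def forward(self, x):
        z = self.embed(self.features(x).flatten(1))
        return self.classify(F.relu(z)), z          # logits and embedding

def train_classifier(model, loader, epochs=10, lr=1e-3):
    """Plain supervised training on (image, expression-label) batches."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for images, labels in loader:
            logits, _ = model(images)
            loss = F.cross_entropy(logits, labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model

# Step 1: train the human-expression CNN (human_loader is a hypothetical DataLoader).
# human_cnn = train_classifier(ExprCNN(), human_loader)

# Step 2: transfer to the character domain by initializing from the human network
# and fine-tuning on character data, so both domains map into one feature space.
# char_cnn = ExprCNN()
# char_cnn.load_state_dict(human_cnn.state_dict())
# char_cnn = train_classifier(char_cnn, char_loader, lr=1e-4)

def retrieve(query_image, human_cnn, char_cnn, char_images, k=5):
    """Indices of the k character images closest to a human query image
    in the shared embedding space (cosine similarity)."""
    with torch.no_grad():
        _, q = human_cnn(query_image.unsqueeze(0))   # (1, EMB_DIM)
        _, db = char_cnn(char_images)                # (N, EMB_DIM)
        sims = F.cosine_similarity(q, db)            # (N,)
    return sims.topk(k).indices

In this sketch, retrieval simply ranks character images by cosine similarity between deep features; the combined approach shown in the results above additionally uses facial geometry, which is omitted here.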

 

Pipeline

 

Bibtex


@inproceedings{aneja2016modeling,
  title={Modeling Stylized Character Expressions via Deep Learning},
  author={Aneja, Deepali and Colburn, Alex and Faigin, Gary and Shapiro, Linda and Mones, Barbara},
  booktitle={Asian Conference on Computer Vision},
  pages={136--153},
  year={2016},
  organization={Springer}
}