Modeling

Expression generation

Generating complex facial expressions from a 3D face model is usually a difficult task. Traditionally, several hundred parameters are necessary to control the face precisely, which makes the task accessible only to seasoned animators. More intuitive parameterizations in terms of emotions usually yield a fairly limited range of expressions.

We explored several ways of designing facial expressions from a library of expressions. The main idea is to generate new facial expressions by combining existing ones linearly. We combine 3D face models using 3D morphing techniques.

A combination of faces is specified for each contributing expression as a set of weights, one for each vertex in the model.
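The per-vertex weighted combination described above can be sketched as follows (a minimal illustration, assuming each expression is stored as an array of vertex positions over a shared mesh topology; the function name is ours):

```python
import numpy as np

def blend(expressions, weights):
    """Linearly combine face models vertex by vertex.

    expressions: list of (V, 3) arrays of vertex positions, one per
                 contributing expression (same mesh topology).
    weights:     list of (V,) arrays, one weight per vertex for each
                 expression; at each vertex the weights are assumed
                 to sum to 1.
    """
    out = np.zeros_like(expressions[0])
    for verts, w in zip(expressions, weights):
        out += w[:, None] * verts  # scale each vertex by its weight
    return out
```

With this representation, morphing between expressions reduces to a convex combination of corresponding vertex positions.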

Global Blend

The simplest way to combine facial expressions is to assign a single weight, or percentage, to each contributing expression. The following example shows a blend of 50% "surprise" and 50% "sadness", yielding a "worried" expression.

[Figure: 50% "surprise" + 50% "sadness" = "worried"]

Regional Blend

In order to span a broader set of facial expressions, we also combine faces in a localized way. We split the face into several regions that behave in a coherent fashion and assign weights independently to each region. The next example shows the design of a "fake smile" by combining the top part of a "neutral" expression with the bottom part of a "joy" expression.

[Figure: "fake smile" — top of "neutral" combined with bottom of "joy"]

Painting interface

To get finer control, we designed a painting interface in which the "colors" are facial expressions. Once an expression is selected, a 3D brush can be used to modify the blending weights in selected areas of the mesh. The strokes are applied directly on the rendering of the current facial blend, which is updated in real time. The fraction painted has a gradual drop-off and is controlled by the opacity of the brush.
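One way the painting step could work is to raise the selected expression's weight on vertices near the brush, with a drop-off scaled by the brush opacity. A minimal sketch under those assumptions (a linear drop-off is used here purely for illustration; the source does not specify the drop-off profile):

```python
import numpy as np

def paint(weights, vertices, center, radius, opacity):
    """Apply one brush stroke to the selected expression's weights.

    weights:  (V,) current blend weights of the selected expression.
    vertices: (V, 3) mesh vertex positions.
    center:   (3,) brush location on the mesh surface.
    """
    dist = np.linalg.norm(vertices - center, axis=1)
    falloff = np.clip(1.0 - dist / radius, 0.0, 1.0)  # gradual drop-off
    # Move the weights toward 1 by a fraction set by the opacity.
    return weights + opacity * falloff * (1.0 - weights)
```

Vertices outside the brush radius are left unchanged; at the brush center, a stroke with opacity 0.5 moves the weight halfway toward full contribution.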



Results

The following faces were generated from eight initial facial expressions (top left).