Auditory-Visual Speech Processing (AVSP) 2010

Hakone, Kanagawa, Japan
September 30-October 3, 2010

Building Speaker-Specific Lip Models for Talking Heads from 3D Face Data

Takaaki Kuratate (1,2), Marcia Riley (1)

(1) Institute for Cognitive Systems, Technical University Munich, Germany
(2) MARCS Auditory Laboratories, University of Western Sydney, Australia

When creating realistic talking head animations, accurate modeling of the speech articulators is important for speech perceptibility. Previous lip modeling methods, such as simple numerical lip models, focus on creating a general lip model without capturing speaker-specific lip variations. Here we present a method for creating accurate speaker-specific lip representations that retain the individual characteristics of a speaker's lips via an adaptive numerical approach using 3D scanned surface and MRI data. By automatically adjusting spline parameters to minimize the error between node points of the lip model and the raw 3D surface, new 3D lips are created efficiently and easily. The resulting lip models will be used in our talking head animation system to evaluate auditory-visual speech perception, and to analyze our 3D face database for statistically relevant lip features.
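The core idea of fitting a parametric curve to raw 3D surface points by minimizing a node-to-surface error can be sketched as follows. This is a minimal illustration, not the authors' actual spline formulation: the lip-like contour, the cubic per-coordinate least-squares fit, and the node count are all assumptions made for the example.

```python
import numpy as np

# Synthetic "raw 3D surface" samples along a lip-like contour (assumption,
# standing in for the paper's 3D scanned surface data).
t = np.linspace(0.0, 1.0, 50)                       # curve parameter
raw = np.stack([np.cos(np.pi * t),                  # x: left-to-right sweep
                0.3 * np.sin(2 * np.pi * t),        # y: upper/lower lip bulge
                0.1 * np.sin(np.pi * t)], axis=1)   # z: forward protrusion

# Least-squares cubic fit per coordinate: a simplified stand-in for
# "adjusting spline parameters automatically to minimize the error".
coeffs = [np.polyfit(t, raw[:, k], deg=3) for k in range(3)]

# Evaluate the fitted model at a coarse set of node points.
t_nodes = np.linspace(0.0, 1.0, 10)
model = np.stack([np.polyval(c, t_nodes) for c in coeffs], axis=1)

# RMS distance from each model node to its nearest raw surface sample,
# i.e. the residual the adaptive fit is driving down.
d = np.linalg.norm(model[:, None, :] - raw[None, :, :], axis=2)
rms = np.sqrt(np.mean(d.min(axis=1) ** 2))
print(f"node-to-surface RMS error: {rms:.4f}")
```

In the paper's setting the same residual would be computed against the dense scanned surface rather than a synthetic contour, and the spline parameters (not polynomial coefficients) would be the quantities adjusted.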


Bibliographic reference. Kuratate, Takaaki / Riley, Marcia (2010): "Building speaker-specific lip models for talking heads from 3D face data", in AVSP-2010, paper P9.