Auditory-Visual Speech Processing (AVSP'98)

December 4-6, 1998
Terrigal - Sydney, Australia

3-D Face Point Trajectory Synthesis Using An Automatically Derived Visual Phoneme Similarity Matrix

Levent M. Arslan, David Talkin

Entropic Inc. (USA)

This paper presents a novel algorithm that generates three-dimensional face point trajectories for a given speech file, with or without its text. The proposed algorithm first employs an off-line training phase. In this phase, recorded face point trajectories, along with their speech data and phonetic labels, are used to generate phonetic codebooks. These codebooks consist of both acoustic and visual features. Acoustics are represented by line spectral frequencies (LSF), and face points are represented by their principal components (PC). During the synthesis stage, the speech input is rated in terms of its similarity to the codebook entries, and each codebook entry is assigned a weighting coefficient based on that similarity. If phonetic information about the test speech is available, it is used to restrict the codebook search to the few codebook entries that are visually closest to the current phoneme (a visual phoneme similarity matrix is generated for this purpose). The weights are then used to synthesize the principal components of the face point trajectory. The performance of the algorithm was tested on held-out data, and the synthesized face point trajectories showed a correlation of 0.73 with the true face point trajectories.
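The following is a minimal sketch of the codebook-weighted synthesis step described above, assuming a per-frame mapping from LSF vectors to face-point PCs. The function name, the exponential similarity-to-weight mapping, the top-N phoneme restriction, and all parameter names are illustrative assumptions; the paper's actual distance measure and weighting scheme may differ.

```python
import numpy as np

def synthesize_face_pcs(input_lsf, acoustic_codebook, visual_codebook,
                        phoneme_sim=None, phoneme_idx=None, top_n=5, gamma=5.0):
    """Map one frame of LSFs to face-point principal components via codebook weighting.

    input_lsf         : (d_a,) LSF vector for the current speech frame
    acoustic_codebook : (K, d_a) LSF centroid for each phonetic codebook entry
    visual_codebook   : (K, d_v) face-point PC centroid for each entry
    phoneme_sim       : optional (P, K) visual phoneme similarity matrix
                        (higher value = visually more similar); an assumption
    phoneme_idx       : index of the current phoneme when text/labels are known
    gamma             : assumed sharpness of the similarity-to-weight mapping
    """
    if phoneme_sim is not None and phoneme_idx is not None:
        # Restrict the search to the entries visually closest to the current phoneme
        candidates = np.argsort(phoneme_sim[phoneme_idx])[-top_n:]
    else:
        candidates = np.arange(len(acoustic_codebook))

    # Acoustic distance of the input frame to each candidate codebook entry
    dists = np.linalg.norm(acoustic_codebook[candidates] - input_lsf, axis=1)

    # Convert distances to normalized weights (softmax-style; an assumption)
    weights = np.exp(-gamma * dists)
    weights /= weights.sum()

    # Weighted combination of the visual (PC) codebook entries
    return weights @ visual_codebook[candidates]
```

Applied frame by frame, this yields a PC trajectory that can be projected back through the principal-component basis to recover the 3-D face point trajectory.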



Bibliographic reference.  Arslan, Levent M. / Talkin, David (1998): "3-D face point trajectory synthesis using an automatically derived visual phoneme similarity matrix", In AVSP-1998, 175-180.