Auditory-Visual Speech Processing 2007 (AVSP2007)

Kasteel Groenendaal, Hilvarenbeek, The Netherlands
August 31 - September 3, 2007

Objective Viseme Extraction and Audiovisual Uncertainty: Estimation Limits between Auditory and Visual Modes

Javier Melenchón, Jordi Simó, Germán Cobo, Elisa Martínez

Communications and Signal Theory Department, Enginyeria i Arquitectura La Salle, Universitat Ramon Llull, Barcelona, Spain

An objective method for extracting consonant visemes for any given Spanish-speaking person is proposed. The speaker's face is recorded while uttering a balanced set of sentences and stored as an audiovisual sequence. The visual and auditory modes are segmented by allophone, and a distance matrix is built to find allophones that are perceived as visually similar. The results correlate strongly with earlier, laborious subjective evaluations, even though those were carried out for English. In addition, estimation between the modes is studied, revealing a tradeoff between their performances: given a set of auditory groups and a set of visual groups for each grouping criterion, increasing the estimation performance of one mode translates into decreasing that of the other. Moreover, the tradeoff is very similar (<7% between maximum and minimum values) in all observed examples.
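As a rough illustration of the grouping step described above, the following Python sketch builds a pairwise distance matrix over per-allophone visual feature vectors and clusters them hierarchically into candidate visemes. The allophone list, the placeholder feature data, the Euclidean metric, and the clustering threshold are all illustrative assumptions, not the procedure used in the paper.

    import numpy as np
    from scipy.spatial.distance import pdist, squareform
    from scipy.cluster.hierarchy import linkage, fcluster

    # Hypothetical subset of Spanish consonant allophones (illustrative only).
    ALLOPHONES = ["p", "b", "m", "f", "t", "d", "s", "k", "g", "x"]

    rng = np.random.default_rng(0)
    # One mean visual feature vector per allophone; real features would come
    # from the recorded face sequence (random placeholder data here).
    features = rng.normal(size=(len(ALLOPHONES), 16))

    # Distance matrix between allophones in the visual mode.
    dist_condensed = pdist(features, metric="euclidean")
    dist_matrix = squareform(dist_condensed)  # full square form, for inspection

    # Agglomerative clustering: allophones closer than a threshold share a viseme.
    tree = linkage(dist_condensed, method="average")
    labels = fcluster(tree, t=4.0, criterion="distance")

    for viseme_id in sorted(set(labels)):
        members = [a for a, lbl in zip(ALLOPHONES, labels) if lbl == viseme_id]
        print(f"viseme {viseme_id}: {members}")

Average-linkage clustering is chosen here only because it operates directly on a distance matrix; any grouping criterion defined over the matrix would fit the same pattern.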


Bibliographic reference. Melenchón, Javier / Simó, Jordi / Cobo, Germán / Martínez, Elisa (2007): "Objective viseme extraction and audiovisual uncertainty: estimation limits between auditory and visual modes", in AVSP-2007, paper P13.