Auditory-Visual Speech Processing
(AVSP 2001)

September 7-9, 2001
Aalborg, Denmark

Analysis of Audio-Video Correlation in Vowels in Australian English

Roland Goecke (1), J. Bruce Millar (1), Alexander Zelinsky (2), Jordi Robert-Ribes (3)

(1) Computer Sciences Laboratory and (2) Robotic Systems Laboratory, Research School of Information Sciences and Engineering, Australian National University, Canberra, Australia
(3) Cable & Wireless Optus, North Sydney, NSW, Australia

This paper investigates the statistical relationship between acoustic and visual speech features for vowels. We extract such features from our stereo-vision AV speech data corpus of Australian English. A principal component analysis is performed to determine which data points of each feature's parameter curve are the most important for representing the shape of that curve. This is followed by a canonical correlation analysis to determine which principal components, and hence which data points of which features, correlate most strongly across the two modalities. Several strong correlations are reported between acoustic and visual features. In particular, the formants F1 and F2 were strongly correlated with mouth height. Knowledge about the correlation of acoustic and visual features can be used to predict acoustic features from visual features in order to improve the recognition rate of automatic speech recognition systems in acoustically noisy environments.
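A minimal sketch of the two-stage analysis described above (per-modality PCA followed by CCA across modalities) is given below. The data shapes, feature names, and the use of scikit-learn are illustrative assumptions, not the authors' original implementation.

# Sketch of the PCA + canonical correlation analysis (CCA) pipeline from the abstract.
# Hypothetical data: one row per vowel token; columns are sampled points of each
# feature's parameter curve (e.g. F1/F2 tracks acoustically, mouth height/width visually).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_tokens = 200
acoustic = rng.normal(size=(n_tokens, 20))   # placeholder for F1/F2 curve samples
visual = rng.normal(size=(n_tokens, 20))     # placeholder for mouth height/width samples

# Step 1: PCA per modality, keeping the components that best represent
# the shape of each parameter curve.
A = PCA(n_components=5).fit_transform(acoustic)
V = PCA(n_components=5).fit_transform(visual)

# Step 2: CCA between the two sets of principal components to find the
# projections that correlate most strongly across the two modalities.
cca = CCA(n_components=2)
U, W = cca.fit_transform(A, V)

# Canonical correlations between paired canonical variates.
corrs = [np.corrcoef(U[:, i], W[:, i])[0, 1] for i in range(U.shape[1])]
print("Canonical correlations:", np.round(corrs, 3))

With real feature curves in place of the random placeholders, the strongest canonical correlations indicate which combinations of acoustic and visual data points carry shared information, which is the basis for predicting acoustic features from visual ones.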



Bibliographic reference.  Goecke, Roland / Millar, J. Bruce / Zelinsky, Alexander / Robert-Ribes, Jordi (2001): "Analysis of audio-video correlation in vowels in Australian English", In AVSP-2001, 115-120.