Auditory-Visual Speech Processing (AVSP) 2013

Annecy, France
August 29 - September 1, 2013

Acoustic and Visual Adaptations in Speech Produced to Counter Adverse Listening Conditions

Valerie Hazan (1), Jeesun Kim (2)

(1) Department of Speech Hearing and Phonetic Sciences, University College London, London, UK
(2) The MARCS Institute, University of Western Sydney, Sydney, Australia

This study investigated whether communication modality affects talkers' speech adaptation to an interlocutor exposed to background noise. It was predicted that adaptations to lip gestures would be greater, and acoustic ones reduced, when communicating face-to-face. We video-recorded 14 Australian-English talkers (Talker A) speaking in a face-to-face or auditory-only setting with interlocutors who were either in quiet or in noise. Focusing on keyword productions, acoustic-phonetic adaptations were examined via measures of vowel intensity, pitch, keyword duration, vowel F1/F2 space and VOT, and visual adaptations via measures of vowel interlip area. The interlocutor's adverse listening conditions led Talker A to reduce speech rate, increase pitch and expand vowel space. These adaptations were not significantly reduced in the face-to-face setting, although there was a trend towards a smaller degree of vowel space expansion than in the auditory-only setting. Visible lip gestures were more enhanced overall in the face-to-face setting, but also increased in the auditory-only setting when countering the effects of noise. This study therefore showed only small effects of communication modality on speech adaptations.

Index Terms: speech adaptation, audiovisual communication, speech in noise

Full Paper

Bibliographic reference. Hazan, Valerie / Kim, Jeesun (2013): "Acoustic and visual adaptations in speech produced to counter adverse listening conditions", in AVSP-2013, 93-98.