ISCA Archive AVSP 2019

Auditory-Visual Speech Segmentation in Infants

S. H. Jessica Tan, Denis Burnham

Speech segmentation, breaking the heard speech stream into words, is necessary for language acquisition. Visual prosody, like acoustic prosody, aids speech segmentation in adults [1], [2]. By contrast, surprisingly little is known about how visual speech information influences speech segmentation in infants, despite the important role that speech segmentation plays in language development and past research demonstrating that young infants can segment auditory-only speech. Further, studies on infants’ gaze behavior to the eye and mouth regions of a speaker’s face have found that infants perceive the mouth region as an important conveyor of articulatory information [3]. Such evidence suggests two hypotheses: (i) infants should benefit from visual speech information in word segmentation, and (ii) any visual speech benefit should be related to greater gaze directed to the speaker’s mouth than to the eyes. This study investigated whether (1) 7.5-month-old infants’ speech segmentation differed between auditory-only and auditory-visual conditions, and (2) gaze behavior modulated segmentation performance. Preliminary analyses reveal better segmentation performance in the auditory-visual condition, which may be accounted for by greater attention to the speaker’s mouth.

doi: 10.21437/AVSP.2019-9

Cite as: Tan, S.H.J., Burnham, D. (2019) Auditory-Visual Speech Segmentation in Infants. Proc. The 15th International Conference on Auditory-Visual Speech Processing, 43-46, doi: 10.21437/AVSP.2019-9
