Effects of auditory, visual and gestural input on the perceptual learning of tones

Katelyn Eng, Beverly Hannah, Keith Leung, Yue Wang


Research has shown that audio-visual speech information facilitates second language (L2) speech learning, yet multiple input modalities including co-speech gestures show mixed results. While L2 learners may benefit from additional channels of input for processing challenging L2 sounds, multiple resources may also be inhibitory if learners experience excessive cognitive load. The present study examines the use of metaphoric hand gestures in training English perceivers to identify Mandarin tones. Native Mandarin speakers produced tonal stimuli with simultaneous hand gestures mimicking pitch contours in space. The English participants were trained to identify Mandarin tones in one of four modalities: audio-only (AO), audio-visual (AV, speaker voice and face), audio-gesture (AG, speaker voice and hand gestures), and audio-visual-gesture (AVG). Results show significant improvements in tone identification from pre- to post-training tests across all four training groups, demonstrating that gestural as well as visual articulatory information may facilitate tone perception. However, further analyses of individual tones reveal some group differences. Most notably, the AVG group showed a slower learning curve during training than the other trainee groups for Tone 4, the least accurately identified tone, indicating a negative effect of multiple input modalities on the perception of difficult L2 sounds. In contrast, for Tones 2 and 3, the AG group showed slower learning than the AV group, presumably because the similar gestural trajectories for these two tones made the gestural input less distinct. Overall, the results suggest a positive role of gestures in tone identification, one that may also be constrained by phonetic and cognitive demands.


DOI: 10.21437/SpeechProsody.2014-169

Cite as: Eng, K., Hannah, B., Leung, K., Wang, Y. (2014) Effects of auditory, visual and gestural input on the perceptual learning of tones. Proc. 7th International Conference on Speech Prosody 2014, 905-909. DOI: 10.21437/SpeechProsody.2014-169.


@inproceedings{Eng2014,
  author={Katelyn Eng and Beverly Hannah and Keith Leung and Yue Wang},
  title={{Effects of auditory, visual and gestural input on the perceptual learning of tones}},
  year={2014},
  booktitle={Proc. 7th International Conference on Speech Prosody 2014},
  pages={905--909},
  doi={10.21437/SpeechProsody.2014-169},
  url={http://dx.doi.org/10.21437/SpeechProsody.2014-169}
}