The present study examined the relative contributions of prosody and semantic context to the implicit processing of emotions from spoken language. In three separate tasks, we compared the degree to which happy and sad emotional prosody alone, emotional semantic context alone, and combined emotional prosody and semantic information primed subsequent decisions about an emotionally congruent or incongruent facial expression. In all three tasks, we observed a congruency effect, whereby prosodic or semantic features of the prime facilitated decisions about emotionally congruent faces. However, the extent of this priming was similar across the three tasks. Our results imply that, when implicitly processed in speech, prosody and semantic cues hold similar potential to activate emotion-related knowledge in memory, owing to underlying connections in associative memory shared by prosody, semantics, and facial displays of emotion.
Cite as: Pell, M.D., Jaywant, A., Monetta, L., Kotz, S.A. (2010) The contributions of prosody and semantic context in emotional speech processing. Proc. Speech Prosody 2010, paper 032
@inproceedings{pell10_speechprosody,
  author={Marc D. Pell and Abhishek Jaywant and Laura Monetta and Sonja A. Kotz},
  title={{The contributions of prosody and semantic context in emotional speech processing}},
  year=2010,
  booktitle={Proc. Speech Prosody 2010},
  pages={paper 032}
}