ISCA Archive — Interspeech 2017

Emotional Thin-Slicing: A Proposal for a Short- and Long-Term Division of Emotional Speech

Daniel Oliveira Peres, Dominic Watt, Waldemar Ferreira Netto

Human listeners are adept at successfully recovering linguistically- and socially-relevant information from very brief utterances. Studies using the ‘thin-slicing’ approach show that accurate judgments of the speaker’s emotional state can be made from minimal quantities of speech. The present experiment tested the performance of listeners exposed to thin-sliced samples of spoken Brazilian Portuguese selected to exemplify four emotions (anger, fear, sadness, happiness). Rather than attaching verbal labels to the audio samples, participants were asked to pair the excerpts with averaged facial images illustrating the four emotion categories. Half of the listeners were native speakers of Brazilian Portuguese, while the others were native English speakers who knew no Portuguese. Both groups of participants were found to be accurate and consistent in assigning the audio samples to the expected emotion category, but some emotions were more reliably identified than others. Fear was misidentified most frequently. We conclude that the phonetic cues to speakers’ emotional states are sufficiently salient and differentiated that listeners need only a few syllables upon which to base judgments, and that as a species we owe our perceptual sensitivity in this area to the survival value of being able to make rapid decisions concerning the psychological states of others.


doi: 10.21437/Interspeech.2017-1719

Cite as: Peres, D.O., Watt, D., Netto, W.F. (2017) Emotional Thin-Slicing: A Proposal for a Short- and Long-Term Division of Emotional Speech. Proc. Interspeech 2017, 591–595, doi: 10.21437/Interspeech.2017-1719

@inproceedings{peres17_interspeech,
  author={Daniel Oliveira Peres and Dominic Watt and Waldemar Ferreira Netto},
  title={{Emotional Thin-Slicing: A Proposal for a Short- and Long-Term Division of Emotional Speech}},
  year={2017},
  booktitle={Proc. Interspeech 2017},
  pages={591--595},
  doi={10.21437/Interspeech.2017-1719}
}