Speech and music discrimination: Human detection of differences between music and speech based on rhythm

Madeleine Stanev, Johannes Redlich, Christian Knörzer, Ninett Rosenfeld, Athanasios Lykartsis


Rhythm is one of the basic acoustic components of speech and singing. It is therefore interesting to investigate the capability of subjects to distinguish between speech and singing when only rhythm remains as an acoustic cue. For this study we developed a method to eliminate all linguistic components but rhythm from speech and singing signals. The study was conducted online, and participants could listen to the stimuli via loudspeakers or headphones. The analysis of the survey shows that people are able to discriminate significantly between speech and singing after the signals have been altered. Furthermore, our results reveal specific features that supported participants in their decision, such as differences in regularity and tempo between singing and speech samples. The hypothesis that musically trained people perform more successfully on the task was not confirmed. The results of the study are important for understanding the structure of and differences between speech and singing, for use in further studies, and for future applications in the field of speech recognition.


DOI: 10.21437/SpeechProsody.2016-46

Cite as:

Stanev, M., Redlich, J., Knörzer, C., Rosenfeld, N., Lykartsis, A. (2016) Speech and music discrimination: Human detection of differences between music and speech based on rhythm. Proc. Speech Prosody 2016, 222-226.

Bibtex
@inproceedings{Stanev+2016,
  author={Madeleine Stanev and Johannes Redlich and Christian Knörzer and Ninett Rosenfeld and Athanasios Lykartsis},
  title={Speech and music discrimination: Human detection of differences between music and speech based on rhythm},
  year={2016},
  booktitle={Speech Prosody 2016},
  doi={10.21437/SpeechProsody.2016-46},
  url={http://dx.doi.org/10.21437/SpeechProsody.2016-46},
  pages={222--226}
}