Interspeech'2005 - Eurospeech

Lisbon, Portugal
September 4-8, 2005

Discrimination of Speech, Musical Instruments and Singing Voices Using the Temporal Patterns of Sinusoidal Segments in Audio Signals

Toru Taniguchi (1), Akishige Adachi (1), Shigeki Okawa (2), Masaaki Honda (1), Katsuhiko Shirai (1)

(1) Waseda University, Tokyo, Japan; (2) Chiba Institute of Technology, Japan

We developed a method for discriminating speech, musical instruments and singing voices based on sinusoidal decomposition of audio signals. Although many studies have addressed audio classification, few have tackled the problem of temporal overlap between sound categories. To cope with this problem, we used sinusoidal segments of variable length as the discrimination units, whereas most traditional approaches have used fixed-length units. The discrimination is based on the temporal characteristics of the sinusoidal segments. We achieved an average discrimination rate of 71.56% when classifying sinusoidal segments in non-mixed audio data. On a time-segment basis, accuracies of 87.9% for non-mixed-category audio data and 66.4% for 2-mixed-category data were achieved. A comparison of the proposed method with an MFCC-based method demonstrated the effectiveness of temporal features and the importance of using both spectral and temporal characteristics.
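The abstract describes extracting variable-length sinusoidal segments as classification units. As an illustrative sketch only (not the authors' implementation, whose details are in the full paper), one common way to obtain such segments is to pick spectral peaks in each STFT frame and greedily link nearby peaks across frames into tracks whose lengths depend on how long each partial persists; all parameter values below (frame size, hop, frequency-jump tolerance) are assumptions:

```python
# Hypothetical sketch of sinusoidal segment extraction: per-frame spectral
# peak picking followed by greedy frame-to-frame peak linking. This is an
# assumed simplification, not the method from the paper.
import numpy as np

def stft_peaks(signal, sr, frame=1024, hop=512, n_peaks=5):
    """Return, for each STFT frame, a list of (frequency_hz, magnitude) peaks."""
    window = np.hanning(frame)
    peaks_per_frame = []
    for start in range(0, len(signal) - frame + 1, hop):
        spectrum = np.abs(np.fft.rfft(signal[start:start + frame] * window))
        # strict local maxima of the magnitude spectrum
        idx = [k for k in range(1, len(spectrum) - 1)
               if spectrum[k] > spectrum[k - 1] and spectrum[k] > spectrum[k + 1]]
        idx.sort(key=lambda k: spectrum[k], reverse=True)  # strongest first
        peaks_per_frame.append([(k * sr / frame, spectrum[k]) for k in idx[:n_peaks]])
    return peaks_per_frame

def link_tracks(peaks_per_frame, max_jump_hz=50.0):
    """Greedily link peaks in adjacent frames into variable-length tracks.

    Each track is a list of (frame_index, frequency_hz, magnitude); a track
    ends when no peak in the next frame lies within max_jump_hz of it.
    """
    finished, active = [], []
    for t, peaks in enumerate(peaks_per_frame):
        unused = list(peaks)
        next_active = []
        for track in active:
            last_freq = track[-1][1]
            match = min(unused, key=lambda p: abs(p[0] - last_freq), default=None)
            if match is not None and abs(match[0] - last_freq) <= max_jump_hz:
                track.append((t, match[0], match[1]))
                unused.remove(match)
                next_active.append(track)
            else:
                finished.append(track)  # no continuation: track ends here
        for freq, mag in unused:
            next_active.append([(t, freq, mag)])  # leftover peaks start new tracks
        active = next_active
    return finished + active
```

On a steady 440 Hz tone, the dominant track spans every frame, while on mixed or transient material tracks naturally vary in length, which is what makes them usable as variable-length discrimination units.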


Bibliographic reference.  Taniguchi, Toru / Adachi, Akishige / Okawa, Shigeki / Honda, Masaaki / Shirai, Katsuhiko (2005): "Discrimination of speech, musical instruments and singing voices using the temporal patterns of sinusoidal segments in audio signals", In INTERSPEECH-2005, 589-592.