Classification of Nonverbal Human Produced Audio Events: A Pilot Study

Rachel E. Bouserhal, Philippe Chabot, Milton Sarria-Paja, Patrick Cardinal, Jérémie Voix


The accurate classification of nonverbal human-produced audio events opens the door to numerous applications beyond health monitoring. Voluntary events, such as tongue clicking and teeth chattering, may lead to a novel form of silent interface command. Involuntary events, such as coughing and throat clearing, may advance the current state of the art in hearing health research. The challenge in such applications is balancing the limited processing capabilities of a small intra-aural device against classification accuracy. In this pilot study, 10 nonverbal audio events are captured inside an ear canal occluded by an intra-aural device. The performance of three classifiers is investigated: Gaussian Mixture Model (GMM), Support Vector Machine (SVM), and Multi-Layer Perceptron (MLP). Each classifier is trained using three different feature vector structures constructed from the mel-frequency cepstral coefficients (MFCCs) and their derivatives. Fusion of the MFCCs with auditory-inspired amplitude modulation features (AAMF) is also investigated. Classification performance is compared between binaural and monaural training sets, as well as between noisy and clean conditions. The highest accuracy, 75.45%, is achieved using the GMM classifier with the binaural MFCC+AAMF clean training set. An accuracy of 73.47% is achieved by training and testing the classifier with the binaural clean and noisy dataset.
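The GMM-based classification named in the abstract typically follows a standard maximum-likelihood scheme: one mixture model is trained per event class, and a test segment is assigned to the class whose model scores it highest. The sketch below illustrates that scheme only; the class names, feature values, and mixture size are placeholders, not the paper's actual data or configuration.

```python
# Minimal sketch of per-class GMM classification, assuming synthetic
# "MFCC-like" 13-dimensional feature vectors. Real MFCC frames extracted
# from in-ear recordings would replace the random data used here.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Two stand-in event classes (hypothetical labels) with well-separated
# synthetic feature distributions.
train = {
    "click": rng.normal(loc=0.0, scale=1.0, size=(200, 13)),
    "cough": rng.normal(loc=5.0, scale=1.0, size=(200, 13)),
}

# Train one GMM per class; the mixture size is an arbitrary choice here.
models = {
    label: GaussianMixture(n_components=2, random_state=0).fit(feats)
    for label, feats in train.items()
}

def classify(frames):
    """Assign frames to the class whose GMM gives the highest average log-likelihood."""
    scores = {label: gmm.score(frames) for label, gmm in models.items()}
    return max(scores, key=scores.get)

test_frames = rng.normal(loc=5.0, scale=1.0, size=(50, 13))  # cough-like features
print(classify(test_frames))  # → cough
```

Scoring every class model and taking the argmax keeps the per-class models independent, so new event classes can be added by training one more GMM without retraining the others.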


 DOI: 10.21437/Interspeech.2018-2299

Cite as: Bouserhal, R.E., Chabot, P., Sarria-Paja, M., Cardinal, P., Voix, J. (2018) Classification of Nonverbal Human Produced Audio Events: A Pilot Study. Proc. Interspeech 2018, 1512-1516, DOI: 10.21437/Interspeech.2018-2299.


@inproceedings{Bouserhal2018,
  author={Rachel E. Bouserhal and Philippe Chabot and Milton Sarria-Paja and Patrick Cardinal and Jérémie Voix},
  title={Classification of Nonverbal Human Produced Audio Events: A Pilot Study},
  year=2018,
  booktitle={Proc. Interspeech 2018},
  pages={1512--1516},
  doi={10.21437/Interspeech.2018-2299},
  url={http://dx.doi.org/10.21437/Interspeech.2018-2299}
}