ISCA Archive Interspeech 2017

An ‘End-to-Evolution’ Hybrid Approach for Snore Sound Classification

Michael Freitag, Shahin Amiriparian, Nicholas Cummins, Maurice Gerczuk, Björn Schuller

Whilst snoring itself is usually not harmful to a person’s health, it can be an indication of Obstructive Sleep Apnoea (OSA), a serious sleep-related disorder. As a result, studies into using snoring as an acoustic-based marker of OSA are gaining in popularity. Motivated by this, the INTERSPEECH 2017 ComParE Snoring sub-challenge requires classification of the areas in the upper airways from which different snoring sounds originate. This paper explores a hybrid approach combining evolutionary feature selection, based on competitive swarm optimisation, with deep convolutional neural networks (CNNs). Feature selection is applied to novel deep spectrum features extracted directly from spectrograms using a pre-trained image classification CNN. Key results demonstrate that our hybrid approach can substantially increase the performance of a linear support vector machine on a set of low-level features extracted from the Snoring sub-challenge data. Even without subset selection, the deep spectrum features are sufficient to outperform the challenge baseline, and competitive swarm optimisation further improves system performance. Compared with the challenge baseline, unweighted average recall is increased from 40.6% to 57.6% on the development partition, and from 58.5% to 66.5% on the test partition, using 2246 of the 4096 deep spectrum features.
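The feature-selection step described above can be illustrated with a minimal sketch of competitive swarm optimisation (CSO) in the style of Cheng and Jin (2015): particles are paired at random each iteration, the fitter particle of each pair is kept unchanged, and the "loser" is pulled toward the winner and the swarm mean. Positions in [0, 1] are thresholded at 0.5 to obtain a binary feature mask. This is not the authors' implementation; the function names, parameters, and the toy fitness in the usage note are illustrative assumptions (in the paper, fitness would be the unweighted average recall of a linear SVM on the selected deep spectrum features).

```python
import numpy as np

def cso_feature_selection(fitness, n_features, swarm_size=20, iters=50,
                          phi=0.1, rng=None):
    """Sketch of competitive swarm optimisation for feature selection.

    `fitness` scores a boolean feature mask (higher is better). In the
    paper this would be an SVM's unweighted average recall; any callable
    works here. Returns the best binary mask found.
    """
    rng = np.random.default_rng(rng)
    pos = rng.random((swarm_size, n_features))   # continuous positions in [0, 1]
    vel = np.zeros_like(pos)
    for _ in range(iters):
        scores = np.array([fitness(p > 0.5) for p in pos])
        mean = pos.mean(axis=0)
        order = rng.permutation(swarm_size)
        # random pairwise competitions: winner kept, loser updated
        for i, j in zip(order[::2], order[1::2]):
            win, lose = (i, j) if scores[i] >= scores[j] else (j, i)
            r1, r2, r3 = rng.random((3, n_features))
            vel[lose] = (r1 * vel[lose]
                         + r2 * (pos[win] - pos[lose])
                         + phi * r3 * (mean - pos[lose]))
            pos[lose] = np.clip(pos[lose] + vel[lose], 0.0, 1.0)
    best = max(pos, key=lambda p: fitness(p > 0.5))
    return best > 0.5
```

As a usage example with a hypothetical fitness that rewards a known-informative subset (standing in for the SVM evaluation): `mask = cso_feature_selection(lambda m: m[:5].sum() - 0.2 * m[5:].sum(), n_features=20, rng=0)` returns a boolean mask over 20 features.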

doi: 10.21437/Interspeech.2017-173

Cite as: Freitag, M., Amiriparian, S., Cummins, N., Gerczuk, M., Schuller, B. (2017) An ‘End-to-Evolution’ Hybrid Approach for Snore Sound Classification. Proc. Interspeech 2017, 3507-3511, doi: 10.21437/Interspeech.2017-173

@inproceedings{freitag17_interspeech,
  author={Michael Freitag and Shahin Amiriparian and Nicholas Cummins and Maurice Gerczuk and Björn Schuller},
  title={{An ‘End-to-Evolution’ Hybrid Approach for Snore Sound Classification}},
  booktitle={Proc. Interspeech 2017},
  pages={3507--3511},
  doi={10.21437/Interspeech.2017-173},
  year={2017}
}