Multi-Level Adaptive Speech Activity Detector for Speech in Naturalistic Environments

Bidisha Sharma, Rohan Kumar Das, Haizhou Li


Speech activity detection (SAD) is a component of many speech processing applications. Traditional SAD approaches use signal energy as evidence to identify speech regions. However, such methods perform poorly in uncontrolled environments. In this work, we propose a novel SAD approach that makes a multi-level decision from signal knowledge in an adaptive manner. The multi-level evidences considered are the modulation spectrum and the smoothed Hilbert envelope of the linear prediction (LP) residual. The modulation spectrum has compelling parallels to the dynamics of speech production and captures information only for the speech component. In contrast, the Hilbert envelope of the LP residual captures the excitation source aspect of speech. In uncontrolled scenarios, these evidences are found to be robust to signal distortions and are thus expected to work well. In view of the different levels of interference present in the signal, we propose a quality factor that controls the speech/non-speech decision adaptively. We refer to this method as multi-level adaptive SAD and evaluate it on the Fearless Steps corpus, collected in naturalistic environments during the Apollo-11 mission. We achieve a detection cost function of 7.35% with the proposed multi-level adaptive SAD on the evaluation set of the Fearless Steps 2019 challenge corpus.
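To make the second evidence concrete, the sketch below computes a smoothed Hilbert envelope of the LP residual with NumPy/SciPy. This is an illustrative reconstruction, not the authors' implementation: the LP order, smoothing window, and the Levinson-Durbin LPC routine are assumptions chosen for clarity, and the paper's modulation-spectrum evidence and adaptive quality factor are not shown.

```python
# Illustrative sketch (assumed parameters, not the paper's code):
# smoothed Hilbert envelope of the LP residual, one of the two
# evidences the proposed SAD combines.
import numpy as np
from scipy.signal import hilbert, lfilter

def lpc(x, order):
    """LP coefficients [1, a1, ..., ap] via autocorrelation + Levinson-Durbin."""
    n = len(x)
    r = np.correlate(x, x, mode="full")[n - 1 : n + order]  # autocorrelation lags 0..order
    a = np.array([1.0])
    err = r[0]
    for i in range(1, order + 1):
        # reflection coefficient from the current prediction error
        k = -(r[i] + np.dot(a[1:], r[i - 1 : 0 : -1])) / err
        a = np.concatenate([a, [0.0]])
        a = a + k * a[::-1]          # Levinson-Durbin order update
        err *= (1.0 - k * k)
    return a

def lp_residual_envelope(x, order=10, smooth=50):
    """Smoothed Hilbert envelope of the LP residual of signal x."""
    a = lpc(x, order)
    resid = lfilter(a, [1.0], x)     # inverse filtering: residual approximates excitation
    env = np.abs(hilbert(resid))     # magnitude of the analytic signal
    win = np.hanning(smooth)
    win /= win.sum()
    return np.convolve(env, win, mode="same")  # moving-average smoothing
```

Since the envelope tracks the excitation source rather than the vocal-tract spectrum, it tends to stay low in non-speech regions even when broadband interference raises the raw signal energy, which is the property the abstract appeals to.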


DOI: 10.21437/Interspeech.2019-1928

Cite as: Sharma, B., Das, R.K., Li, H. (2019) Multi-Level Adaptive Speech Activity Detector for Speech in Naturalistic Environments. Proc. Interspeech 2019, 2015-2019, DOI: 10.21437/Interspeech.2019-1928.


@inproceedings{Sharma2019,
  author={Bidisha Sharma and Rohan Kumar Das and Haizhou Li},
  title={{Multi-Level Adaptive Speech Activity Detector for Speech in Naturalistic Environments}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={2015--2019},
  doi={10.21437/Interspeech.2019-1928},
  url={http://dx.doi.org/10.21437/Interspeech.2019-1928}
}