14th Annual Conference of the International Speech Communication Association

Lyon, France
August 25-29, 2013

Speech Activity Detection on YouTube Using Deep Neural Networks

Neville Ryant, Mark Liberman, Jiahong Yuan

Linguistic Data Consortium, USA

Speech activity detection (SAD) is an important first step in speech processing. Commonly used methods (e.g., frame-level classification using Gaussian mixture models (GMMs)) work well under stationary noise conditions, but do not generalize well to domains such as YouTube, where videos may exhibit a diverse range of environmental conditions. One solution is to augment the conventional cepstral features with additional, hand-engineered features (e.g., spectral flux, spectral centroid, multiband spectral entropies) which are robust to changes in environment and recording condition. An alternative approach, explored here, is to learn robust features during the course of training using an appropriate architecture such as deep neural networks (DNNs). In this paper we demonstrate that a DNN with input consisting of multiple frames of mel frequency cepstral coefficients (MFCCs) yields a drastically lower frame-wise error rate (19.6%) on YouTube videos compared to a conventional GMM-based system (40%).
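The approach described above classifies each frame as speech or non-speech from a window of stacked MFCC frames fed to a feedforward network. A minimal sketch of that pipeline is shown below; the context width, layer size, and the `TinySADNet` class are illustrative assumptions, not the architecture or hyperparameters used in the paper, and the network here is randomly initialized rather than trained.

```python
import numpy as np

def stack_context(mfccs, context=5):
    """Stack each frame with its +/- `context` neighbours (edges padded
    by repeating the first/last frame), giving one input vector per frame."""
    T, D = mfccs.shape
    padded = np.pad(mfccs, ((context, context), (0, 0)), mode="edge")
    return np.stack(
        [padded[t : t + 2 * context + 1].ravel() for t in range(T)]
    )

class TinySADNet:
    """Illustrative one-hidden-layer network with a sigmoid output that
    gives a per-frame speech posterior (weights here are random, untrained)."""
    def __init__(self, in_dim, hidden=64, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (in_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 0.1, (hidden, 1))
        self.b2 = np.zeros(1)

    def forward(self, X):
        h = 1.0 / (1.0 + np.exp(-(X @ self.W1 + self.b1)))   # hidden layer
        p = 1.0 / (1.0 + np.exp(-(h @ self.W2 + self.b2)))   # speech posterior
        return p.ravel()

# Usage: 100 frames of 13-dim MFCCs, 5 frames of context on each side.
mfccs = np.random.default_rng(1).normal(size=(100, 13))
X = stack_context(mfccs, context=5)       # shape (100, 13 * 11) = (100, 143)
net = TinySADNet(X.shape[1])
speech_prob = net.forward(X)              # per-frame posterior in (0, 1)
labels = speech_prob > 0.5                # boolean speech/non-speech decision
```

In practice the frame-wise decisions would typically be smoothed (e.g., with a median filter or an HMM) before segment boundaries are emitted.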


Bibliographic reference.  Ryant, Neville / Liberman, Mark / Yuan, Jiahong (2013): "Speech activity detection on YouTube using deep neural networks", In INTERSPEECH-2013, 728-731.