15th Annual Conference of the International Speech Communication Association

September 14-18, 2014

Noise Spectrum Estimation Using Gaussian Mixture Model-Based Speech Presence Probability for Robust Speech Recognition

M. J. Alam (1), Patrick Kenny (1), Pierre Dumouchel (2), Douglas O'Shaughnessy (3)

(1) CRIM, Canada
(2) École de Technologie Supérieure, Canada
(3) INRS-EMT, Canada

This work presents a noise spectrum estimator based on a Gaussian mixture model (GMM)-based speech presence probability (SPP) for robust speech recognition. The estimated noise spectrum is then used to compute a subband a posteriori signal-to-noise ratio (SNR). A sigmoid-shaped weighting rule is formed from this subband a posteriori SNR to enhance the speech spectrum in the auditory domain, and the enhanced spectrum is used in the Mel-frequency cepstral coefficient (MFCC) framework to extract robust features, denoted here as robust MFCCs (RMFCCs). The performance of the GMM-SPP noise-spectrum-estimator-based RMFCC feature extractor is evaluated on the AURORA-4 continuous speech recognition task. For comparison, we incorporate six existing noise estimation methods into this auditory-domain spectrum enhancement framework. The ETSI advanced front-end (ETSI-AFE), power normalized cepstral coefficients (PNCC), and robust compressive gammachirp cepstral coefficients (RCGCC) are also considered for comparison. Experimental speech recognition results show that, in terms of word accuracy, RMFCC provides average relative improvements of 8.1%, 6.9%, and 6.6% over RCGCC, ETSI-AFE, and PNCC, respectively. With the GMM-SPP-based noise estimation method, an average relative improvement of 3.6% in word recognition accuracy is obtained over the six other noise estimation methods.
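The abstract does not give the exact form of the sigmoid-shaped weighting rule, so the following is only a minimal sketch of the general idea: compute a subband a posteriori SNR from the noisy power spectrum and a noise power estimate (here a placeholder for the GMM-SPP estimate), then map it through a logistic function so that high-SNR subbands are kept and low-SNR subbands are attenuated. The function names and the parameters `slope`, `midpoint`, and `floor` are hypothetical, not taken from the paper.

```python
import numpy as np

def sigmoid_weight(post_snr_db, slope=0.5, midpoint=5.0):
    """Hypothetical sigmoid-shaped weighting rule: weights approach 1
    for high a posteriori SNR (speech-dominated subbands) and 0 for
    low SNR (noise-dominated subbands)."""
    return 1.0 / (1.0 + np.exp(-slope * (post_snr_db - midpoint)))

def enhance_subbands(power_spec, noise_est, floor=0.05):
    """Attenuate each subband of a noisy power spectrum using the
    sigmoid weight of its a posteriori SNR. `noise_est` stands in for
    the GMM-SPP noise power estimate; `floor` limits attenuation."""
    snr = power_spec / np.maximum(noise_est, 1e-12)   # a posteriori SNR
    post_snr_db = 10.0 * np.log10(np.maximum(snr, 1e-12))
    w = sigmoid_weight(post_snr_db)
    return np.maximum(w, floor) * power_spec

# Example: two subbands, one noise-dominated and one speech-dominated.
noisy = np.array([1.0, 10.0])
noise = np.array([1.0, 1.0])
enhanced = enhance_subbands(noisy, noise)
```

Because the weights lie in (0, 1] after flooring, the enhanced spectrum never exceeds the noisy one, and the attenuation is strongest where the estimated noise dominates.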


Bibliographic reference.  Alam, M. J. / Kenny, Patrick / Dumouchel, Pierre / O'Shaughnessy, Douglas (2014): "Noise spectrum estimation using Gaussian mixture model-based speech presence probability for robust speech recognition", In INTERSPEECH-2014, 2759-2763.