INTERSPEECH 2014
15th Annual Conference of the International Speech Communication Association

Singapore
September 14-18, 2014

Noise Robust Speech Recognition Based on Noise-Adapted HMMs Using Speech Feature Compensation

Yong-Joo Chung

Keimyung University, Korea

In conventional VTS-based noisy speech recognition methods, the parameters of a clean speech HMM are adapted to the test noisy speech, or the original clean speech is estimated from the test noisy speech. However, in noisy speech recognition, improved performance is generally expected from noisy acoustic models produced by methods such as Multi-condition Training (MTR) and the Multi-Model based Speech Recognition (MMSR) framework, compared with using clean HMMs. Motivated by this idea, a method was previously developed that makes use of noisy acoustic models in the VTS algorithm, in which additive noise was adapted for speech feature compensation. In this paper, we modify the previous method to adapt channel noise as well as additive noise. The proposed method was applied to noise-adapted HMMs trained by MTR and MMSR and reduced the word error rate by a relative 6.5% and 7.2%, respectively, in noisy speech recognition experiments on the Aurora 2 database.
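The VTS compensation the abstract refers to is built on the standard log-spectral mismatch function relating clean speech x, additive noise n, and channel h to the observed noisy feature y, together with its first-order (Taylor) linearization. A minimal numpy sketch of that relation follows; the function names and variables are illustrative and not taken from the paper's implementation:

```python
import numpy as np

def vts_mismatch(x, n, h):
    """Standard VTS mismatch in the log-mel domain:
    y = x + h + log(1 + exp(n - x - h)).
    x: clean speech log-spectrum, n: additive noise, h: channel."""
    return x + h + np.log1p(np.exp(n - x - h))

def vts_jacobian(x0, n0, h0):
    """Jacobian dy/dx (also dy/dh) at the expansion point,
    used to linearize the mismatch for model/feature adaptation:
    G = 1 / (1 + exp(n0 - x0 - h0))."""
    return 1.0 / (1.0 + np.exp(n0 - x0 - h0))
```

The two limiting cases behave as expected: when speech dominates (x + h >> n), y approaches x + h and the Jacobian approaches 1; when noise dominates, y approaches n and the Jacobian approaches 0.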


Bibliographic reference. Chung, Yong-Joo (2014): "Noise robust speech recognition based on noise-adapted HMMs using speech feature compensation", in INTERSPEECH-2014, 2754-2758.