Interspeech'2005 - Eurospeech

Lisbon, Portugal
September 4-8, 2005

Robust Distant Speech Recognition Based on Position Dependent CMN Using a Novel Multiple Microphone Processing Technique

Longbiao Wang, Norihide Kitaoka, Seiichi Nakagawa

Toyohashi University of Technology, Japan

In a distant environment, channel distortion may drastically degrade speech recognition performance. In this paper, we propose a robust multiple-microphone speech processing approach based on position-dependent Cepstral Mean Normalization (CMN). In the training stage, the system measures the transmission characteristics at a set of grid points in the room, according to the speaker position, and estimates the compensation parameters a priori. In the recognition stage, the system estimates the speaker position, adopts the compensation parameters corresponding to that estimated position, applies CMN to the speech, and performs speech recognition for each microphone. Finally, the maximum vote or the maximum summed likelihood over all channels (that is, all microphones) is used to obtain the final result. In our proposed method, for convenience we use utterances emitted from a loudspeaker located at various positions to estimate the compensation parameters, and we also compensate for the mismatch between the cepstral means of utterances spoken by humans and those emitted from the loudspeaker. Our experiments showed that the proposed method efficiently improved the performance of the speech recognition system in a distant environment and also compensated well for the mismatch between human voices and loudspeaker output.
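As a rough illustration of the pipeline described above, the following sketch shows position-dependent CMN followed by the two channel-combination rules (maximum vote and maximum summed likelihood). All function and variable names here are hypothetical, the per-channel recognizer is stubbed out as precomputed hypotheses and scores, and the position-dependent compensation means are assumed to have been estimated beforehand at the nearest grid point; this is not the authors' implementation.

```python
from collections import Counter
import numpy as np

def apply_position_dependent_cmn(cepstra, compensation_mean):
    """Subtract a position-dependent cepstral mean.
    cepstra: (frames, dims) array of cepstral features from one microphone.
    compensation_mean: (dims,) mean estimated a priori for the grid point
    nearest to the (estimated) speaker position."""
    return cepstra - compensation_mean

def combine_by_vote(hypotheses):
    """Maximum vote: pick the hypothesis recognized by the most channels."""
    return Counter(hypotheses).most_common(1)[0][0]

def combine_by_summed_likelihood(per_channel_scores):
    """Maximum summed likelihood: sum each hypothesis's log-likelihood
    over all channels and pick the best total.
    per_channel_scores: list of {hypothesis: log_likelihood} dicts."""
    totals = {}
    for scores in per_channel_scores:
        for hyp, ll in scores.items():
            totals[hyp] = totals.get(hyp, 0.0) + ll
    return max(totals, key=totals.get)

# Toy example: three microphones, each producing a hypothesis and scores.
hyps = ["hello", "hello", "hallo"]
scores = [{"hello": -10.0, "hallo": -12.0},
          {"hello": -11.0, "hallo": -9.0},
          {"hello": -10.0, "hallo": -10.5}]
print(combine_by_vote(hyps))                    # most channels agree
print(combine_by_summed_likelihood(scores))     # best total score wins
```

The vote rule needs only the 1-best string per channel, while the summed-likelihood rule requires comparable likelihood scores across microphones, which the position-dependent CMN helps ensure.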


Bibliographic reference.  Wang, Longbiao / Kitaoka, Norihide / Nakagawa, Seiichi (2005): "Robust distant speech recognition based on position dependent CMN using a novel multiple microphone processing technique", In INTERSPEECH-2005, 2661-2664.