Ninth International Conference on Spoken Language Processing

Pittsburgh, PA, USA
September 17-21, 2006

CASA Based Speech Separation for Robust Speech Recognition

Runqiang Han, Pei Zhao, Qin Gao, Zhiping Zhang, Hao Wu, Xihong Wu

Peking University, China

This paper introduces a speech separation system as a front-end processing step for automatic speech recognition (ASR). It employs computational auditory scene analysis (CASA) to separate the target speech from the interfering speech. Specifically, the mixed speech is first preprocessed with an auditory peripheral model. Pitch tracking is then performed, and the dominant pitch is used as the main cue for locating the target speech. Next, the time-frequency (T-F) units are merged into segments, which are combined into streams via CASA initial grouping. A regrouping strategy refines these streams using amplitude modulation (AM) cues, and the streams are finally assigned to their corresponding speakers by speaker recognition techniques. Lastly, the output streams are reconstructed to compensate for the data discarded in the preceding processing steps, using cluster-based feature reconstruction. ASR experiments show that at low target-to-masker ratios (TMR < -6 dB) the proposed method offers significantly higher recognition accuracy.
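Two of the ingredients described above can be illustrated in a toy sketch: autocorrelation-based dominant-pitch estimation, and an ideal binary time-frequency mask that keeps a T-F unit when the target's energy dominates the interference's. This is illustrative code only, not the authors' system; the framing parameters and the use of an STFT in place of an auditory filterbank are my own simplifying assumptions.

```python
import numpy as np

def frame_signal(x, frame_len=256, hop=128):
    """Slice a 1-D signal into overlapping frames (rows)."""
    n = 1 + (len(x) - frame_len) // hop
    return np.stack([x[i * hop:i * hop + frame_len] for i in range(n)])

def dominant_pitch(frame, sr, fmin=80.0, fmax=400.0):
    """Estimate the dominant pitch of one frame via autocorrelation.

    Searches for the strongest autocorrelation peak whose lag lies in
    the plausible pitch range [fmin, fmax] and returns sr / lag in Hz.
    """
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + np.argmax(ac[lo:hi])
    return sr / lag

def ideal_binary_mask(tgt_frames, intf_frames):
    """Oracle T-F mask: 1 where target magnitude exceeds interference.

    Real CASA systems must *estimate* such a mask from cues like pitch
    and AM; with the clean sources available, the ideal mask is direct.
    """
    T = np.abs(np.fft.rfft(tgt_frames, axis=1))
    I = np.abs(np.fft.rfft(intf_frames, axis=1))
    return (T > I).astype(float)

# Synthetic demo: a 150 Hz "target" mixed with a 1000 Hz "interferer".
sr = 8000
t = np.arange(sr) / sr
tgt = np.sin(2 * np.pi * 150 * t)
intf = 0.8 * np.sin(2 * np.pi * 1000 * t)

frames_tgt = frame_signal(tgt)
pitch = dominant_pitch(frames_tgt[10], sr)   # close to 150 Hz
mask = ideal_binary_mask(frames_tgt, frame_signal(intf))
```

With a 256-sample frame at 8 kHz, the 1000 Hz interferer falls exactly on FFT bin 32, so the mask is zero there, while the bins around the 150 Hz target stay on; masked units are what the paper's later reconstruction stage treats as missing data to be filled in.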

Full Paper

Bibliographic reference. Han, Runqiang / Zhao, Pei / Gao, Qin / Zhang, Zhiping / Wu, Hao / Wu, Xihong (2006): "CASA based speech separation for robust speech recognition", in INTERSPEECH-2006, paper 2068-Mon1WeS.2.