Machine Listening in Multisource Environments (CHiME) 2011

Florence, Italy
September 1, 2011

Using the FASST Source Separation Toolbox for Noise Robust Speech Recognition

Alexey Ozerov, Emmanuel Vincent

INRIA, Centre de Rennes - Bretagne Atlantique, France

We describe our submission to the 2011 CHiME Speech Separation and Recognition Challenge. Our speech separation algorithm was built using the Flexible Audio Source Separation Toolbox (FASST) that we developed recently. This toolbox implements a general flexible framework based on a library of structured source models, which enables the incorporation of prior knowledge about a source separation problem via user-specifiable constraints. We show how to use FASST to develop an efficient speech separation algorithm for the CHiME dataset. We also describe the acoustic model training and adaptation strategies used for this submission. Altogether, compared to the baseline system, we improve keyword recognition accuracy in all conditions. The largest improvement, about 40%, is achieved in the worst condition of -6 dB Signal-to-Noise Ratio (SNR), where 18% of this improvement is due to the speech separation. The improvement decreases as the SNR increases. These results indicate that audio source separation can be very helpful for improving speech recognition in noisy or multi-source environments.

Index Terms: speech separation, source separation, general flexible framework, noise robust speech recognition

Bibliographic reference. Ozerov, Alexey / Vincent, Emmanuel (2011): "Using the FASST source separation toolbox for noise robust speech recognition", in CHiME-2011, 86-87.