This paper describes an automatic speech recognition front-end that combines low-level robust ASR feature extraction techniques with higher-level linear and non-linear feature transformations. The low-level algorithms use data-derived filters, mean and variance normalization of the feature vectors, and dropping of noise frames. The feature vectors are then linearly transformed using Principal Components Analysis (PCA). An Artificial Neural Network (ANN) is also used to compute features that are useful for classification of speech sounds; it is trained for phoneme probability estimation on a large corpus of noisy speech. These transformations yield two feature streams whose vectors are concatenated and then used for speech recognition. The method was tested on the set of speech corpora used for the Aurora evaluation. Using the feature stream generated without the ANN yields an overall 41% reduction in error rate relative to Mel-Frequency Cepstral Coefficient (MFCC) reference features. Adding the ANN stream further reduces the error rate, yielding a 46% reduction over the reference features.
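The pipeline in the abstract can be illustrated with a minimal NumPy sketch: per-utterance mean/variance normalization, a PCA projection estimated from the data, and concatenation with a second feature stream. All dimensions here are illustrative, and the random array standing in for the ANN posterior stream is a placeholder, not the paper's trained network.

```python
import numpy as np

def mean_variance_normalize(feats):
    # Per-utterance mean and variance normalization of the feature
    # vectors, as in the low-level processing described above (sketch).
    mu = feats.mean(axis=0)
    sigma = feats.std(axis=0) + 1e-8  # guard against zero variance
    return (feats - mu) / sigma

def pca_transform(feats, n_components):
    # Estimate a PCA basis from the data and project onto the leading
    # components (sketch; in practice the projection would be trained
    # on a large corpus, not the test utterance itself).
    centered = feats - feats.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]          # sort by descending variance
    basis = eigvecs[:, order[:n_components]]
    return centered @ basis

# Toy data: 100 frames of 13-dimensional spectral features, plus a
# hypothetical 8-dimensional ANN-derived stream (random stand-in).
rng = np.random.default_rng(0)
spectral = rng.normal(size=(100, 13))
ann_stream = rng.normal(size=(100, 8))

normalized = mean_variance_normalize(spectral)
pca_stream = pca_transform(normalized, n_components=10)

# The two streams are concatenated frame by frame before recognition.
combined = np.concatenate([pca_stream, ann_stream], axis=1)
print(combined.shape)  # (100, 18)
```

The concatenation step is what produces the combined feature vector evaluated in the paper's second condition (PCA stream plus ANN stream).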
Cite as: Benitez, M.C., Burget, L., Chen, B., Dupont, S., Garudadri, H., Hermansky, H., Jain, P., Kajarekar, S., Morgan, N., Sivadas, S. (2001) Robust ASR front-end using spectral-based and discriminant features: experiments on the Aurora tasks. Proc. 7th European Conference on Speech Communication and Technology (Eurospeech 2001), 429-432, doi: 10.21437/Eurospeech.2001-115
@inproceedings{benitez01_eurospeech,
  author={M. Carmen Benitez and Lukas Burget and Barry Chen and Stephane Dupont and Hari Garudadri and Hynek Hermansky and Pratibha Jain and Sachin Kajarekar and Nelson Morgan and Sunil Sivadas},
  title={{Robust ASR front-end using spectral-based and discriminant features: experiments on the Aurora tasks}},
  year=2001,
  booktitle={Proc. 7th European Conference on Speech Communication and Technology (Eurospeech 2001)},
  pages={429--432},
  doi={10.21437/Eurospeech.2001-115}
}