This paper describes how speech recognition in the presence of F-16 jet cockpit noise can be performed using a sequence of three units: an auditory model and two neural models. A method for noise reduction in the cepstral domain, based on a self-structuring universal approximator, is proposed and tested on a large database of isolated words contaminated with jet noise. This approach is a potential alternative to traditional recognition methods for noisy speech and, like the system in [1], performs noise reduction within a three-model structure. The first model performs a spectral analysis of the input speech signal. The second model is a Self-structuring Neural Noise Reduction (SNNR) model, which is an alternative to the noise reduction model [1] presented at ICASSP-91. The noise-reduced output of the SNNR network is propagated through the speech recognizer, which consists of a set of Hidden Control Neural Networks (HCNNs).
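The abstract only names the three stages, so the sketch below is merely an illustration of how such a pipeline could be wired together: per-frame cepstral features, a feedforward network standing in for the SNNR noise-reduction stage, and one predictive network per word in the spirit of an HCNN recognizer. All architectural details (layer sizes, the feature extraction, the self-structuring procedure, the hidden control mechanism, and the scoring rule) are assumptions and are not taken from the paper.

```python
# Illustrative sketch of the three-stage pipeline described in the abstract.
# Layer sizes, the noise-reduction architecture, and the recognition scoring
# rule are assumptions, not details from the paper.
import numpy as np


def cepstral_features(frame, n_coeff=12):
    """Stage 1 (assumed): real cepstrum of one speech frame."""
    spectrum = np.abs(np.fft.rfft(frame)) + 1e-8
    cepstrum = np.fft.irfft(np.log(spectrum))
    return cepstrum[:n_coeff]


class NoiseReductionNet:
    """Stage 2 (assumed form): a feedforward net mapping noisy cepstra to
    estimates of clean cepstra, standing in for the SNNR model."""

    def __init__(self, n_in=12, n_hidden=24):
        rng = np.random.default_rng(0)
        self.W1 = rng.standard_normal((n_hidden, n_in)) * 0.1
        self.W2 = rng.standard_normal((n_in, n_hidden)) * 0.1

    def denoise(self, c_noisy):
        h = np.tanh(self.W1 @ c_noisy)
        return self.W2 @ h  # estimated clean cepstral vector


class WordPredictorNet:
    """Stage 3 (assumed scoring): one predictive network per word; the word
    whose model predicts the frame sequence with the lowest error wins."""

    def __init__(self, n_in=12, n_hidden=16):
        rng = np.random.default_rng(1)
        self.W1 = rng.standard_normal((n_hidden, n_in)) * 0.1
        self.W2 = rng.standard_normal((n_in, n_hidden)) * 0.1

    def prediction_error(self, frames):
        err = 0.0
        for x, target in zip(frames[:-1], frames[1:]):
            pred = self.W2 @ np.tanh(self.W1 @ x)
            err += np.sum((pred - target) ** 2)
        return err


def recognize(signal, frame_len, word_models, nr_net):
    """Run the full (hypothetical) pipeline on one isolated-word utterance."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, frame_len)]
    cepstra = [nr_net.denoise(cepstral_features(f)) for f in frames]
    scores = {w: m.prediction_error(cepstra) for w, m in word_models.items()}
    return min(scores, key=scores.get)
```

In this sketch the recognizer is purely predictive and untrained; in the paper the SNNR stage is trained to map noise-contaminated cepstra toward clean ones before the HCNN recognizer sees them, which is the property the example is meant to make concrete.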
Cite as: Sorensen, H.B.D., Hartmann, U. (1991) A self-structuring neural noise reduction model. Proc. 2nd European Conference on Speech Communication and Technology (Eurospeech 1991), 567-570, doi: 10.21437/Eurospeech.1991-141
@inproceedings{sorensen91_eurospeech,
  author={Helge B. D. Sorensen and Uwe Hartmann},
  title={{A self-structuring neural noise reduction model}},
  year=1991,
  booktitle={Proc. 2nd European Conference on Speech Communication and Technology (Eurospeech 1991)},
  pages={567--570},
  doi={10.21437/Eurospeech.1991-141}
}