EUROSPEECH 2003 - INTERSPEECH 2003
The use of prior knowledge in machine learning techniques has been shown to give better generalisation performance on unseen data. However, this idea has so far not been investigated for robust ASR. Training several related tasks simultaneously is also known as multitask learning (MTL): the extra tasks effectively incorporate prior knowledge. In this work we present an application of MTL to robust ASR. We use a recurrent neural network (RNN) architecture to integrate classification and enhancement of noisy speech in an MTL framework, with enhancement serving as an extra task to obtain better recognition performance on unseen data. We report our results on an isolated word recognition task. The reductions in error rate relative to multicondition training with HMMs were 53.37%, 21.99%, 37.01% and 44.13% for subway, babble, car and exhibition noise respectively.
Bibliographic reference: Parveen, Shahla / Green, Phil (2003): "Multitask learning in connectionist robust ASR using recurrent neural networks", in EUROSPEECH-2003, 1813-1816.
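To make the abstract's architecture concrete, here is a minimal sketch (not the authors' implementation) of an MTL recurrent network: a shared recurrent layer feeds two heads, one classifying the word (the main task) and one reconstructing the clean speech features from the noisy input (the extra enhancement task). The feature dimension, hidden size, number of word classes and the loss weighting are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

class MTLRecognizer(nn.Module):
    """Sketch of an MTL RNN: shared recurrence, classification + enhancement heads."""
    def __init__(self, feat_dim=39, hidden_dim=128, num_words=11):  # assumed sizes
        super().__init__()
        # Shared recurrent representation of the noisy feature sequence.
        self.rnn = nn.RNN(feat_dim, hidden_dim, batch_first=True)
        # Main task: word classification from the final hidden state.
        self.classifier = nn.Linear(hidden_dim, num_words)
        # Extra task: frame-wise enhancement (predict clean features).
        self.enhancer = nn.Linear(hidden_dim, feat_dim)

    def forward(self, noisy_feats):
        # noisy_feats: (batch, time, feat_dim)
        hidden_seq, last_hidden = self.rnn(noisy_feats)
        word_logits = self.classifier(last_hidden[-1])   # (batch, num_words)
        enhanced = self.enhancer(hidden_seq)             # (batch, time, feat_dim)
        return word_logits, enhanced

# Joint training objective: classification loss plus an enhancement loss,
# weighted by an assumed factor of 0.5.
model = MTLRecognizer()
noisy = torch.randn(8, 50, 39)          # toy batch: 8 utterances, 50 frames
clean = torch.randn(8, 50, 39)          # corresponding clean features
labels = torch.randint(0, 11, (8,))     # word identities
logits, enhanced = model(noisy)
loss = nn.functional.cross_entropy(logits, labels) \
       + 0.5 * nn.functional.mse_loss(enhanced, clean)
loss.backward()
```

At recognition time only the classification head is needed; the enhancement head exists to shape the shared recurrent representation during training, which is the MTL effect the abstract describes.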