INTERSPEECH 2010
11th Annual Conference of the International Speech Communication Association

Makuhari, Chiba, Japan
September 26-30, 2010

On Speaker Adaptive Training of Artificial Neural Networks

Jan Trmal, Jan Zelinka, Luděk Müller

University of West Bohemia, Czech Republic

In this paper, we present two techniques that improve the recognition accuracy of multilayer perceptron artificial neural networks (MLP ANNs) by adopting Speaker Adaptive Training (SAT). MLP ANNs, usually in combination with the TRAPS parametrization, are applied in speech recognition tasks, discriminative feature production, and elsewhere. In the first SAT experiments, we used vocal tract length normalization (VTLN) as the speaker normalization technique. Moreover, we developed a novel speaker normalization technique called the Minimum Error Linear Transform (MELT), which resembles the cMLLR/fMLLR method in that it can be applied either to the model or to the features. We tested both methods extensively on the SpeechDat-East telephone speech corpus. The results of these experiments suggest that incorporating SAT into the MLP ANN training process is beneficial and, depending on the setup, leads to a significant decrease in phoneme error rate (3 % to 8 % absolute, 12 % to 25 % relative).
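The abstract describes MELT only at a high level: a per-speaker linear transform trained under a minimum-error criterion, applicable to the features or the model in the spirit of cMLLR/fMLLR. As a rough illustration of what SAT of an MLP with per-speaker affine feature transforms can look like, a minimal PyTorch sketch follows; the module names, dimensions, optimizer choice, and alternating update schedule are assumptions made for illustration, not the authors' implementation.

    import torch
    import torch.nn as nn

    FEAT_DIM, HIDDEN, N_PHONES, N_SPK = 39, 256, 40, 10  # assumed sizes

    class SpeakerTransform(nn.Module):
        """Per-speaker affine transform y = A x + b, initialized to the
        identity (a feature-side linear transform, MELT/fMLLR style)."""
        def __init__(self, dim):
            super().__init__()
            self.A = nn.Parameter(torch.eye(dim))
            self.b = nn.Parameter(torch.zeros(dim))

        def forward(self, x):
            return x @ self.A.T + self.b

    # Shared speaker-independent MLP phoneme classifier.
    mlp = nn.Sequential(
        nn.Linear(FEAT_DIM, HIDDEN), nn.Sigmoid(),
        nn.Linear(HIDDEN, N_PHONES),
    )
    transforms = nn.ModuleList([SpeakerTransform(FEAT_DIM) for _ in range(N_SPK)])
    loss_fn = nn.CrossEntropyLoss()  # frame-level minimum-error style criterion

    def sat_epoch(batches, lr=1e-3):
        """One SAT pass: alternately update the current speaker's transform
        and the shared MLP, both under the same error criterion. Plain SGD
        is stateless, so re-creating the optimizers per batch is harmless
        in this sketch."""
        for feats, labels, spk in batches:  # feats: (B, FEAT_DIM), spk: int id
            # Step 1: adapt only this speaker's transform; MLP weights fixed.
            opt_t = torch.optim.SGD(transforms[spk].parameters(), lr=lr)
            opt_t.zero_grad()
            loss_fn(mlp(transforms[spk](feats)), labels).backward()
            opt_t.step()
            # Step 2: update the shared MLP on speaker-normalized features.
            opt_m = torch.optim.SGD(mlp.parameters(), lr=lr)
            opt_m.zero_grad()
            loss_fn(mlp(transforms[spk](feats)), labels).backward()
            opt_m.step()

    # Toy run on random frames, purely to show the call pattern.
    batches = [(torch.randn(32, FEAT_DIM),
                torch.randint(0, N_PHONES, (32,)),
                s % N_SPK) for s in range(20)]
    sat_epoch(batches)

At test time, one would re-estimate only the target speaker's transform while keeping the shared MLP fixed, mirroring how fMLLR-style SAT systems are typically deployed.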


Bibliographic reference.  Trmal, Jan / Zelinka, Jan / Müller, Luděk (2010): "On speaker adaptive training of artificial neural networks", In INTERSPEECH-2010, 554-557.