15th Annual Conference of the International Speech Communication Association

September 14-18, 2014

Feature Space Maximum a posteriori Linear Regression for Adaptation of Deep Neural Networks

Zhen Huang (1), Jinyu Li (2), Sabato Marco Siniscalchi (1), I-Fan Chen (1), Chao Weng (1), Chin-Hui Lee (1)

(1) Georgia Institute of Technology, USA
(2) Microsoft, USA

We propose a feature space maximum a posteriori (MAP) linear regression framework to adapt the parameters of context-dependent deep neural network hidden Markov models (CD-DNN-HMMs). Because of the huge number of parameters in DNN acoustic models for large vocabulary continuous speech recognition, over-fitting can be severe in DNN adaptation and often impairs the robustness of the adapted DNN model. The linear input network (LIN), a straightforward feature space adaptation method for DNNs analogous to feature space maximum likelihood linear regression (fMLLR), can suffer from the same robustness problem. The proposed adaptation framework is based on MAP estimation of the LIN parameters, incorporating prior knowledge into the adaptation process. Experimental results on the Switchboard task show that, against speaker independent CD-DNN-HMM systems, LIN provides a 4.28% relative word error rate reduction (WERR), and the proposed fMAPLIN method provides a further 1.15% WERR (5.43% in total) on top of LIN.
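To make the core idea concrete, the following is a minimal sketch, not the paper's implementation: a LIN is a linear transform inserted at the network input and initialized to the identity, and MAP estimation adds a Gaussian prior that pulls the adapted transform back toward that prior (here the identity), which acts as an L2 penalty against over-fitting. The toy squared-error objective stands in for the DNN's training criterion, and the function name, `tau`, and all other parameters are illustrative assumptions.

```python
import numpy as np


def adapt_lin_map(X, Y, tau=1.0, lr=0.05, steps=2000):
    """Toy MAP-regularized linear input transform estimation.

    Minimizes, by gradient descent,
        L(W, b) = mean ||W x + b - y||^2  +  tau * (||W - I||_F^2 + ||b||^2),
    where the tau-weighted term plays the role of the Gaussian prior in
    MAP estimation: tau = 0 gives plain (ML-style) LIN adaptation, while
    a large tau keeps the transform close to the identity prior.
    """
    n, d = X.shape
    W = np.eye(d)        # LIN is initialized to the identity transform
    b = np.zeros(d)
    for _ in range(steps):
        R = X @ W.T + b - Y                                  # residuals, (n, d)
        gW = 2.0 * R.T @ X / n + 2.0 * tau * (W - np.eye(d))  # data grad + prior grad
        gb = 2.0 * R.mean(axis=0) + 2.0 * tau * b
        W -= lr * gW
        b -= lr * gb
    return W, b
```

With `tau = 0` the estimate drifts freely toward whatever fits the adaptation data; increasing `tau` shrinks the estimate toward the identity prior, which is the robustness mechanism the abstract attributes to the MAP formulation.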


Bibliographic reference.  Huang, Zhen / Li, Jinyu / Siniscalchi, Sabato Marco / Chen, I-Fan / Weng, Chao / Lee, Chin-Hui (2014): "Feature space maximum a posteriori linear regression for adaptation of deep neural networks", In INTERSPEECH-2014, 2992-2996.