Deep Activation Mixture Model for Speech Recognition

Chunyang Wu, Mark J.F. Gales

Deep learning approaches achieve state-of-the-art performance in a range of applications, including speech recognition. However, the parameters of the deep neural network (DNN) are hard to interpret, which makes regularisation and adaptation to speaker or acoustic conditions challenging. This paper proposes the deep activation mixture model (DAMM) to address these problems. The output of one hidden layer is modelled as the sum of a mixture model and a residual model. The mixture model forms an activation-function contour, while the residual model captures fluctuations around that contour. The mixture model offers two advantages: first, it introduces a novel regularisation on the DNN; second, it enables novel adaptation schemes. The proposed approach is evaluated on a large-vocabulary U.S. English broadcast news task. It yields slightly better performance than the DNN baselines, and with utterance-level unsupervised adaptation, the adapted DAMM achieves further performance gains.
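One plausible reading of the layer described above can be sketched as follows: each hidden unit sits at a fixed position, a small Gaussian mixture evaluated at those positions supplies a smooth activation contour, and an ordinary affine transform supplies the residual term. All function and variable names below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def damm_layer(x, W_res, b_res, mix_weights, mix_means, mix_vars, positions):
    """Illustrative sketch of a DAMM hidden layer: each unit's pre-activation
    is a mixture-model 'contour' (a Gaussian mixture evaluated at the unit's
    position) plus a residual affine term. Names and the choice of sigmoid
    nonlinearity are assumptions for illustration."""
    # Mixture contour: K Gaussians evaluated over the H fixed unit positions.
    # positions: (H,); mix_weights, mix_means, mix_vars: (K,)
    diff = positions[None, :] - mix_means[:, None]                      # (K, H)
    contour = (mix_weights[:, None]
               * np.exp(-0.5 * diff**2 / mix_vars[:, None])).sum(axis=0)  # (H,)
    # Residual term: a standard affine transform of the layer input.
    residual = W_res @ x + b_res                                        # (H,)
    # Combined pre-activation passed through a sigmoid.
    z = contour + residual
    return 1.0 / (1.0 + np.exp(-z))
```

Under this reading, adaptation can update only the few mixture parameters (weights, means, variances) per speaker or utterance, leaving the large residual weight matrix fixed, which is what makes the scheme attractive for unsupervised adaptation.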

DOI: 10.21437/Interspeech.2017-1233

Cite as: Wu, C., Gales, M.J.F. (2017) Deep Activation Mixture Model for Speech Recognition. Proc. Interspeech 2017, 1611-1615, DOI: 10.21437/Interspeech.2017-1233.

@inproceedings{wu2017damm,
  author={Chunyang Wu and Mark J.F. Gales},
  title={Deep Activation Mixture Model for Speech Recognition},
  year={2017},
  booktitle={Proc. Interspeech 2017},
  pages={1611--1615},
  doi={10.21437/Interspeech.2017-1233}
}