End-to-End Adaptation with Backpropagation Through WFST for On-Device Speech Recognition System

Emiru Tsunoo, Yosuke Kashiwagi, Satoshi Asakawa, Toshiyuki Kumakura


An on-device DNN-HMM speech recognition system works efficiently with a limited vocabulary in the presence of a variety of predictable noise. In such cases, vocabulary and environment adaptation is highly effective. In this paper, we propose a novel method of end-to-end (E2E) adaptation, which adjusts not only an acoustic model (AM) but also a weighted finite-state transducer (WFST). We convert a pretrained WFST to a trainable neural network and adapt the system to target environments/vocabulary by E2E joint training with the AM. We replicate Viterbi decoding with a forward-backward neural network computation similar to that of recurrent neural networks (RNNs). By pooling the output score sequences, a vocabulary posterior for each utterance is obtained and used to compute a discriminative loss. Experiments using 2–10 hours of English/Japanese adaptation data indicate that fine-tuning only the WFST and fine-tuning only the AM are each comparable to a state-of-the-art adaptation method, and that E2E joint training of the two components achieves the best recognition performance. We also adapt each language system to the other language using the adaptation data, and the results show that the proposed method also works well for language adaptation.
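The paper gives the exact formulation; purely as an illustration of the core idea (WFST transition weights treated as trainable parameters, an RNN-like forward recursion in place of Viterbi decoding, and pooling of the resulting scores into an utterance-level vocabulary posterior), a minimal NumPy sketch might look like the following. All dimensions, score matrices, and the soft log-sum-exp relaxation of the Viterbi max are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def logsumexp(x, axis):
    """Numerically stable log-sum-exp along one axis."""
    m = x.max(axis=axis, keepdims=True)
    return (m + np.log(np.exp(x - m).sum(axis=axis, keepdims=True))).squeeze(axis)

# Hypothetical tiny decoding graph: S states, T frames, V vocabulary entries.
S, T, V = 4, 6, 3
rng = np.random.default_rng(0)
trans = rng.normal(size=(S, S))      # transition log-weights from the pretrained WFST (trainable)
am_scores = rng.normal(size=(T, S))  # per-frame AM log-scores mapped onto graph states (assumed)
final = rng.normal(size=(V, S))      # per-vocabulary-entry final-state scores (assumed)

# RNN-like forward recursion (soft Viterbi):
#   alpha[t, s'] = logsumexp_s(alpha[t-1, s] + trans[s, s']) + am_scores[t, s']
alpha = am_scores[0].copy()
for t in range(1, T):
    alpha = logsumexp(alpha[:, None] + trans, axis=0) + am_scores[t]

# Pool the final score sequence into one score per vocabulary entry,
# then normalize into an utterance-level posterior for a discriminative loss.
utt_scores = logsumexp(final + alpha[None, :], axis=1)      # shape (V,)
posterior = np.exp(utt_scores - logsumexp(utt_scores, axis=0))
```

Because every step is differentiable, a cross-entropy loss on `posterior` would backpropagate into both `trans` (the WFST side) and `am_scores` (the AM side), which is the joint-training setup the abstract describes.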


DOI: 10.21437/Interspeech.2019-1880

Cite as: Tsunoo, E., Kashiwagi, Y., Asakawa, S., Kumakura, T. (2019) End-to-End Adaptation with Backpropagation Through WFST for On-Device Speech Recognition System. Proc. Interspeech 2019, 764-768, DOI: 10.21437/Interspeech.2019-1880.


@inproceedings{Tsunoo2019,
  author={Emiru Tsunoo and Yosuke Kashiwagi and Satoshi Asakawa and Toshiyuki Kumakura},
  title={{End-to-End Adaptation with Backpropagation Through WFST for On-Device Speech Recognition System}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={764--768},
  doi={10.21437/Interspeech.2019-1880},
  url={http://dx.doi.org/10.21437/Interspeech.2019-1880}
}