Improving Mandarin Tone Recognition Using Convolutional Bidirectional Long Short-Term Memory with Attention

Longfei Yang, Yanlu Xie, Jinsong Zhang


Automatic tone recognition is useful for Mandarin spoken language processing. However, the complex F0 variations caused by tone co-articulation and the interplay among tones make tone recognition in continuous Chinese speech rather difficult. This paper explored the application of Bidirectional Long Short-Term Memory (BLSTM), which is well suited to modeling time series, to Mandarin tone recognition in order to handle tone variations in continuous speech. In addition, we introduced an attention mechanism to guide the model in selecting suitable context information. Experimental results showed that the proposed CNN-BLSTM with attention performed best, achieving a tone error rate (TER) of 9.30%, a 17.6% relative error reduction over the DNN baseline system's TER of 11.28%. This demonstrates that the proposed model handles complex F0 variations more effectively than the other models.
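The abstract names a CNN front-end, a BLSTM, and an attention mechanism, but gives no architectural details. The sketch below is one plausible reading of that pipeline in PyTorch; the layer sizes, the additive-attention form, and the five-class tone inventory (four lexical tones plus neutral) are illustrative assumptions, not the paper's actual configuration.

```python
# Hypothetical sketch of a CNN-BLSTM-with-attention tone classifier.
# All hyperparameters are assumptions; the paper does not specify them here.
import torch
import torch.nn as nn


class CNNBLSTMAttention(nn.Module):
    def __init__(self, n_feats=40, n_tones=5, hidden=128):
        super().__init__()
        # CNN front-end: extracts local spectro-temporal patterns from
        # frame-level acoustic features (e.g. F0/spectral features).
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d((1, 2)),  # pool along the feature axis only
        )
        # BLSTM models the frame sequence in both directions.
        self.blstm = nn.LSTM(32 * (n_feats // 2), hidden,
                             batch_first=True, bidirectional=True)
        # Additive attention: scores each frame's BLSTM output so the
        # model can weight the most relevant context frames.
        self.attn = nn.Linear(2 * hidden, 1)
        self.out = nn.Linear(2 * hidden, n_tones)

    def forward(self, x):                      # x: (batch, frames, n_feats)
        h = self.conv(x.unsqueeze(1))          # (B, 32, T, n_feats // 2)
        b, c, t, f = h.shape
        h = h.permute(0, 2, 1, 3).reshape(b, t, c * f)
        h, _ = self.blstm(h)                   # (B, T, 2 * hidden)
        w = torch.softmax(self.attn(h), dim=1) # (B, T, 1) frame weights
        ctx = (w * h).sum(dim=1)               # attention-weighted context
        return self.out(ctx)                   # tone logits per utterance


model = CNNBLSTMAttention()
logits = model(torch.randn(2, 50, 40))  # 2 utterances, 50 frames, 40 features
print(tuple(logits.shape))              # one logit vector of 5 tones each
```

In this reading, attention collapses the variable-length frame sequence into a single context vector per tonal unit, which is what lets the classifier focus on the frames where the co-articulated F0 contour is most informative.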


 DOI: 10.21437/Interspeech.2018-2561

Cite as: Yang, L., Xie, Y., Zhang, J. (2018) Improving Mandarin Tone Recognition Using Convolutional Bidirectional Long Short-Term Memory with Attention. Proc. Interspeech 2018, 352-356, DOI: 10.21437/Interspeech.2018-2561.


@inproceedings{Yang2018,
  author={Longfei Yang and Yanlu Xie and Jinsong Zhang},
  title={Improving Mandarin Tone Recognition Using Convolutional Bidirectional Long Short-Term Memory with Attention},
  year={2018},
  booktitle={Proc. Interspeech 2018},
  pages={352--356},
  doi={10.21437/Interspeech.2018-2561},
  url={http://dx.doi.org/10.21437/Interspeech.2018-2561}
}