Ectc-Docd: An End-to-End Structure with CTC Encoder and OCD Decoder for Speech Recognition

Cheng Yi, Feng Wang, Bo Xu


Real-time streaming speech recognition is required by most applications to deliver a smooth interactive experience. To support online recognition naturally, a common strategy in recently proposed end-to-end models is to add a blank label to the label set and output frame-level alignments instead of the linguistic sequence directly. However, generating the alignment means decoding a sequence much longer than the linguistic sequence itself. Moreover, several blank labels can appear between two output units in the alignment, which hinders the model from learning the dependency between adjacent units in the target sequence. In this work, we propose an innovative encoder-decoder structure, called Ectc-Docd, for online speech recognition, which directly predicts the linguistic sequence without blank labels. Apart from the encoder and decoder, Ectc-Docd contains an additional shrinking layer that drops redundant acoustic information; this layer serves as a bridge between the acoustic representation and the linguistic modelling parts. Through experiments, we confirm that Ectc-Docd outperforms a strong CTC model on online ASR tasks. We also show that Ectc-Docd achieves promising results on both Mandarin and English ASR datasets with first- and second-pass decoding.
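The abstract does not spell out how the shrinking layer selects frames, but one natural reading is that it uses the CTC encoder's per-frame predictions to discard blank frames and merge repeats, so the decoder sees a sequence close to the linguistic length. The sketch below illustrates that idea under stated assumptions: the blank index, the argmax-based selection rule, and the function name `shrink` are all hypothetical, not taken from the paper.

```python
import numpy as np

BLANK = 0  # assumed index of the CTC blank label


def shrink(frame_logits):
    """Collapse per-frame CTC predictions to a short list of frame indices:
    drop frames predicted as blank and merge consecutive repeats, keeping
    the first frame of each non-blank run (an illustrative guess at the
    shrinking rule, not the paper's exact mechanism)."""
    preds = frame_logits.argmax(axis=-1)
    kept = []
    prev = BLANK
    for t, p in enumerate(preds):
        if p != BLANK and p != prev:
            kept.append(t)
        prev = p
    return kept


# Toy example: 8 frames over a 3-symbol vocabulary (0 = blank).
# Per-frame argmax sequence is [0, 1, 1, 0, 0, 2, 2, 0].
logits = np.eye(3)[[0, 1, 1, 0, 0, 2, 2, 0]]
print(shrink(logits))  # -> [1, 5]: one surviving frame per output unit
```

Only the frames at the kept indices would be passed on to the decoder, which is why the decoder no longer needs blank labels and can model adjacent linguistic units directly.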


 DOI: 10.21437/Interspeech.2019-1212

Cite as: Yi, C., Wang, F., Xu, B. (2019) Ectc-Docd: An End-to-End Structure with CTC Encoder and OCD Decoder for Speech Recognition. Proc. Interspeech 2019, 4420-4424, DOI: 10.21437/Interspeech.2019-1212.


@inproceedings{Yi2019,
  author={Cheng Yi and Feng Wang and Bo Xu},
  title={{Ectc-Docd: An End-to-End Structure with CTC Encoder and OCD Decoder for Speech Recognition}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={4420--4424},
  doi={10.21437/Interspeech.2019-1212},
  url={http://dx.doi.org/10.21437/Interspeech.2019-1212}
}