End-to-End Speech-to-Dialog-Act Recognition

Viet-Trung Dang, Tianyu Zhao, Sei Ueno, Hirofumi Inaguma, Tatsuya Kawahara


Spoken language understanding (SLU), which extracts intents and/or semantic concepts from utterances, is conventionally formulated as a post-processing step after automatic speech recognition (ASR). It is usually trained with oracle transcripts, but must cope with ASR errors at inference time. Moreover, some acoustic features are related to intents but are not represented in the transcripts. In this paper, we present an end-to-end model that directly converts speech into dialog acts (DAs) without an intermediate deterministic transcription step. In the proposed model, the dialog act recognition network is connected to an acoustic-to-word ASR model at its latent layer before the softmax layer, which provides a distributed representation of word-level ASR decoding information. The entire network is then fine-tuned in an end-to-end manner, which allows for stable training as well as robustness against ASR errors. The model is further extended to conduct DA segmentation jointly. Evaluations on the Switchboard corpus demonstrate that the proposed method significantly improves dialog act recognition accuracy over the conventional pipeline framework.
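The coupling described in the abstract, feeding the ASR decoder's pre-softmax latent states into a DA classifier rather than discrete word hypotheses, can be sketched as follows. All dimensions, the random weights, and the mean-pooling choice are illustrative assumptions for this sketch, not the paper's actual configuration:

```python
import numpy as np

# Hypothetical dimensions (assumptions, not taken from the paper):
# T decoded words, hidden size H, vocabulary size V, D dialog-act tags.
T, H, V, D = 5, 8, 20, 4

rng = np.random.default_rng(0)

# Latent states of an acoustic-to-word ASR decoder, one per decoded word,
# taken *before* its softmax layer -- a distributed representation of
# word-level decoding information.
h = rng.standard_normal((T, H))

# The ASR output layer would map h to word posteriors via softmax(h @ W_vocab);
# the DA branch bypasses this deterministic transcription step entirely.
W_vocab = rng.standard_normal((H, V))

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# DA recognition branch: pool the latent sequence, then classify.
# Mean pooling is an illustrative choice; the paper's network may differ.
W_da = rng.standard_normal((H, D))
pooled = h.mean(axis=0)                # (H,) utterance-level summary
da_posterior = softmax(pooled @ W_da)  # (D,) distribution over DA tags
```

In end-to-end fine-tuning, the gradient of the DA loss would flow through `h` back into the ASR encoder-decoder, so the latent representation adapts to the DA task instead of being frozen at the 1-best transcript.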


 DOI: 10.21437/Interspeech.2020-1062

Cite as: Dang, V., Zhao, T., Ueno, S., Inaguma, H., Kawahara, T. (2020) End-to-End Speech-to-Dialog-Act Recognition. Proc. Interspeech 2020, 3910-3914, DOI: 10.21437/Interspeech.2020-1062.


@inproceedings{Dang2020,
  author={Viet-Trung Dang and Tianyu Zhao and Sei Ueno and Hirofumi Inaguma and Tatsuya Kawahara},
  title={{End-to-End Speech-to-Dialog-Act Recognition}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={3910--3914},
  doi={10.21437/Interspeech.2020-1062},
  url={http://dx.doi.org/10.21437/Interspeech.2020-1062}
}