M2H-GAN: A GAN-Based Mapping from Machine to Human Transcripts for Speech Understanding

Titouan Parcollet, Mohamed Morchid, Xavier Bost, Georges Linarès


Deep learning is at the core of recent spoken language understanding (SLU) tasks. More precisely, deep neural networks (DNNs) have drastically improved the performance of SLU systems, and numerous architectures have been proposed. In the real-life context of theme identification of telephone conversations, it is common to hold both a manual, human-produced transcript (TRS) and an automatically transcribed (ASR) version of each conversation. Nonetheless, due to production constraints, only the ASR transcripts are considered when building automatic classifiers; TRS transcripts are used solely to measure the performance of ASR systems. Moreover, the classification accuracies recently obtained by DNN-based systems are close to those reached by humans, and it becomes difficult to improve them further by considering only the ASR transcripts. This paper proposes to distil the TRS knowledge available during the training phase into the ASR representation, using a new generative adversarial network called M2H-GAN to generate a TRS-like version of an ASR document, thereby improving theme identification performance.
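The full paper details the actual M2H-GAN architecture; as a toy illustration of the adversarial mapping idea only, the sketch below trains a generator G to map an ASR document vector into the TRS vector space while a discriminator D tries to tell generated vectors apart from true TRS vectors. The linear/logistic parameterisation, dimensions, and learning rate here are illustrative assumptions, not the paper's model.

```python
import numpy as np

# Hypothetical sketch of the machine-to-human adversarial mapping:
# G maps an ASR document vector to a TRS-like vector; D scores how
# likely a vector is to come from a real human (TRS) transcript.
rng = np.random.default_rng(0)
d_asr, d_trs, lr = 16, 16, 0.05

W_g = rng.normal(scale=0.1, size=(d_trs, d_asr))  # generator weights
w_d = rng.normal(scale=0.1, size=d_trs)           # discriminator weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def G(x):   # generator: ASR vector -> TRS-like vector
    return W_g @ x

def D(v):   # discriminator: probability that v is a real TRS vector
    return sigmoid(w_d @ v)

for step in range(200):
    # One (ASR, TRS) vector pair per step; a real system would feed
    # document embeddings of paired transcripts of the same conversation.
    asr = rng.normal(size=d_asr)
    trs = rng.normal(size=d_trs)

    # Discriminator: gradient ascent on log D(trs) + log(1 - D(G(asr)))
    fake = G(asr)
    w_d += lr * ((1.0 - D(trs)) * trs - D(fake) * fake)

    # Generator: gradient ascent on log D(G(asr))
    # (the non-saturating GAN objective)
    fake = G(asr)
    W_g += lr * (1.0 - D(fake)) * np.outer(w_d, asr)

# After training, the generator emits TRS-like vectors in the TRS space.
trs_like = G(rng.normal(size=d_asr))
```

In the paper's setting, the TRS-like output of the generator (rather than the raw ASR representation) would then feed the downstream theme-identification classifier.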


DOI: 10.21437/Interspeech.2019-2662

Cite as: Parcollet, T., Morchid, M., Bost, X., Linarès, G. (2019) M2H-GAN: A GAN-Based Mapping from Machine to Human Transcripts for Speech Understanding. Proc. Interspeech 2019, 804-808, DOI: 10.21437/Interspeech.2019-2662.


@inproceedings{Parcollet2019,
  author={Titouan Parcollet and Mohamed Morchid and Xavier Bost and Georges Linarès},
  title={{M2H-GAN: A GAN-Based Mapping from Machine to Human Transcripts for Speech Understanding}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={804--808},
  doi={10.21437/Interspeech.2019-2662},
  url={http://dx.doi.org/10.21437/Interspeech.2019-2662}
}