Semi-supervised Cross-domain Visual Feature Learning for Audio-Visual Broadcast Speech Transcription

Rongfeng Su, Xunying Liu, Lan Wang


Visual information can be incorporated into automatic speech recognition (ASR) systems to improve their robustness in adverse acoustic conditions. Conventional audio-visual speech recognition (AVSR) systems require highly specialized audio-visual (AV) data in both system training and evaluation. For many real-world speech recognition applications, only audio information is available. This presents a major challenge to the wider application of AVSR systems. To address this challenge, this paper proposes a semi-supervised visual feature learning approach for developing AVSR systems on a DARPA GALE Mandarin broadcast transcription task. Audio-to-visual feature inversion long short-term memory neural networks (LSTMs) were initially constructed using limited amounts of out-of-domain AV data. The domain mismatch of the acoustic features against the broadcast data was further reduced using multi-level domain adaptive deep networks. Visual features were then automatically generated from the broadcast speech audio and used at both AVSR system training and testing time. Experimental results suggest that a CNN-based AVSR system using the proposed semi-supervised cross-domain audio-to-visual feature generation technique outperformed the baseline audio-only CNN ASR system by an average CER reduction of 6.8% relative. In particular, on the most difficult Phoenix TV subset, a CER reduction of 1.32% absolute (8.34% relative) was obtained.
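The core of the pipeline above is a feature inversion network that regresses visual features frame by frame from acoustic features. The following is a minimal NumPy sketch of that idea, not the authors' implementation: a single LSTM layer followed by a linear output layer mapping each acoustic frame to a visual feature vector. The dimensions (40-dim acoustic input, 32-dim visual output, 64 hidden units) and the random weights are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical dimensions for illustration only (not from the paper).
AUDIO_DIM, HIDDEN_DIM, VISUAL_DIM = 40, 64, 32

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Randomly initialised LSTM parameters; the four gate blocks
# (input, forget, cell candidate, output) are stacked in one matrix.
W = rng.standard_normal((4 * HIDDEN_DIM, AUDIO_DIM + HIDDEN_DIM)) * 0.1
b = np.zeros(4 * HIDDEN_DIM)
# Linear regression layer mapping the LSTM hidden state to visual features.
W_out = rng.standard_normal((VISUAL_DIM, HIDDEN_DIM)) * 0.1

def lstm_step(x, h, c):
    """One LSTM time step over the concatenated [input, previous hidden state]."""
    z = W @ np.concatenate([x, h]) + b
    i, f, g, o = np.split(z, 4)
    c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h_new = sigmoid(o) * np.tanh(c_new)
    return h_new, c_new

def audio_to_visual(audio_frames):
    """Map a (T, AUDIO_DIM) acoustic sequence to a (T, VISUAL_DIM) visual one."""
    h = np.zeros(HIDDEN_DIM)
    c = np.zeros(HIDDEN_DIM)
    visual = []
    for x in audio_frames:
        h, c = lstm_step(x, h, c)
        visual.append(W_out @ h)
    return np.stack(visual)

# Dummy 100-frame acoustic sequence standing in for broadcast audio features.
frames = rng.standard_normal((100, AUDIO_DIM))
visual_feats = audio_to_visual(frames)
print(visual_feats.shape)  # (100, 32)
```

In the paper's setting such a network would be trained on the limited out-of-domain AV data (acoustic frames as input, true visual features as targets) and then applied to audio-only broadcast speech to synthesize the visual stream for AVSR training and decoding.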


DOI: 10.21437/Interspeech.2018-1063

Cite as: Su, R., Liu, X., Wang, L. (2018) Semi-supervised Cross-domain Visual Feature Learning for Audio-Visual Broadcast Speech Transcription. Proc. Interspeech 2018, 3509-3513, DOI: 10.21437/Interspeech.2018-1063.


@inproceedings{Su2018,
  author={Rongfeng Su and Xunying Liu and Lan Wang},
  title={Semi-supervised Cross-domain Visual Feature Learning for Audio-Visual Broadcast Speech Transcription},
  year=2018,
  booktitle={Proc. Interspeech 2018},
  pages={3509--3513},
  doi={10.21437/Interspeech.2018-1063},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1063}
}