ISCA Archive Interspeech 2021

Weakly Supervised Construction of ASR Systems from Massive Video Data

Mengli Cheng, Chengyu Wang, Jun Huang, Xiaobo Wang

Despite the rapid development of deep learning models, building large-scale Automatic Speech Recognition (ASR) systems from scratch for real-world applications remains significantly challenging, mostly due to the time-consuming and financially expensive process of annotating large amounts of audio data with transcripts. Although several self-supervised pre-training models have been proposed to learn speech representations, applying such models directly might be sub-optimal if more labeled training data could be obtained at low cost.

In this paper, we present VideoASR, a weakly supervised framework for constructing ASR systems from massive video data. As user-generated videos often contain human speech audio roughly aligned with subtitles, we consider videos an important knowledge source and propose an effective approach, based on text detection and Optical Character Recognition (OCR), to extract high-quality audio aligned with transcripts from videos. After weakly supervised pre-training on the automatically generated datasets, the underlying ASR models can be fine-tuned on any domain-specific target training dataset. Extensive experiments show that VideoASR easily produces state-of-the-art results on six public datasets for Mandarin speech recognition. In addition, the VideoASR framework has been deployed on the cloud to support various industrial-scale applications.
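The subtitle-to-audio alignment idea sketched above can be illustrated with a minimal example. The sketch below is not the paper's implementation; it assumes the text-detection and OCR stages have already produced per-frame `(timestamp, subtitle_text)` pairs, and merely groups consecutive frames that share the same recognized subtitle into `(start, end, text)` segments, which could then be used to cut the audio track into weakly labeled clips. The function name and the `min_duration` filter are illustrative choices, not from the paper.

```python
def group_subtitle_segments(frames, min_duration=0.3):
    """Merge consecutive frames sharing the same OCR text into segments.

    frames: list of (timestamp_sec, subtitle_text) pairs, in time order;
            an empty string means no subtitle was detected in that frame.
    Returns a list of (start_sec, end_sec, text) segments, dropping
    segments shorter than min_duration (likely OCR flicker).
    """
    segments = []
    cur_text, start, end = None, None, None
    for t, text in frames:
        if text == cur_text:
            end = t  # same subtitle still on screen: extend the segment
            continue
        # subtitle changed: flush the previous segment if it is long enough
        if cur_text and end - start >= min_duration:
            segments.append((start, end, cur_text))
        cur_text, start, end = text, t, t
    if cur_text and end - start >= min_duration:
        segments.append((start, end, cur_text))
    return segments


# Frames sampled at 5 fps; the empty string marks a gap between subtitles.
frames = [
    (0.0, "你好"), (0.2, "你好"), (0.4, "你好"),
    (0.6, ""),
    (0.8, "谢谢"), (1.0, "谢谢"), (1.2, "谢谢"), (1.4, "谢谢"),
]
print(group_subtitle_segments(frames))
# → [(0.0, 0.4, '你好'), (0.8, 1.4, '谢谢')]
```

In a real pipeline the resulting `(start, end, text)` triples would drive audio slicing (e.g. with FFmpeg) to produce the transcript-aligned training clips the framework pre-trains on.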


doi: 10.21437/Interspeech.2021-7

Cite as: Cheng, M., Wang, C., Huang, J., Wang, X. (2021) Weakly Supervised Construction of ASR Systems from Massive Video Data. Proc. Interspeech 2021, 4533-4537, doi: 10.21437/Interspeech.2021-7

@inproceedings{cheng21c_interspeech,
  author={Mengli Cheng and Chengyu Wang and Jun Huang and Xiaobo Wang},
  title={{Weakly Supervised Construction of ASR Systems from Massive Video Data}},
  year=2021,
  booktitle={Proc. Interspeech 2021},
  pages={4533--4537},
  doi={10.21437/Interspeech.2021-7}
}