Speaker-Targeted Audio-Visual Models for Speech Recognition in Cocktail-Party Environments

Guan-Lin Chao, William Chan, Ian Lane

Speech recognition in cocktail-party environments remains a significant challenge for state-of-the-art speech recognition systems, as it is extremely difficult to extract the acoustic signal of an individual speaker from a background of overlapping speech with similar frequency and temporal characteristics. We propose the use of speaker-targeted acoustic and audio-visual models for this task. We complement the acoustic features in a hybrid DNN-HMM model with information about the target speaker’s identity as well as visual features from the mouth region of the target speaker. Experiments were performed using simulated cocktail-party data generated from the GRID audio-visual corpus by overlapping two speakers’ speech on a single acoustic channel. Our audio-only baseline achieved a WER of 26.3%. The audio-visual model improved the WER to 4.4%. Introducing speaker identity information had an even more pronounced effect, improving the WER to 3.6%. Combining both approaches, however, did not significantly improve performance further. Our work demonstrates that speaker-targeted models can significantly improve speech recognition in cocktail-party environments.
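The core idea of the speaker-targeted model is to augment the per-frame acoustic input to the DNN with two extra cues: visual features from the target speaker's mouth region and an encoding of the target speaker's identity. A minimal sketch of that input construction is shown below; the feature dimensions, the one-hot identity encoding, and the helper name `build_dnn_input` are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

# Hypothetical dimensions -- chosen for illustration, not taken from the paper.
NUM_FRAMES = 100      # frames in one utterance
ACOUSTIC_DIM = 40     # acoustic features per frame (e.g. filterbanks)
VISUAL_DIM = 32       # visual features from the target speaker's mouth region
NUM_SPEAKERS = 34     # the GRID corpus contains 34 speakers

def build_dnn_input(acoustic, visual, speaker_id, num_speakers=NUM_SPEAKERS):
    """Concatenate per-frame acoustic and visual features with a one-hot
    encoding of the target speaker's identity (an assumed encoding)."""
    one_hot = np.zeros(num_speakers)
    one_hot[speaker_id] = 1.0
    # Tile the identity vector so every frame carries the speaker cue.
    speaker_frames = np.tile(one_hot, (acoustic.shape[0], 1))
    return np.concatenate([acoustic, visual, speaker_frames], axis=1)

acoustic = np.random.randn(NUM_FRAMES, ACOUSTIC_DIM)
visual = np.random.randn(NUM_FRAMES, VISUAL_DIM)
x = build_dnn_input(acoustic, visual, speaker_id=3)
print(x.shape)  # (100, 106): 40 acoustic + 32 visual + 34 identity dims
```

Dropping the `visual` block from the concatenation yields the speaker-targeted audio-only variant, and dropping the identity block yields the plain audio-visual model, mirroring the ablations reported in the abstract.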

DOI: 10.21437/Interspeech.2016-599

Cite as

Chao, G., Chan, W., Lane, I. (2016) Speaker-Targeted Audio-Visual Models for Speech Recognition in Cocktail-Party Environments. Proc. Interspeech 2016, 2120-2124.

@inproceedings{chao16_interspeech,
  author={Guan-Lin Chao and William Chan and Ian Lane},
  title={Speaker-Targeted Audio-Visual Models for Speech Recognition in Cocktail-Party Environments},
  booktitle={Interspeech 2016},
  year={2016},
  pages={2120--2124},
  doi={10.21437/Interspeech.2016-599}
}