Audiovisual Speech Activity Detection with Advanced Long Short-Term Memory

Fei Tao, Carlos Busso


Speech activity detection (SAD) is a key pre-processing step for speech-based systems. The performance of conventional audio-only SAD (A-SAD) systems is impaired by acoustic noise in practical applications. An alternative approach to address this problem is to include visual information, creating audiovisual speech activity detection (AV-SAD) solutions. In our previous work, we proposed building an AV-SAD system with a bimodal recurrent neural network (BRNN). This framework was able to capture the task-related characteristics in the audio and visual inputs and to model the temporal information within and across modalities. The approach relied on long short-term memory (LSTM) networks. Although LSTM cells can model longer temporal dependencies, the effective memory of the units is limited to a few frames, since the recurrent connection considers only the previous frame. For SAD systems, it is important to model longer temporal dependencies to capture the semi-periodic nature of speech conveyed in acoustic and orofacial features. This study proposes to implement a BRNN-based AV-SAD system with advanced LSTMs (A-LSTMs), which overcome this limitation by including multiple connections to frames in the past. The results show that the proposed framework significantly outperforms the BRNN system trained with the original LSTM layers.
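The core idea in the abstract is that an A-LSTM lets the recurrent connection draw on several past frames rather than only the previous one. The NumPy sketch below illustrates one plausible realization: the recurrent input at each step is a softmax-weighted sum of the last K hidden states, fed through otherwise standard LSTM gating. The weight shapes, initialization, and mixing scheme are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class ALSTMCell:
    """Sketch of an 'advanced' LSTM step: the recurrent input is a learned
    weighted sum of the hidden states from the last K frames, not just the
    previous one. All design details here are illustrative assumptions."""

    def __init__(self, input_dim, hidden_dim, k_past, seed=0):
        rng = np.random.default_rng(seed)
        self.hidden_dim = hidden_dim
        self.k_past = k_past
        # One weight matrix per gate, acting on [input; recurrent summary].
        shape = (hidden_dim, input_dim + hidden_dim)
        self.Wf, self.Wi, self.Wo, self.Wc = (
            rng.normal(0.0, 0.1, shape) for _ in range(4))
        self.bf = np.ones(hidden_dim)   # common forget-gate bias init
        self.bi = np.zeros(hidden_dim)
        self.bo = np.zeros(hidden_dim)
        self.bc = np.zeros(hidden_dim)
        # Trainable mixing logits over the K past hidden states.
        self.alpha = np.zeros(k_past)

    def step(self, x, past_h, c):
        # Softmax-weighted summary of the K most recent hidden states.
        w = np.exp(self.alpha) / np.exp(self.alpha).sum()
        h_summary = sum(wk * hk for wk, hk in zip(w, past_h))
        z = np.concatenate([x, h_summary])
        f = sigmoid(self.Wf @ z + self.bf)          # forget gate
        i = sigmoid(self.Wi @ z + self.bi)          # input gate
        o = sigmoid(self.Wo @ z + self.bo)          # output gate
        c_new = f * c + i * np.tanh(self.Wc @ z + self.bc)
        h_new = o * np.tanh(c_new)
        return h_new, c_new

def run_sequence(cell, xs):
    """Run the cell over a sequence, keeping a buffer of the last K states."""
    h_hist = [np.zeros(cell.hidden_dim) for _ in range(cell.k_past)]
    c = np.zeros(cell.hidden_dim)
    outputs = []
    for x in xs:
        h, c = cell.step(x, h_hist, c)
        h_hist = [h] + h_hist[:-1]  # most recent hidden state first
        outputs.append(h)
    return np.stack(outputs)
```

Setting `k_past=1` recovers an ordinary LSTM recurrence, which makes the contrast with the single-frame memory described in the abstract explicit.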


DOI: 10.21437/Interspeech.2018-2490

Cite as: Tao, F., Busso, C. (2018) Audiovisual Speech Activity Detection with Advanced Long Short-Term Memory. Proc. Interspeech 2018, 1244-1248, DOI: 10.21437/Interspeech.2018-2490.


@inproceedings{Tao2018,
  author={Fei Tao and Carlos Busso},
  title={Audiovisual Speech Activity Detection with Advanced Long Short-Term Memory},
  year={2018},
  booktitle={Proc. Interspeech 2018},
  pages={1244--1248},
  doi={10.21437/Interspeech.2018-2490},
  url={http://dx.doi.org/10.21437/Interspeech.2018-2490}
}