Improving Boundary Estimation in Audiovisual Speech Activity Detection Using Bayesian Information Criterion

Fei Tao, John H.L. Hansen, Carlos Busso


A key preprocessing step in multimodal interfaces is to detect when a user is speaking to the system. While push-to-talk approaches are effective, their use limits the flexibility of the system. Solutions based on speech activity detection (SAD) offer more intuitive and user-friendly alternatives. A limitation of current SAD solutions is the drop in performance observed in noisy environments or when the speech mode differs from neutral speech (e.g., whisper speech). Emerging audiovisual solutions provide a principled framework to improve the detection of speech boundaries by incorporating lip activity detection. In our previous work, we proposed an unsupervised visual speech activity detection (V-SAD) system that combines temporal and dynamic facial features. The key limitation of that system was the precise detection of boundaries between speech and non-speech regions, due to anticipatory facial movements and the low temporal resolution of the video (29.97 fps). This study builds upon that system by (a) combining speech and facial features to create an unsupervised audiovisual speech activity detection (AV-SAD) system, and (b) refining the decision boundary with the Bayesian information criterion (BIC) algorithm, resulting in improved speech boundary detection. The evaluation considers the challenging case of whisper speech, where the proposed AV-SAD system achieves a 10% absolute improvement over a state-of-the-art audio SAD.
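To make the boundary-refinement idea concrete, the sketch below shows the standard ΔBIC change-point criterion often used for segmentation: a window of frame-level features around a coarse boundary is split at each candidate frame, and the frame maximizing ΔBIC (single Gaussian on the whole window vs. one Gaussian per side) is taken as the refined boundary. The feature set, window size, and penalty weight `lam` are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def delta_bic(x, t, lam=1.0):
    """ΔBIC for splitting the feature window x (N x d) at frame t.
    A positive score favors placing a speech/non-speech boundary at t.
    This is the generic BIC segmentation criterion; the paper's exact
    features and penalty weight are assumptions here."""
    n, d = x.shape
    # log-determinants of the full-window and sub-window covariances
    ld = np.linalg.slogdet(np.cov(x, rowvar=False))[1]
    ld1 = np.linalg.slogdet(np.cov(x[:t], rowvar=False))[1]
    ld2 = np.linalg.slogdet(np.cov(x[t:], rowvar=False))[1]
    # model-complexity penalty: extra mean (d) and covariance (d(d+1)/2) terms
    penalty = 0.5 * (d + 0.5 * d * (d + 1)) * np.log(n)
    return 0.5 * (n * ld - t * ld1 - (n - t) * ld2) - lam * penalty

def refine_boundary(x, lam=1.0, margin=8):
    """Return the frame inside the window that maximizes ΔBIC,
    keeping a margin so each side has enough frames for a covariance."""
    scores = [delta_bic(x, t, lam) for t in range(margin, len(x) - margin)]
    return margin + int(np.argmax(scores))
```

In practice the coarse boundary from the unsupervised AV-SAD decision defines the search window, and `refine_boundary` picks the frame where the audiovisual statistics change most sharply.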


DOI: 10.21437/Interspeech.2016-406

Cite as

Tao, F., Hansen, J.H., Busso, C. (2016) Improving Boundary Estimation in Audiovisual Speech Activity Detection Using Bayesian Information Criterion. Proc. Interspeech 2016, 2130-2134.

Bibtex
@inproceedings{Tao+2016,
author={Fei Tao and John H.L. Hansen and Carlos Busso},
title={Improving Boundary Estimation in Audiovisual Speech Activity Detection Using Bayesian Information Criterion},
year=2016,
booktitle={Interspeech 2016},
doi={10.21437/Interspeech.2016-406},
url={http://dx.doi.org/10.21437/Interspeech.2016-406},
pages={2130--2134}
}