INTERSPEECH 2011
12th Annual Conference of the International Speech Communication Association

Florence, Italy
August 27-31, 2011

Robust Bimodal Person Identification Using Face and Speech with Limited Training Data and Corruption of Both Modalities

Niall McLaughlin, Ji Ming, Danny Crookes

Queen's University Belfast, UK

This paper presents a novel method of audio-visual fusion for person identification where both the speech and facial modalities may be corrupted, and there is a lack of prior knowledge about the corruption. Furthermore, we assume there is a limited amount of training data for each modality (e.g., a short training speech segment and a single training facial image for each person). A new representation and a modified cosine similarity are introduced for combining and comparing bimodal features with limited training data as well as vastly differing data rates and feature sizes. Optimal feature selection and multicondition training are used to reduce the mismatch between training and testing, thereby making the system robust to unknown bimodal corruption. Experiments have been carried out on a bimodal data set created from the SPIDRE and AR databases with variable noise corruption of speech and occlusion in the face images. The new method has demonstrated improved recognition accuracy.
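As background for the similarity measure named in the abstract, the following is a minimal sketch of standard (unmodified) cosine similarity applied to person identification over concatenated audio-visual features. It is not the paper's modified similarity; the feature vectors, their dimensions, and the enrolled identities are all hypothetical, and it assumes speech and face features have each already been reduced to fixed-length vectors.

```python
import numpy as np

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine of the angle between vectors u and v (1.0 = identical direction)."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical fused test vector: speech features concatenated with face features
test = np.concatenate([np.array([0.2, 0.7, 0.1]),    # speech features (assumed)
                       np.array([0.5, 0.4])])        # face features (assumed)

# Hypothetical enrolled templates for two people
person_a = np.concatenate([np.array([0.25, 0.65, 0.1]), np.array([0.5, 0.45])])
person_b = np.concatenate([np.array([0.9, 0.1, 0.3]),   np.array([0.1, 0.8])])

# Identify the test sample as the enrolled person with the highest similarity
scores = {"A": cosine_similarity(test, person_a),
          "B": cosine_similarity(test, person_b)}
best = max(scores, key=scores.get)  # → "A" for these example vectors
```

In practice the paper's contribution lies in modifying this comparison to cope with the vastly differing data rates and feature sizes of the two modalities, which the plain formula above does not address.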

Full Paper

Bibliographic reference.  McLaughlin, Niall / Ming, Ji / Crookes, Danny (2011): "Robust bimodal person identification using face and speech with limited training data and corruption of both modalities", In INTERSPEECH-2011, 585-588.