Under poor room acoustic conditions, speech signals received by a microphone can become corrupted by delayed versions of the signal reflected from room surfaces (e.g., walls, floor). This phenomenon, reverberation, degrades the accuracy of automatic speaker verification systems by causing a mismatch between training and testing conditions. Since reverberation causes temporal smearing of the signal, one way to tackle its effects is robust feature extraction, in particular methods based on long-term temporal processing. This approach has previously been adopted in the form of the two-dimensional autoregressive (2DAR) feature extraction scheme, which uses frequency-domain linear prediction (FDLP). In 2DAR, FDLP processing is followed by time-domain linear prediction (TDLP). In the current study, we propose modifying the latter part of the 2DAR feature extraction scheme by replacing TDLP with time-varying linear prediction (TVLP) to add an extra layer of temporal processing. Our speaker verification experiments using the proposed features with the text-dependent RedDots corpus show small but consistent improvements over the 2DAR features in clean and reverberant conditions (up to 6.5%) and large improvements over MFCC features in reverberant conditions (up to 46.5%).
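The abstract refers to time-varying linear prediction (TVLP), in which the linear-prediction coefficients are allowed to evolve over time by expanding them on a set of basis functions, so that the model can be fit with ordinary least squares. The Python sketch below (using NumPy) illustrates this general idea on a raw signal; it is not the authors' 2DAR pipeline, and the predictor order, polynomial basis, and the tvlp helper are illustrative assumptions only.

    import numpy as np

    def tvlp(x, p=2, n_basis=3):
        """Least-squares time-varying linear prediction (TVLP) sketch.

        Predictor coefficients change over time as a linear combination of
        basis functions: a_k(n) = sum_i b[k, i] * u_i(n). Because the model
        is linear in the weights b[k, i], ordinary least squares applies.
        """
        x = np.asarray(x, dtype=float)
        N = len(x)
        n = np.arange(N)
        # Simple polynomial basis u_i(n) = (n / N)**i; an illustrative choice
        # (trigonometric or Legendre bases are also commonly used).
        U = np.vstack([(n / N) ** i for i in range(n_basis)])   # (n_basis, N)

        # Regression matrix: each column holds x[n - k] * u_i(n).
        cols = []
        for k in range(1, p + 1):
            for i in range(n_basis):
                col = np.zeros(N)
                col[k:] = x[:-k] * U[i, k:]
                cols.append(col)
        A = np.vstack(cols).T                                   # (N, p * n_basis)

        # Solve for the basis weights, skipping the first p samples.
        b, *_ = np.linalg.lstsq(A[p:], x[p:], rcond=None)
        weights = b.reshape(p, n_basis)

        # Recover the time-varying coefficients and the prediction residual.
        a = weights @ U                                         # (p, N)
        residual = x.copy()
        for k in range(1, p + 1):
            residual[k:] -= a[k - 1, k:] * x[:-k]
        return a, residual

    # Usage on a synthetic non-stationary signal (a linear chirp):
    fs = 8000
    t = np.arange(fs) / fs
    chirp = np.sin(2 * np.pi * (200.0 + 300.0 * t) * t)
    a, res = tvlp(chirp, p=2, n_basis=3)
    print(a.shape, float(np.mean(res ** 2)))

With n_basis=1 the same code reduces to conventional (time-invariant) least-squares linear prediction, which is one way to see TVLP as adding an extra layer of temporal modelling on top of TDLP.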
Cite as: Vestman, V., Gowda, D., Sahidullah, M., Alku, P., Kinnunen, T. (2017) Time-Varying Autoregressions for Speaker Verification in Reverberant Conditions. Proc. Interspeech 2017, 1512-1516, doi: 10.21437/Interspeech.2017-734
@inproceedings{vestman17_interspeech,
  author={Ville Vestman and Dhananjaya Gowda and Md. Sahidullah and Paavo Alku and Tomi Kinnunen},
  title={{Time-Varying Autoregressions for Speaker Verification in Reverberant Conditions}},
  year={2017},
  booktitle={Proc. Interspeech 2017},
  pages={1512--1516},
  doi={10.21437/Interspeech.2017-734}
}