Interspeech'2005 - Eurospeech

Lisbon, Portugal
September 4-8, 2005

Channel Robust Speaker Verification via Bayesian Blind Stochastic Feature Transformation

Kwok-Kwong Yiu (1), Man-Wai Mak (1), Sun-Yuan Kung (2)

(1) Hong Kong Polytechnic University, China; (2) Princeton University, USA

In telephone-based speaker verification, channel conditions can vary significantly from session to session. It is therefore desirable to estimate the channel conditions online and compensate for the acoustic distortion without prior knowledge of the channel characteristics. Because no a priori knowledge is used, the estimation accuracy depends heavily on the length of the verification utterances. This paper extends our recently proposed Blind Stochastic Feature Transformation (BSFT) algorithm to handle the short-utterance scenario. The idea is to estimate a set of prior transformation parameters from a development set whose utterances cover a wide variety of channel conditions. These prior transformations are then incorporated into the online estimation of the BSFT parameters in a Bayesian (maximum a posteriori) fashion. The resulting transformation parameters therefore depend on both the prior transformations and the verification utterances: for short (long) utterances, the prior transformations play a more (less) important role. We refer to the extended algorithm as Bayesian BSFT (BBSFT) and apply it to the 2001 NIST SRE task. Results show that BBSFT outperforms BSFT for utterances shorter than or equal to 4 seconds.
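The sketch below is a minimal illustration of the MAP-style interpolation idea described in the abstract, not the authors' exact formulation: prior transformation parameters (estimated offline from a development set) are blended with maximum-likelihood estimates from the verification utterance, with the prior weighted more heavily when the utterance is short. The affine transformation form, the relevance factor, and all function names are assumptions made for illustration.

```python
# Hypothetical sketch of Bayesian (MAP) blending of feature-transformation
# parameters; the interpolation rule and relevance factor are assumptions,
# not the formulation used in the paper.
import numpy as np


def map_blend_transform(prior_A, prior_b, ml_A, ml_b, n_frames, relevance=1000.0):
    """Blend prior and utterance-level transformation parameters.

    prior_A, prior_b : transformation matrix/bias estimated from a development set
    ml_A, ml_b       : maximum-likelihood estimates from the verification utterance
    n_frames         : number of feature frames in the utterance
    relevance        : controls how quickly the data overrides the prior
                       (hypothetical parameter, not from the paper)
    """
    # Weight approaches 0 for short utterances (prior dominates)
    # and 1 for long utterances (utterance-level estimate dominates).
    alpha = n_frames / (n_frames + relevance)
    A = alpha * ml_A + (1.0 - alpha) * prior_A
    b = alpha * ml_b + (1.0 - alpha) * prior_b
    return A, b


def apply_transform(features, A, b):
    """Apply an affine feature transformation x' = A x + b frame by frame."""
    return features @ A.T + b


if __name__ == "__main__":
    dim = 12                                      # e.g. 12 cepstral coefficients (illustrative)
    rng = np.random.default_rng(0)
    prior_A, prior_b = np.eye(dim), np.zeros(dim)
    ml_A = np.eye(dim) + 0.05 * rng.standard_normal((dim, dim))
    ml_b = 0.1 * rng.standard_normal(dim)
    short_utt = rng.standard_normal((200, dim))   # ~2 s of frames at 100 frames/s
    A, b = map_blend_transform(prior_A, prior_b, ml_A, ml_b, n_frames=short_utt.shape[0])
    compensated = apply_transform(short_utt, A, b)
    print(compensated.shape)
```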


Bibliographic reference.  Yiu, Kwok-Kwong / Mak, Man-Wai / Kung, Sun-Yuan (2005): "Channel robust speaker verification via Bayesian blind stochastic feature transformation", In INTERSPEECH-2005, 2013-2016.