Odyssey 2012 - The Speaker and Language Recognition Workshop

Singapore
June 25-28, 2012

Bottleneck Features for Speaker Recognition

Sibel Yaman (1), Jason Pelecanos (1), Ruhi Sarikaya (2)

(1) IBM T. J. Watson Research Labs, Yorktown Heights, NY, USA
(2) Microsoft Corporation, Redmond, WA, USA

Bottleneck neural networks have recently found success in a variety of speech recognition tasks. This paper presents an approach in which they are utilized in the front-end of a speaker recognition system. The network inputs are mel-frequency cepstral coefficients (MFCCs) from multiple consecutive frames and the outputs are speaker labels. We propose using a recording-level criterion that is optimized via an online learning algorithm. We furthermore propose retraining a network to focus on its errors when leveraging scores from an independently trained system. We ran experiments on the same- and different-microphone tasks of the 2010 NIST Speaker Recognition Evaluation. We found that the proposed bottleneck feature extraction paradigm performs slightly worse than MFCCs but provides complementary information in combination. We also found that the proposed combination strategy with re-training improved the EER by 14% and 18% relative over the baseline MFCC system in the same- and different-microphone tasks respectively.
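The extraction scheme described above (stacked-context MFCC inputs, a narrow bottleneck hidden layer, and a speaker-label output layer whose softmax is discarded at test time) can be sketched as a plain feed-forward pass. This is a minimal illustration only; the dimensions (13 MFCCs, a 9-frame context window, a 40-unit bottleneck, 100 training speakers) and the sigmoid activations are assumptions for the sketch, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensions for illustration (not from the paper):
N_MFCC, CONTEXT, BOTTLENECK, N_SPEAKERS = 13, 9, 40, 100
D_IN = N_MFCC * CONTEXT  # stacked-context input size

def layer(d_in, d_out):
    """Random initial weights and zero biases for one layer."""
    return rng.standard_normal((d_in, d_out)) * 0.01, np.zeros(d_out)

W1, b1 = layer(D_IN, 512)
W2, b2 = layer(512, BOTTLENECK)   # narrow bottleneck layer
W3, b3 = layer(BOTTLENECK, 512)
W4, b4 = layer(512, N_SPEAKERS)   # speaker-label output layer

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(frames):
    """frames: (T, D_IN) stacked-context MFCC vectors.

    Returns the bottleneck activations (the features used for
    speaker recognition) and the speaker-classification logits
    (used only during training).
    """
    h1 = sigmoid(frames @ W1 + b1)
    bn = sigmoid(h1 @ W2 + b2)     # bottleneck activations
    h3 = sigmoid(bn @ W3 + b3)
    logits = h3 @ W4 + b4
    return bn, logits

# Build a toy stacked-context input: 50 frames of MFCCs, each frame
# concatenated with its 4 left and 4 right neighbours (circular shift
# used here purely to keep the toy example simple at the edges).
mfcc = rng.standard_normal((50, N_MFCC))
stacked = np.concatenate(
    [np.roll(mfcc, s, axis=0) for s in range(-4, 5)], axis=1
)
features, logits = forward(stacked)
print(features.shape)  # (50, 40): one bottleneck feature vector per frame
```

At training time the logits feed a softmax over speaker labels; at extraction time the output layer is dropped and the per-frame bottleneck activations serve as the front-end features.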


Bibliographic reference: Yaman, Sibel / Pelecanos, Jason / Sarikaya, Ruhi (2012): "Bottleneck features for speaker recognition", in Odyssey 2012, 105-108.