INTERSPEECH 2004 - ICSLP
8th International Conference on Spoken Language Processing

Jeju Island, Korea
October 4-8, 2004

Improving Eigenspace-based MLLR Adaptation by Kernel PCA

Brian Mak, Roger Hsiao

Department of Computer Science, Hong Kong University of Science and Technology, Hong Kong

Eigenspace-based MLLR (EMLLR) adaptation has been shown to be effective for fast speaker adaptation. It applies the basic idea of eigenvoice adaptation and derives a small set of eigenmatrices using principal component analysis (PCA). The MLLR adaptation transformation of a new speaker is then a linear combination of the eigenmatrices. In this paper, we investigate the use of kernel PCA to find the eigenmatrices in the kernel-induced high-dimensional feature space so as to exploit possible nonlinearity in the transformation supervector space. In addition, a composite kernel is used to preserve the row information in the transformation supervector, which would otherwise be lost during the mapping to the kernel-induced feature space. We call our new method kernel eigenspace-based MLLR (KEMLLR) adaptation. On an RM adaptation task, we find that KEMLLR adaptation may reduce the word error rate of a speaker-independent model by 11%, and that it outperforms both MLLR and EMLLR adaptation.
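To make the abstract's description concrete, the following is a minimal sketch of the KEMLLR idea in Python: each training speaker's MLLR transform is vectorized into a supervector, a composite kernel (one RBF kernel per matrix row, summed) is applied so that row information is preserved, and kernel PCA yields eigenvector weights in the kernel-induced feature space. All function names, dimensions, kernel choices, and the toy data are assumptions made for illustration; they are not the authors' exact formulation, and the final step of estimating a new speaker's combination weights by maximum likelihood is only indicated in a comment.

```python
import numpy as np

def composite_rbf_kernel(a_vec, b_vec, d, gamma=1e-2):
    """Sum of per-row RBF kernels between two flattened d x (d+1) MLLR transforms.

    Splitting the supervector by row before applying the kernel keeps the
    row structure that a single kernel on the whole supervector would mix.
    (Assumed kernel choice; the paper's composite kernel may differ.)
    """
    k = 0.0
    for r in range(d):
        a = a_vec[r * (d + 1):(r + 1) * (d + 1)]
        b = b_vec[r * (d + 1):(r + 1) * (d + 1)]
        k += np.exp(-gamma * np.sum((a - b) ** 2))
    return k

def kernel_pca(supervectors, d, n_components=5):
    """Kernel PCA on the training speakers' MLLR supervectors."""
    n = len(supervectors)
    K = np.array([[composite_rbf_kernel(x, y, d) for y in supervectors]
                  for x in supervectors])
    # Centre the kernel matrix in the feature space.
    one_n = np.full((n, n), 1.0 / n)
    Kc = K - one_n @ K - K @ one_n + one_n @ K @ one_n
    eigvals, eigvecs = np.linalg.eigh(Kc)
    order = np.argsort(eigvals)[::-1][:n_components]
    # Normalise so each kernel eigenvector has unit length in feature space.
    alphas = eigvecs[:, order] / np.sqrt(np.maximum(eigvals[order], 1e-12))
    return Kc, alphas

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d = 3                      # toy acoustic feature dimension
    n_speakers = 20
    # Toy "MLLR transforms": identity-plus-bias with speaker-specific noise.
    transforms = [np.hstack([np.eye(d), np.zeros((d, 1))]) +
                  0.1 * rng.standard_normal((d, d + 1))
                  for _ in range(n_speakers)]
    supervectors = [W.ravel() for W in transforms]
    Kc, alphas = kernel_pca(supervectors, d, n_components=5)
    # Projections of the training speakers onto the kernel eigenmatrices;
    # a new speaker's transform would likewise be a combination of these
    # components, with weights estimated from adaptation data by maximum
    # likelihood rather than computed directly as done here.
    projections = Kc @ alphas
    print(projections.shape)   # (20, 5)
```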


Bibliographic reference.  Mak, Brian / Hsiao, Roger (2004): "Improving eigenspace-based MLLR adaptation by kernel PCA", In INTERSPEECH-2004, 13-16.