8th International Conference on Spoken Language Processing

Jeju Island, Korea
October 4-8, 2004

Speaker Normalization through Constrained MLLR Based Transforms

Diego Giuliani (1), Matteo Gerosa (2), Fabio Brugnara (1)

(1) ITC-irst Centro per la Ricerca Scientifica e Tecnologica, Italy
(2) University of Trento, Italy

In this paper, a novel speaker normalization method is presented and compared to a well-known vocal tract length normalization method. With this method, acoustic observations of training and testing speakers are mapped into a normalized acoustic space through speaker-specific transformations, with the aim of reducing inter-speaker acoustic variability. For each speaker, an affine transformation is estimated with the goal of reducing the mismatch between the acoustic data of the speaker and a set of target hidden Markov models. This transformation is estimated through constrained maximum likelihood linear regression and then applied to map the acoustic observations of the speaker into the normalized acoustic space. Recognition experiments made use of two corpora, the first consisting of adults' speech and the second of children's speech. Performing training and recognition with normalized data resulted in a consistent reduction of the word error rate with respect to the baseline systems trained on unnormalized data. In addition, the novel method always performed better than the reference vocal tract length normalization method.
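The normalization step described in the abstract amounts to applying a speaker-specific affine map x' = Ax + b to every acoustic feature vector of that speaker. The sketch below shows only this application step; the estimation of A and b via constrained MLLR (which maximizes the likelihood of the speaker's data under the target HMMs) is omitted, and all names are illustrative rather than taken from the paper.

```python
def apply_affine(A, b, frames):
    """Map each acoustic feature vector x to A @ x + b.

    A: square matrix (list of rows), b: bias vector, frames: list of
    feature vectors. In CMLLR-based normalization, A and b would be
    estimated per speaker; here they are assumed given (illustrative).
    """
    return [
        [sum(A[i][j] * x[j] for j in range(len(x))) + b[i]
         for i in range(len(b))]
        for x in frames
    ]

# Toy example: a 2-dimensional "feature" scaled by 2 and shifted by 1.
A = [[2.0, 0.0], [0.0, 2.0]]
b = [1.0, 1.0]
normalized = apply_affine(A, b, [[1.0, 2.0]])  # -> [[3.0, 5.0]]
```

In a real system the same transform would be applied to both training and test data of each speaker, so that the recognizer is trained and evaluated entirely in the normalized space.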


Bibliographic reference. Giuliani, Diego / Gerosa, Matteo / Brugnara, Fabio (2004): "Speaker normalization through constrained MLLR based transforms", in INTERSPEECH-2004, 2893-2896.