A technique for rapid speaker adaptation, called eigenvoices, was introduced recently. The key idea is to confine speaker models to a very low-dimensional linear vector space. This space summarizes the a priori knowledge that we have about speaker models. In many practical systems, however, there is a mismatch between the conditions under which the training data were collected and the test conditions: the prior knowledge becomes inappropriate. Furthermore, prior statistics or models of this mismatch may not be available. We present two key results: first, a maximum-likelihood estimator of the prior information in matched conditions, called MLES, which improves adaptation by a relative 14%; and second, a blind scheme for learning the noise, MLLR, which achieves an additional 7.7% relative improvement in noisy conditions.
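The core eigenvoice constraint — an adapted speaker model restricted to a low-dimensional linear span of training-speaker models — can be illustrated with a toy sketch. This is an assumption-laden simplification: the paper estimates eigenspace coordinates by maximum likelihood over HMM statistics (MLED/MLES), whereas here we use plain PCA on synthetic "supervectors" and a least-squares projection; all dimensions and data below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: 20 training-speaker "supervectors"
# (stacked Gaussian mean vectors), each of dimension 50.
train = rng.normal(size=(20, 50))

# Build the eigenspace: mean supervector plus the top-K principal
# directions ("eigenvoices") of the training-speaker population.
mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
K = 3
eigenvoices = vt[:K]  # (K, 50) orthonormal basis rows

# Rapid adaptation: the new speaker's model is constrained to
# mean + eigenvoices^T w. With an orthonormal basis, the
# least-squares coordinates w are a simple projection.
new_speaker = rng.normal(size=50)
w = eigenvoices @ (new_speaker - mean)  # (K,) coordinates
adapted = mean + eigenvoices.T @ w      # model confined to the eigenspace

# The adapted model is the closest point in the eigenspace:
# the residual is orthogonal to every eigenvoice.
residual = new_speaker - adapted
assert np.allclose(eigenvoices @ residual, 0.0, atol=1e-10)
```

Because only the K coordinates `w` are estimated, very little adaptation data suffices — which is what makes the adaptation "rapid", and why the quality of the prior eigenspace (the subject of MLES) matters so much under mismatched conditions.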
Cite as: Nguyen, P., Wellekens, C., Junqua, J.-C. (1999) Maximum likelihood eigenspace and MLLR for speech recognition in noisy environments. Proc. 6th European Conference on Speech Communication and Technology (Eurospeech 1999), 2519-2522, doi: 10.21437/Eurospeech.1999-551
@inproceedings{nguyen99_eurospeech,
  author    = {Patrick Nguyen and Christian Wellekens and Jean-Claude Junqua},
  title     = {{Maximum likelihood eigenspace and MLLR for speech recognition in noisy environments}},
  year      = {1999},
  booktitle = {Proc. 6th European Conference on Speech Communication and Technology (Eurospeech 1999)},
  pages     = {2519--2522},
  doi       = {10.21437/Eurospeech.1999-551}
}