This paper presents a new Bayes classification rule that minimizes the predictive Bayes risk for robust speech recognition. Conventionally, plug-in maximum a posteriori (MAP) classification is constructed by adopting a nonparametric loss function and deterministic model parameters. Recognition performance is therefore limited by environmental mismatch and the ill-posed model. To address these issues, we develop predictive minimum Bayes risk (PMBR) classification, in which the predictive distributions are inherent in the Bayes risk. More specifically, we exploit a Bayes loss function and the predictive word posterior probability for Bayes classification. Model mismatch and parameter randomness are compensated to improve generalization capability in speech recognition. In experiments on in-car speech recognition, we estimate the prior densities of hidden Markov model parameters from adaptation data. With this prior knowledge of the new environment and of model uncertainty, PMBR classification is realized and shown to outperform MAP, MBR and Bayesian predictive classification.
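The abstract does not spell out the decision rule, but the general minimum Bayes risk (MBR) principle it builds on picks the hypothesis minimizing the expected loss under the posterior over competing hypotheses. A minimal sketch of generic N-best MBR decoding, assuming a word-level edit-distance loss and precomputed hypothesis posteriors (both assumptions for illustration, not the paper's specific PMBR formulation), could look like:

```python
def edit_distance(a, b):
    # Standard Levenshtein distance between two word sequences,
    # used here as the word-level loss function.
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))  # substitution
    return d[m][n]

def mbr_decode(hypotheses):
    """hypotheses: list of (word_sequence, posterior_probability) pairs.

    Returns the hypothesis whose expected word-level loss, averaged
    over the posterior of all competing hypotheses, is smallest.
    """
    best, best_risk = None, float("inf")
    for w, _ in hypotheses:
        risk = sum(p * edit_distance(w, v) for v, p in hypotheses)
        if risk < best_risk:
            best, best_risk = w, risk
    return best
```

For example, given an N-best list `[(("a", "b"), 0.4), (("a", "c"), 0.35), (("a", "b", "c"), 0.25)]`, `mbr_decode` returns `("a", "b")`, the hypothesis with the lowest expected edit distance. The paper's PMBR rule replaces the plug-in posteriors used here with predictive distributions that integrate over model-parameter uncertainty.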
Bibliographic reference: Chien, Jen-Tzung / Shinoda, Koichi / Furui, Sadaoki (2007): "Predictive minimum Bayes risk classification for robust speech recognition", in INTERSPEECH-2007, pp. 1062-1065.