## Interspeech'2005 - Eurospeech, Lisbon, Portugal

In this work, fundamental properties of the Bayes decision rule with general loss functions are derived analytically and verified experimentally for automatic speech recognition. It is shown that, for maximum posterior probabilities larger than 1/2, the Bayes decision rule with a metric loss function always decides on the posterior-maximizing class, independently of the specific choice of (metric) loss function. For maximum posterior probabilities less than 1/2, a condition is derived under which the Bayes risk using a general metric loss function is still minimized by the posterior-maximizing class. For a speech recognition task with a low initial word error rate, it is shown that nearly 2/3 of the test utterances fulfil these conditions and therefore need not be considered in Bayes risk minimization with Levenshtein loss, which reduces the computational complexity of Bayes risk minimization. In addition, bounds on the difference between the Bayes risk of the posterior-maximizing class and the minimum Bayes risk are derived, which can serve as cost estimates for Bayes risk minimization approaches.
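The effect described in the abstract can be sketched numerically. The following is an illustrative toy example, not the paper's experimental setup: it restricts both the candidate classes and the posterior support to a small N-best list of word sequences (all strings and probabilities below are invented), computes the Bayes risk under word-level Levenshtein loss, and contrasts a case with maximum posterior above 1/2 (where the posterior-maximizing decision provably also minimizes the Bayes risk) with a case below 1/2 (where the two decisions can differ).

```python
# Toy sketch of Bayes risk minimization with Levenshtein loss.
# All hypotheses and posterior values are invented for illustration.

def levenshtein(a, b):
    """Edit distance between two word sequences (insert/delete/substitute cost 1)."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution/match
    return d[m][n]

def bayes_risk(c, posterior):
    """Expected Levenshtein loss of deciding on class c under the posterior."""
    return sum(p * levenshtein(c.split(), h.split()) for h, p in posterior.items())

def map_class(posterior):
    """Posterior-maximizing (MAP) decision."""
    return max(posterior, key=posterior.get)

def min_risk_class(posterior):
    """Bayes-risk-minimizing decision, restricted to the listed hypotheses."""
    return min(posterior, key=lambda c: bayes_risk(c, posterior))

# Case 1: maximum posterior 0.6 > 1/2 -> MAP decision also minimizes the Bayes risk.
high = {"the cat sat": 0.6, "the cat sad": 0.3, "a cat sat": 0.1}
print(map_class(high), "|", min_risk_class(high))  # identical: "the cat sat"

# Case 2: maximum posterior 0.4 < 1/2 -> the two decisions can differ:
# "a b c" is close to "a b d", so its expected loss undercuts the MAP class.
low = {"x y z": 0.4, "a b c": 0.35, "a b d": 0.25}
print(map_class(low), "|", min_risk_class(low))  # "x y z" | "a b c"
```

In Case 2 the risks are 1.8 for "x y z" but 1.45 for "a b c", so the minimum-risk decision departs from the MAP class; this is exactly the regime (maximum posterior below 1/2) for which the paper derives its additional condition.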

__Bibliographic reference.__
Schlüter, Ralf / Scharrenbach, T. / Steinbiss, Volker / Ney, Hermann (2005):
"Bayes risk minimization using metric loss functions",
In *INTERSPEECH-2005*, 1449-1452.