4th International Conference on Spoken Language Processing, Philadelphia, PA, USA
Neural network classifiers can provide outputs that estimate Bayesian posterior probabilities under the assumptions that an infinite amount of training data is available, the network is sufficiently complex, and training reaches the global minimum. In practice, however, the number of training tokens is limited and may not accurately reflect the prior class probabilities and true likelihood distributions. Additionally, computational constraints limit the complexity of the network. Consequently, practical networks often fall far short of being ideal estimators. We address this problem with a new method of improved probability estimation that combines neural network models with empirical probability estimation. We use a histogram-based estimation method to remap the network outputs to match the data, thereby improving the accuracy of the probability estimates. Our current experiments on the OGI Census Year corpus resulted in a 20.6% reduction in recognition errors at the utterance level.
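The histogram-based remapping described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a single per-class output score in [0, 1] with binary correctness labels, and all function names, the bin count, and the fallback rule for empty bins are illustrative assumptions.

```python
import numpy as np

def fit_histogram_remap(scores, labels, n_bins=10):
    """Build a histogram remapping table from held-out data.

    scores: raw network outputs in [0, 1].
    labels: 1 if the output's class was correct, else 0.
    Returns bin edges and, per bin, the empirical probability
    that the class is correct given a score falling in that bin.
    """
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    # Assign each score to a bin; clip keeps indices in range.
    bin_idx = np.clip(np.digitize(scores, edges[1:-1]), 0, n_bins - 1)
    remap = np.zeros(n_bins)
    for b in range(n_bins):
        mask = bin_idx == b
        if mask.any():
            # Empirical fraction of correct tokens in this bin.
            remap[b] = labels[mask].mean()
        else:
            # No training tokens in this bin: fall back to the bin centre.
            remap[b] = 0.5 * (edges[b] + edges[b + 1])
    return edges, remap

def apply_histogram_remap(scores, edges, remap):
    """Replace each raw network output with its bin's empirical estimate."""
    n_bins = len(remap)
    bin_idx = np.clip(np.digitize(scores, edges[1:-1]), 0, n_bins - 1)
    return remap[bin_idx]
```

At recognition time, each network output would be passed through `apply_histogram_remap` before being used as a posterior estimate, so that systematically over- or under-confident outputs are corrected toward the empirically observed probabilities.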
Bibliographic reference. Wei, Wei / Barnard, Etienne / Fanty, Mark (1996): "Improved probability estimation with neural network models", In ICSLP-1996, 502-505.