Ninth International Conference on Spoken Language Processing

Pittsburgh, PA, USA
September 17-21, 2006

Improved Topic Classification Over Maximum Entropy Model Using K-Norm Based New Objectives

Xiang Li, Ea-Ee Jan, Cheng Wu, David Lubensky

IBM T.J. Watson Research Center, USA

The Maximum Entropy (MaxEnt) model has proven to be a very effective approach to the topic classification task, in which each sentence is assigned a specific topic from a pre-defined topic set. Although it was originally developed with the motivation of maximizing the entropy of the conditional probability distribution under certain constraints, the MaxEnt model is in fact an exponential distribution model that maximizes the log-likelihood of the training data. This log-likelihood criterion bears similarity to the classification accuracy criterion, which is the ultimate performance measure of a topic classifier. However, the two criteria still differ, and this discrepancy reduces the benefit of optimization in improving classification accuracy. In this paper we propose alternative objective functions, which are closer to the classification accuracy criterion, to replace the log-likelihood objective used in MaxEnt model estimation. Specifically, we propose a Summation-Log K-norm objective and a Summation K-norm objective. Experiments conducted on two large-volume topic classification datasets demonstrate the effectiveness of the new objectives in improving topic classification performance over the state-of-the-art MaxEnt model.
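The idea of swapping the training objective while keeping the same log-linear model can be illustrated with a minimal sketch. The code below is not the authors' implementation: it trains a softmax (MaxEnt-style) topic classifier with the standard log-likelihood gradient, and then with a hypothetical "summation k-norm" objective (sum over sentences of the correct-topic posterior raised to the k-th power), a stand-in since the exact K-norm forms are defined only in the full paper. All function names and the toy data are illustrative.

```python
import numpy as np

def softmax(Z):
    Z = Z - Z.max(axis=1, keepdims=True)
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

def grad_log_likelihood(P, Y):
    # Gradient in logit space of sum_i log p_i[y_i]: the standard MaxEnt objective.
    return Y - P

def grad_knorm(P, Y, k=4):
    # Gradient in logit space of sum_i p_i[y_i]**k (illustrative k-norm stand-in):
    # k * p_correct**k * (Y - P) per sentence. Larger k emphasizes sentences whose
    # correct topic is already probable, tracking accuracy more closely than
    # log-likelihood does.
    p_correct = (P * Y).sum(axis=1, keepdims=True)
    return k * (p_correct ** k) * (Y - P)

def train(X, y, n_topics, grad_fn, lr=0.5, steps=300):
    W = np.zeros((X.shape[1], n_topics))
    Y = np.eye(n_topics)[y]          # one-hot topic labels
    for _ in range(steps):
        P = softmax(X @ W)
        W += lr * X.T @ grad_fn(P, Y) / len(y)  # gradient ascent on the objective
    return W

# Toy data: two topics as 2-D feature clusters.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([-1.0, 0.0], 0.5, (50, 2)),
               rng.normal([1.0, 0.0], 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

for name, g in [("log-likelihood", grad_log_likelihood), ("k-norm", grad_knorm)]:
    W = train(X, y, 2, g)
    acc = (softmax(X @ W).argmax(axis=1) == y).mean()
    print(f"{name}: training accuracy = {acc:.2f}")
```

Only the gradient function changes between the two runs; the model family and the decoding rule (pick the highest-posterior topic) stay identical, which is the structure the paper exploits.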

Full Paper

Bibliographic reference. Li, Xiang / Jan, Ea-Ee / Wu, Cheng / Lubensky, David (2006): "Improved topic classification over maximum entropy model using k-norm based new objectives", In INTERSPEECH-2006, paper 2066-Mon2WeO.3.