In this paper we propose a novel multiclass classifier, the probabilistic linear machine (PLM), which overcomes the low-entropy problem of exponential-based classifiers. Although PLMs are linear classifiers, a careful design of the parameters, combined with weak requirements on the features, allows them to output a true probability distribution over labels for a given input instance. We cast the discriminative learning problem as linear programming, which scales to large problems on the order of millions of training samples. Our experiments on phonetic classification show that PLM achieves high entropy while maintaining accuracy comparable to other state-of-the-art classifiers.
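The core idea, that a purely linear score can itself be a valid probability distribution given suitable constraints, can be illustrated with a minimal sketch. This is a hypothetical toy construction, not the paper's actual parameterization: it assumes nonnegative features lying on the simplex and a nonnegative parameter matrix whose columns each sum to one, so the linear scores are automatically nonnegative and sum to one.

```python
import numpy as np

rng = np.random.default_rng(0)
n_labels, n_feats = 4, 6

# Nonnegative parameter matrix with each column summing to 1 (assumption
# made for this illustration; not the constraint set from the paper).
W = rng.random((n_labels, n_feats))
W /= W.sum(axis=0, keepdims=True)

# Nonnegative feature vector normalized onto the simplex.
x = rng.random(n_feats)
x /= x.sum()

# Purely linear scores -- no exponentiation or softmax -- yet the result
# is a proper distribution: sum_y (W @ x)_y = sum_j x_j * sum_y W_yj = 1.
p = W @ x
assert p.min() >= 0 and np.isclose(p.sum(), 1.0)
```

Because no exponential is applied, the output probabilities are not forced toward the low-entropy (overconfident) regime that softmax-style normalization tends to produce.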
Bibliographic reference. Lin, Hui / Bilmes, Jeff / Crammer, Koby (2009): "How to loose confidence: probabilistic linear machines for multiclass classification", in Proceedings of INTERSPEECH 2009, pp. 2559-2562.