We present a novel approach for training a multi-layered perceptron (MLP) in a semi-supervised fashion. When optimized, our objective function balances training-set accuracy against fidelity to a graph-based manifold over all points. The objective additionally favors smoothness via an entropy regularizer over classifier outputs, as well as standard ℓ2 regularization. Our approach scales well enough to enable large-scale training. The results demonstrate significant improvements over baseline MLPs on several phone classification tasks.
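The abstract does not give the exact objective, but the four terms it names (supervised accuracy, graph-manifold fidelity, output-entropy smoothness, and ℓ2 weight decay) can be sketched as a single loss. The function below is a hypothetical illustration, not the paper's formula: the term weights `alpha`, `beta`, `gamma` and the sign convention on the entropy term (here, minimizing negative entropy pushes outputs toward uniform, i.e. "smooth") are assumptions.

```python
import numpy as np

def softmax(z):
    """Row-wise softmax with a max-shift for numerical stability."""
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def semi_supervised_objective(p, y, W_graph, weights,
                              alpha=1.0, beta=0.1, gamma=1e-4):
    """Hypothetical combined loss in the spirit of the abstract.

    p        : (n, c) classifier output distributions for ALL points
               (labeled points first, then unlabeled)
    y        : (n_l, c) one-hot labels for the labeled subset
    W_graph  : (n, n) symmetric graph affinity matrix over all points
    weights  : flat vector of MLP parameters (for ℓ2 regularization)
    """
    n_l = y.shape[0]
    eps = 1e-12
    # 1) supervised term: cross-entropy on the labeled points
    sup = -np.mean(np.sum(y * np.log(p[:n_l] + eps), axis=1))
    # 2) graph term: penalize output disagreement between graph-similar points
    diff = p[:, None, :] - p[None, :, :]          # pairwise output differences
    graph = 0.5 * np.sum(W_graph * np.sum(diff ** 2, axis=2)) / W_graph.sum()
    # 3) entropy regularizer: negative entropy of the outputs; minimizing it
    #    favors high-entropy (smooth) predictions -- sign convention assumed
    neg_entropy = np.mean(np.sum(p * np.log(p + eps), axis=1))
    # 4) standard l2 weight decay
    l2 = gamma * np.sum(weights ** 2)
    return sup + alpha * graph + beta * neg_entropy + l2

# Toy usage: 4 points (2 labeled), 3 classes, fully connected graph
rng = np.random.default_rng(0)
p = softmax(rng.normal(size=(4, 3)))
y = np.eye(3)[[0, 1]]
W = np.ones((4, 4)) - np.eye(4)
loss = semi_supervised_objective(p, y, W, weights=rng.normal(size=10))
```

In practice all four terms would be differentiated jointly with respect to the MLP parameters; this sketch only evaluates the scalar objective for a fixed set of outputs.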
Bibliographic reference: Malkin, Jonathan / Subramanya, Amarnag / Bilmes, Jeff (2009): "On the semi-supervised learning of multi-layered perceptrons", in INTERSPEECH-2009, 660–663.