Symposium on Machine Learning in Speech and Language Processing (MLSLP)

Bellevue, WA, USA
June 27, 2011

On Learning Distributed Representations of Semantics

Yoshua Bengio

University of Montreal, Canada

Machine learning algorithms try to characterize configurations of variables that are plausible (similar to those seen in the training set, and predictive of those that could be seen in a test set) so as to be able to answer questions about these configurations. A general approach towards this goal is to learn *representations* of these configurations that help to generalize to new configurations. We present the statistical advantages of representations that are *distributed* and *deep* (at multiple levels of representation) and survey some of the advances in such feature learning algorithms, along with some of our recent work in this area, for natural language processing and pattern recognition. In particular, we highlight our effort to model semantics beyond single-word embeddings: capturing relations between concepts and producing models of 2-argument relations such as (subject, verb, object), viewed as (argument1, relation, argument2), that can be used to answer questions, disambiguate text, and learn from free text and knowledge bases in the same representational space.
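
As a concrete illustration (a minimal sketch, not the model described in the talk), the code below scores (argument1, relation, argument2) triples with a simple bilinear form over one embedding table shared by words from free text and entities from a knowledge base. The toy vocabulary, embedding dimension, and scoring function are all hypothetical assumptions introduced for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy vocabulary: words from free text and knowledge-base
# entities share a single embedding table (one representational space).
symbols = ["cat", "animal", "fish", "eats", "is_a", "Felis_catus"]
sym_id = {s: i for i, s in enumerate(symbols)}

d = 8                                                  # illustrative dimension
E = rng.normal(scale=0.1, size=(len(symbols), d))      # symbol embeddings
W = rng.normal(scale=0.1, size=(len(symbols), d, d))   # per-relation matrices

def score(arg1: str, relation: str, arg2: str) -> float:
    """Bilinear plausibility score for an (argument1, relation, argument2)
    triple: higher means the configuration is judged more plausible."""
    a1, r, a2 = E[sym_id[arg1]], W[sym_id[relation]], E[sym_id[arg2]]
    return float(a1 @ r @ a2)

def answer(arg1: str, relation: str, candidates: list[str]) -> str:
    """Answering a question = ranking candidate fillers for a missing
    argument and returning the highest-scoring one."""
    return max(candidates, key=lambda c: score(arg1, relation, c))

print(answer("cat", "is_a", ["animal", "fish"]))
```

In practice the parameters would be trained, e.g. with a ranking loss that scores observed triples above corrupted ones; here they are random, so the output only illustrates the interface by which such a model answers a question.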

Bibliographic reference. Bengio, Yoshua (2011): "On learning distributed representations of semantics", In MLSLP-2011 (abstract).