Symposium on Machine Learning in Speech and Language Processing (MLSLP)
Portland, Oregon, USA
I will review the key concepts and techniques in kernel-based learning that are relevant to extending and advancing deep learning architectures and algorithms, and will then survey and analyze their counterparts in deep learning that are relevant to extending kernel methods. I will elaborate on how the generally finite-dimensional (hidden) features in deep networks can be transformed into infinite-dimensional features via the kernel trick without incurring computational or regularization difficulties. I will also offer insight into how the use of deep architectures can overcome the potential limitation of the linear pattern functions associated with the kernel feature space. These insights lead to integrated kernel and deep learning architectures with interesting new regularization properties and hyper-parameter sets that are distinct from those of separate kernel and deep learning methods. The relevance of these new machine learning methods to speech and language processing will be discussed.
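As a minimal illustration of the kernel trick mentioned above (an assumption-laden sketch, not the talk's actual architecture): given finite-dimensional hidden-layer features, an RBF kernel yields inner products in an implicit infinite-dimensional feature space without ever materializing those features. The hidden-feature matrix `H` below is a hypothetical stand-in for a deep network's layer output.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    """RBF (Gaussian) kernel: inner products in an implicit
    infinite-dimensional feature space, computed without ever
    constructing those features explicitly (the kernel trick)."""
    # Squared Euclidean distances between all rows of X and all rows of Y.
    sq = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * sq)

# Hypothetical hidden features from a deep network: 4 samples, 3 hidden units.
rng = np.random.default_rng(0)
H = rng.standard_normal((4, 3))

# K[i, j] is the inner product of the infinite-dimensional feature maps
# phi(h_i) and phi(h_j); K is symmetric positive semi-definite.
K = rbf_kernel(H, H)
```

A subsequent linear pattern function in that implicit feature space (e.g., kernel ridge regression on `K`) is where, per the abstract, deep architectures can compensate for the linearity restriction.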
Bibliographic reference. Deng, Li (2012): "Learning deep architectures using kernel modules", In MLSLP-2012 (abstract).