Human language is messy, and machine learning has done a lot to tame this messiness. Language processing has many facets, and while the common approach is to run a collection of component systems in a pipeline, there is mounting evidence that this is a bad idea. Enter transfer learning and multitask learning. Unfortunately, transfer learning for language raises many open-ended problems, owing to the sorts of data and annotations we have easy access to in this domain. This talk will highlight some successful attempts to use transfer learning (generative and otherwise) in language, but will also devote a good deal of attention to what remains unsolved, pointing to some promising current avenues of research.
Cite as: Daumé, H. (2012) Transfer learning in language. Proc. Machine Learning in Speech and Language Processing (MLSLP 2012)
@inproceedings{daume12_mlslp,
  author    = {Hal Daumé},
  title     = {{Transfer learning in language}},
  year      = {2012},
  booktitle = {Proc. Machine Learning in Speech and Language Processing (MLSLP 2012)}
}