Learning Natural Language Interfaces with Neural Models

Mirella Lapata


In Spike Jonze’s futuristic film “Her”, Theodore, a lonely writer, forms a strong emotional bond with Samantha, an operating system designed to meet his every need. Samantha can carry on seamless conversations with Theodore, exhibits a perfect command of language, and is able to take on complex tasks. She filters his emails for importance, helping him cope with information overload; she proactively arranges the publication of Theodore’s letters; and she gives advice drawing on common sense and reasoning skills.

In this talk I will present an overview of recent progress on learning natural language interfaces which might not be as clever as Samantha but nevertheless allow users to interact with various devices and services using everyday language. I will address the structured prediction problem of mapping natural language utterances onto machine-interpretable representations and outline the various challenges it poses: the mapping from natural language to formal language is highly non-isomorphic, data for model training is scarce, and natural language can express the same information need in many different ways. I will describe a general modeling framework based on neural networks which tackles these challenges and improves the robustness of natural language interfaces.
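To make the structured prediction setup concrete, the sketch below shows the mechanics of an encoder-decoder mapping from an utterance to tokens of a formal meaning representation. This is an illustrative toy only, not the model from the talk: the vocabularies, dimensionality, and weights are invented here, and the weights are random rather than trained, so the emitted formal sequence is meaningless; what it illustrates is the two-stage encode-then-decode pipeline that neural semantic parsers share.

```python
# Toy encoder-decoder sketch (illustrative, not the talk's actual model).
# All names, vocabularies, and parameters below are hypothetical; real
# systems learn the weights from utterance/meaning-representation pairs.
import random

random.seed(0)

NL_VOCAB = ["what", "is", "the", "capital", "of", "france"]
MR_VOCAB = ["answer", "(", "capital", "france", ")", "<eos>"]
DIM = 8  # size of the toy vector representations


def rand_vec(n):
    return [random.uniform(-1.0, 1.0) for _ in range(n)]


# Stand-ins for trained parameters: an embedding per input word and a
# scoring vector per output token.
embed = {w: rand_vec(DIM) for w in NL_VOCAB}
out_w = {t: rand_vec(DIM) for t in MR_VOCAB}


def encode(utterance):
    """Bag-of-words encoder: average the embeddings of known words."""
    vecs = [embed[w] for w in utterance if w in embed]
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(DIM)]


def decode(state, max_len=10):
    """Greedy decoder: emit the highest-scoring token until <eos>."""
    out = []
    for _ in range(max_len):
        scores = {t: sum(a * b for a, b in zip(state, out_w[t]))
                  for t in MR_VOCAB}
        tok = max(scores, key=scores.get)
        if tok == "<eos>":
            break
        out.append(tok)
        # A real model updates the decoder state with a learned recurrence;
        # here we subtract the emitted token's vector so the same token
        # does not trivially dominate every step.
        state = [s - w for s, w in zip(state, out_w[tok])]
    return out


mr = decode(encode(["what", "is", "the", "capital", "of", "france"]))
print(mr)  # an (untrained, hence arbitrary) sequence of formal-language tokens
```

With trained weights, the same loop would produce a well-formed logical form such as `answer(capital(france))`; the non-isomorphism challenge mentioned above is visible even here, since the output token sequence need not align word-for-word with the input.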


Cite as: Lapata, M. (2019) Learning Natural Language Interfaces with Neural Models. Proc. Interspeech 2019.


@inproceedings{Lapata2019,
  author={Mirella Lapata},
  title={{Learning Natural Language Interfaces with Neural Models}},
  year=2019,
  booktitle={Proc. Interspeech 2019}
}