Natural Language Generation: Creating Text

Claire Gardent


Natural Language Generation (NLG) aims to create text from some input (data, text, or a meaning representation) and some communicative goal (summarising, verbalising, comparing, etc.). In the pre-neural era, differing input types and communicative goals led to distinct computational models. In contrast, deep learning encoder-decoder models introduced a paradigm shift in that they provide a unifying framework for all NLG tasks. In my talk, I will start by briefly introducing the three main types of input considered in NLG. I will then give an overview of how neural models handle these inputs and present some of the work we did on generating text from meaning representations, from data and from text.
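The unifying framework rests on a simple observation: all three input types can be linearized into token sequences, so a single encoder-decoder architecture can consume any of them. The toy sketch below is illustrative only (the linearization schemes and example inputs are assumptions, not taken from the talk): it shows data (subject-predicate-object triples), a nested meaning representation, and plain text each reduced to the flat token sequence an encoder expects.

```python
# Hypothetical illustration: three NLG input types linearized into
# token sequences, the common interface of encoder-decoder models.

def linearize_data(triples):
    """Flatten (subject, predicate, object) triples into tokens,
    marking roles with special symbols."""
    tokens = []
    for s, p, o in triples:
        tokens += ["<S>", s, "<P>", p, "<O>", o]
    return tokens

def linearize_mr(mr):
    """Depth-first linearization of a nested (predicate, arguments)
    meaning representation."""
    pred, args = mr
    tokens = ["(", pred]
    for a in args:
        tokens += linearize_mr(a) if isinstance(a, tuple) else [a]
    return tokens + [")"]

def linearize_text(text):
    """Text input: simple whitespace tokenization."""
    return text.split()

# All three inputs yield flat token sequences for the same encoder:
data = [("Alan_Turing", "birthPlace", "London")]
mr = ("want", ["boy", ("go", ["boy"])])
text = "The boy wants to go."

print(linearize_data(data))
print(linearize_mr(mr))
print(linearize_text(text))
```

In practice the token sequences would be embedded and fed to a neural encoder, with a decoder generating the output text; the point here is only that the input-side interface is identical across tasks.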


Cite as: Gardent, C. (2019) Natural Language Generation: Creating Text. Proc. 10th ISCA Speech Synthesis Workshop.


@inproceedings{Gardent2019,
  author={Claire Gardent},
  title={{Natural Language Generation: Creating Text}},
  year=2019,
  booktitle={Proc. 10th ISCA Speech Synthesis Workshop}
}