Joint Syntactic and Semantic Analysis with a Multitask Deep Learning Framework for Spoken Language Understanding

Jérémie Tafforeau, Frédéric Béchet, Thierry Artières, Benoît Favre


Spoken Language Understanding (SLU) models have to deal with Automatic Speech Recognition (ASR) outputs, which are prone to errors. Most SLU models sidestep this issue by predicting semantic labels directly from words, without any deep linguistic analysis. This is acceptable when enough training data is available to train SLU models in a supervised way. For open-domain SLU, however, such annotated corpora are not easily available or are very expensive to obtain, and generic syntactic and semantic models, such as dependency parsing, Semantic Role Labeling (SRL), or FrameNet parsing, are good candidates, provided they can be applied to noisy ASR transcriptions with enough robustness. To tackle this issue we present an RNN-based architecture that performs joint syntactic and semantic parsing on noisy ASR outputs. Experiments carried out on a corpus of French spoken conversations collected in a telephone call centre show that our strategy improves over the standard pipeline approach while allowing much more flexibility in model design and optimization.
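The joint architecture described in the abstract can be pictured as a single shared recurrent encoder whose hidden states feed several task-specific classification layers (e.g. one head for syntactic tags, one for semantic frame labels). The following is a minimal NumPy sketch of that multitask idea, not the authors' actual model: the plain Elman recurrence, the dimensions, and the task names are all illustrative assumptions, and training is omitted entirely.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    """Row-wise softmax, numerically stabilised."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class MultitaskRNNTagger:
    """Shared Elman RNN encoder with one softmax head per tagging task."""

    def __init__(self, vocab_size, emb_dim, hidden_dim, task_sizes):
        self.E = rng.normal(0, 0.1, (vocab_size, emb_dim))      # word embeddings
        self.Wx = rng.normal(0, 0.1, (emb_dim, hidden_dim))     # input -> hidden
        self.Wh = rng.normal(0, 0.1, (hidden_dim, hidden_dim))  # hidden -> hidden
        # One linear head per task (task names are hypothetical examples)
        self.heads = {name: rng.normal(0, 0.1, (hidden_dim, n))
                      for name, n in task_sizes.items()}

    def forward(self, token_ids):
        h = np.zeros(self.Wh.shape[0])
        states = []
        for t in token_ids:                      # simple Elman recurrence
            h = np.tanh(self.E[t] @ self.Wx + h @ self.Wh)
            states.append(h)
        H = np.stack(states)                     # (seq_len, hidden_dim)
        # Every task reads the SAME shared states: this is the multitask coupling
        return {name: softmax(H @ W) for name, W in self.heads.items()}

tagger = MultitaskRNNTagger(vocab_size=100, emb_dim=16, hidden_dim=32,
                            task_sizes={"syntax": 10, "frame": 5})
out = tagger.forward([4, 17, 42])  # one tag distribution per token, per task
```

Because all heads share the encoder, a single forward pass yields predictions for every task, which is what gives the joint model its flexibility over a pipeline of separately trained analysers.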


DOI: 10.21437/Interspeech.2016-851

Cite as

Tafforeau, J., Béchet, F., Artières, T., Favre, B. (2016) Joint Syntactic and Semantic Analysis with a Multitask Deep Learning Framework for Spoken Language Understanding. Proc. Interspeech 2016, 3260-3264.

Bibtex
@inproceedings{Tafforeau+2016,
author={J{\'e}r{\'e}mie Tafforeau and Fr{\'e}d{\'e}ric B{\'e}chet and Thierry Arti{\`e}res and Beno{\^i}t Favre},
title={Joint Syntactic and Semantic Analysis with a Multitask Deep Learning Framework for Spoken Language Understanding},
year=2016,
booktitle={Interspeech 2016},
doi={10.21437/Interspeech.2016-851},
url={http://dx.doi.org/10.21437/Interspeech.2016-851},
pages={3260--3264}
}