End-to-end training of deep learning-based models allows intermediate representations to be learned implicitly from the final task loss. However, the end-to-end approach ignores the useful domain knowledge encoded in explicit intermediate-level supervision. We hypothesize that using intermediate representations as auxiliary supervision at lower levels of deep networks may combine the advantages of end-to-end training and more traditional pipeline approaches. We present experiments on conversational speech recognition in which lower-level tasks, such as phoneme recognition, serve as auxiliary objectives in multitask training of an encoder-decoder model for direct character transcription. We compare multiple types of lower-level tasks and analyze their effects. Our results on the Switchboard corpus show that this approach improves recognition accuracy over a standard encoder-decoder model on the Eval2000 test set.
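The core idea, attaching an auxiliary lower-level loss to an intermediate encoder layer and training it jointly with the final character-level loss, can be illustrated with a minimal PyTorch sketch. This is not the paper's exact architecture: the class and parameter names (MultitaskEncoder, aux_weight, the tap position) are illustrative assumptions, a linear character head stands in for the paper's attention decoder, and frame-level cross-entropy targets keep the example self-contained.

# Minimal multitask sketch: a stacked-LSTM encoder whose intermediate
# layer also feeds an auxiliary phoneme classifier, trained jointly with
# the final character-level objective. All sizes are illustrative.
import torch
import torch.nn as nn

class MultitaskEncoder(nn.Module):
    def __init__(self, n_feats=40, hidden=256, n_chars=30, n_phones=45):
        super().__init__()
        # Lower encoder layers: tapped for the auxiliary phoneme task.
        self.lower = nn.LSTM(n_feats, hidden, num_layers=2, batch_first=True)
        # Upper encoder layers: feed the final character-level objective.
        self.upper = nn.LSTM(hidden, hidden, num_layers=2, batch_first=True)
        self.phone_head = nn.Linear(hidden, n_phones)  # auxiliary supervision
        self.char_head = nn.Linear(hidden, n_chars)    # stand-in for a decoder

    def forward(self, x):
        low, _ = self.lower(x)       # (B, T, hidden), intermediate states
        high, _ = self.upper(low)    # (B, T, hidden), top-level states
        return self.char_head(high), self.phone_head(low)

def multitask_loss(char_logits, phone_logits, char_tgt, phone_tgt,
                   aux_weight=0.3):
    # Joint objective: final-task loss plus a down-weighted auxiliary loss.
    ce = nn.CrossEntropyLoss()
    char_loss = ce(char_logits.flatten(0, 1), char_tgt.flatten())
    phone_loss = ce(phone_logits.flatten(0, 1), phone_tgt.flatten())
    return char_loss + aux_weight * phone_loss

# Toy usage: random filterbank-like features and frame-level targets.
model = MultitaskEncoder()
x = torch.randn(4, 50, 40)                 # batch of 4 utterances, 50 frames
char_tgt = torch.randint(0, 30, (4, 50))
phone_tgt = torch.randint(0, 45, (4, 50))
char_logits, phone_logits = model(x)
loss = multitask_loss(char_logits, phone_logits, char_tgt, phone_tgt)
loss.backward()                            # gradients flow through both heads

Because the phoneme gradient enters below the upper encoder layers, the auxiliary task shapes only the lower-level representations, which is the division of labor the abstract hypothesizes.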
Cite as: Toshniwal, S., Tang, H., Lu, L., Livescu, K. (2017) Multitask Learning with Low-Level Auxiliary Tasks for Encoder-Decoder Based Speech Recognition. Proc. Interspeech 2017, 3532-3536, doi: 10.21437/Interspeech.2017-1118
@inproceedings{toshniwal17_interspeech,
  author={Shubham Toshniwal and Hao Tang and Liang Lu and Karen Livescu},
  title={{Multitask Learning with Low-Level Auxiliary Tasks for Encoder-Decoder Based Speech Recognition}},
  year={2017},
  booktitle={Proc. Interspeech 2017},
  pages={3532--3536},
  doi={10.21437/Interspeech.2017-1118}
}