We present a recurrent encoder-decoder deep neural network architecture
that directly translates speech in one language into text in another.
The model does not explicitly transcribe the speech into text in the
source language, nor does it require supervision from the ground truth
source language transcription during training. We apply a slightly
modified sequence-to-sequence architecture with attention that has
previously been used for speech recognition, and show that it can be
repurposed for this more complex task, illustrating the power of attention-based
models.
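
To make the architecture concrete, the following is a minimal sketch of such a direct speech-to-translation model in PyTorch. It is an illustrative reconstruction, not the authors' implementation: the bidirectional LSTM encoder, the additive (Bahdanau-style) attention, the single-layer LSTM decoder, the 80-dimensional filterbank input, and all layer sizes are assumptions made for the example.

```python
import torch
import torch.nn as nn

class AttentionDecoder(nn.Module):
    """Single-layer LSTM decoder with additive (Bahdanau-style) attention."""
    def __init__(self, enc_dim, dec_dim, vocab_size):
        super().__init__()
        self.attn_enc = nn.Linear(enc_dim, dec_dim)
        self.attn_dec = nn.Linear(dec_dim, dec_dim)
        self.attn_v = nn.Linear(dec_dim, 1)
        self.embed = nn.Embedding(vocab_size, dec_dim)
        self.cell = nn.LSTMCell(dec_dim + enc_dim, dec_dim)
        self.out = nn.Linear(dec_dim, vocab_size)
        self.dec_dim = dec_dim

    def forward(self, enc_out, targets):
        # Teacher-forced decoding: `targets` holds the previous output
        # characters, beginning with a start-of-sequence token.
        b = enc_out.size(0)
        h = enc_out.new_zeros(b, self.dec_dim)
        c = enc_out.new_zeros(b, self.dec_dim)
        logits = []
        for t in range(targets.size(1)):
            # Attention weights over all encoder frames.
            scores = self.attn_v(torch.tanh(
                self.attn_enc(enc_out) + self.attn_dec(h).unsqueeze(1)))
            alpha = torch.softmax(scores, dim=1)      # (B, T, 1)
            context = (alpha * enc_out).sum(dim=1)    # (B, enc_dim)
            h, c = self.cell(
                torch.cat([self.embed(targets[:, t]), context], -1), (h, c))
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)             # (B, U, vocab)

class Seq2SeqSpeechTranslator(nn.Module):
    """Maps source-language filterbank frames directly to target-language
    characters, with no intermediate source-language transcription."""
    def __init__(self, n_mel=80, enc_dim=256, dec_dim=256, vocab_size=100):
        super().__init__()
        self.encoder = nn.LSTM(n_mel, enc_dim, num_layers=2,
                               bidirectional=True, batch_first=True)
        self.decoder = AttentionDecoder(2 * enc_dim, dec_dim, vocab_size)

    def forward(self, frames, targets):
        enc_out, _ = self.encoder(frames)  # (B, T, 2*enc_dim)
        return self.decoder(enc_out, targets)
```

Note that the decoder attends directly over acoustic encoder states, so the model learns alignment and translation jointly rather than through an explicit source transcript.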
A single model trained end-to-end obtains state-of-the-art performance
on the Fisher Callhome Spanish-English speech translation task, outperforming
a cascade of independently trained sequence-to-sequence speech recognition
and machine translation models by 1.8 BLEU points on the Fisher test
set. In addition, we find that exploiting the training data in both
languages, by jointly training sequence-to-sequence speech translation
and recognition models with a shared encoder network in a multi-task
setup, improves performance by a further 1.4 BLEU points.
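
A corresponding sketch of the multi-task variant is below, reusing the AttentionDecoder class from the sketch above. Sharing only the encoder between the two tasks and summing the two cross-entropy losses with equal weights are assumptions for illustration; the abstract does not specify the loss weighting.

```python
import torch.nn as nn

class MultiTaskSpeechModel(nn.Module):
    """One shared speech encoder feeding two attention decoders:
    English translation and Spanish recognition."""
    def __init__(self, n_mel=80, enc_dim=256, dec_dim=256,
                 en_vocab=100, es_vocab=100):
        super().__init__()
        # Shared encoder over source-language (Spanish) filterbank frames.
        self.encoder = nn.LSTM(n_mel, enc_dim, num_layers=2,
                               bidirectional=True, batch_first=True)
        # Task-specific decoders (AttentionDecoder from the sketch above).
        self.translate = AttentionDecoder(2 * enc_dim, dec_dim, en_vocab)
        self.recognize = AttentionDecoder(2 * enc_dim, dec_dim, es_vocab)

    def forward(self, frames, en_targets, es_targets):
        enc_out, _ = self.encoder(frames)
        en_logits = self.translate(enc_out, en_targets)
        es_logits = self.recognize(enc_out, es_targets)
        ce = nn.CrossEntropyLoss()
        # Unweighted sum of the two task losses; equal weighting is an
        # assumption, as the abstract does not give the actual weights.
        return (ce(en_logits.flatten(0, 1), en_targets.flatten())
                + ce(es_logits.flatten(0, 1), es_targets.flatten()))
```

Because both losses backpropagate into the same encoder, the recognition task acts as an auxiliary signal that shapes the shared acoustic representation used for translation.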
Cite as: Weiss, R.J., Chorowski, J., Jaitly, N., Wu, Y., Chen, Z. (2017) Sequence-to-Sequence Models Can Directly Translate Foreign Speech. Proc. Interspeech 2017, 2625-2629, doi: 10.21437/Interspeech.2017-503
@inproceedings{weiss17_interspeech,
  author={Ron J. Weiss and Jan Chorowski and Navdeep Jaitly and Yonghui Wu and Zhifeng Chen},
  title={{Sequence-to-Sequence Models Can Directly Translate Foreign Speech}},
  year=2017,
  booktitle={Proc. Interspeech 2017},
  pages={2625--2629},
  doi={10.21437/Interspeech.2017-503}
}