Attention-Based Word Vector Prediction with LSTMs and its Application to the OOV Problem in ASR

Alejandro Coucheiro-Limeres, Fernando Fernández-Martínez, Rubén San-Segundo, Javier Ferreiros-López


We propose three architectures for a word vector prediction system (WVPS) built with LSTMs that consider both the past and future contexts of a word to predict a vector in an embedded space whose surrounding area is semantically related to the considered word. In one of the architectures we introduce an attention mechanism, so the system can assess the specific contribution of each context word to the prediction. All the architectures are trained under the same conditions and on the same training material, presenting the data in a curriculum-learning fashion. For the inputs, we employ pre-trained word embeddings. We evaluate the systems after the same number of training steps on two corpora of ground-truth speech transcriptions in Spanish: TCSTAR and the TV recordings used in the Search on Speech Challenge of IberSPEECH 2018. The results show significant differences between the architectures, consistent across both corpora. The attention-based architecture achieves the best results, suggesting its adequacy for the task. We also illustrate the usefulness of the systems for resolving out-of-vocabulary (OOV) regions flagged by an ASR system capable of detecting OOV occurrences.
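The core idea of the attention-based variant can be illustrated with a minimal, library-agnostic sketch: score each context embedding against a summary state (e.g. an LSTM state), normalize the scores with a softmax into attention weights, and predict the target word vector from the weighted sum of context embeddings. This is only a schematic sketch under assumed shapes and a hypothetical `attend_and_predict` helper, not the authors' actual model.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def attend_and_predict(context_embs, query, W_out):
    """Attention-weighted word vector prediction (illustrative only).

    context_embs: (n, d) pre-trained embeddings of past+future context words
    query:        (d,)   summary state used to score each context word
    W_out:        (d, d) hypothetical projection into the target embedding space
    """
    scores = context_embs @ query        # one relevance score per context word
    weights = softmax(scores)            # attention weights, summing to 1
    context_vec = weights @ context_embs # weighted combination of contexts
    return W_out @ context_vec, weights

# Toy usage with random data, assumed dimensions d=8 and n=5 context words.
rng = np.random.default_rng(0)
d, n = 8, 5
ctx = rng.normal(size=(n, d))
q = rng.normal(size=d)
W = np.eye(d)
pred, w = attend_and_predict(ctx, q, W)
```

In the paper's setting, the predicted vector would then be matched against the vocabulary's embedding space (e.g. by nearest neighbors) to propose candidates for an OOV region.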


 DOI: 10.21437/Interspeech.2019-2347

Cite as: Coucheiro-Limeres, A., Fernández-Martínez, F., San-Segundo, R., Ferreiros-López, J. (2019) Attention-Based Word Vector Prediction with LSTMs and its Application to the OOV Problem in ASR. Proc. Interspeech 2019, 3520-3524, DOI: 10.21437/Interspeech.2019-2347.


@inproceedings{Coucheiro-Limeres2019,
  author={Alejandro Coucheiro-Limeres and Fernando Fernández-Martínez and Rubén San-Segundo and Javier Ferreiros-López},
  title={{Attention-Based Word Vector Prediction with LSTMs and its Application to the OOV Problem in ASR}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={3520--3524},
  doi={10.21437/Interspeech.2019-2347},
  url={http://dx.doi.org/10.21437/Interspeech.2019-2347}
}