Empirical Evaluation of Sequence-to-Sequence Models for Word Discovery in Low-Resource Settings

Marcely Zanon Boito, Aline Villavicencio, Laurent Besacier


Since Bahdanau et al. [1] first introduced attention for neural machine translation, most sequence-to-sequence models have made use of attention mechanisms [2, 3, 4]. While they produce soft-alignment matrices that could be interpreted as alignments between target and source languages, we lack metrics to quantify their quality, and it remains unclear which approach produces the best alignments. This paper presents an empirical evaluation of three of the main sequence-to-sequence architectures (CNN-, RNN- and Transformer-based) for word discovery from unsegmented phoneme sequences. This task consists in aligning word sequences in a source language with phoneme sequences in a target language, inferring from this alignment a word segmentation on the target side [5]. Evaluating word segmentation quality can be seen as an extrinsic evaluation of the soft-alignment matrices produced during training. Our experiments in a low-resource scenario on the Mboshi and English languages (both aligned to French) show that RNNs surprisingly outperform CNNs and Transformers for this task. Our results are confirmed by an intrinsic evaluation of alignment quality through the use of Average Normalized Entropy (ANE). Lastly, we improve our best word discovery model by using an alignment entropy confidence measure that accumulates ANE over all the occurrences of a given alignment pair in the collection.
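As an illustration of the intrinsic measure mentioned above, the sketch below computes a normalized-entropy score for a soft-alignment matrix. It is a minimal interpretation, assuming ANE is the per-row attention entropy normalized by its maximum value (log of the number of source positions) and averaged over target positions; the exact formulation in the paper may differ, and the example matrices are invented for illustration.

```python
import numpy as np

def average_normalized_entropy(attention):
    """Average Normalized Entropy (ANE) of a soft-alignment matrix.

    attention: (T, S) array where each row is one target token's
    attention distribution over S source tokens (rows sum to 1).
    Each row's entropy is normalized by log(S), its maximum possible
    value, so the result lies in [0, 1]: lower values indicate
    sharper, more confident alignments.
    """
    eps = 1e-12  # guard against log(0) for zero attention weights
    num_source = attention.shape[1]
    row_entropy = -np.sum(attention * np.log(attention + eps), axis=1)
    return float(np.mean(row_entropy / np.log(num_source)))

# A sharply peaked alignment yields a low ANE; a uniform one is close to 1.
sharp = np.array([[0.98, 0.01, 0.01],
                  [0.01, 0.98, 0.01]])
uniform = np.full((2, 3), 1.0 / 3.0)
```

Under this reading, accumulating the score over all occurrences of an alignment pair, as the paper proposes, would simply average `average_normalized_entropy` over every sentence pair in which that pair appears.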


DOI: 10.21437/Interspeech.2019-2029

Cite as: Boito, M.Z., Villavicencio, A., Besacier, L. (2019) Empirical Evaluation of Sequence-to-Sequence Models for Word Discovery in Low-Resource Settings. Proc. Interspeech 2019, 2688-2692, DOI: 10.21437/Interspeech.2019-2029.


@inproceedings{Boito2019,
  author={Marcely Zanon Boito and Aline Villavicencio and Laurent Besacier},
  title={{Empirical Evaluation of Sequence-to-Sequence Models for Word Discovery in Low-Resource Settings}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={2688--2692},
  doi={10.21437/Interspeech.2019-2029},
  url={http://dx.doi.org/10.21437/Interspeech.2019-2029}
}