Compositional Neural Network Language Models for Agglutinative Languages

Ebru Arisoy, Murat Saraclar


Continuous space language models (CSLMs) have proven successful in speech recognition. With properly trained word embeddings, semantically or syntactically related words are expected to map to nearby locations in the continuous space. In agglutinative languages, words are formed by concatenating stems and suffixes, so compositional modeling is important. However, when trained on word tokens, CSLMs do not explicitly exploit this structure. In this paper, we explore compositional modeling of stems and suffixes in a long short-term memory (LSTM) neural network language model. Our proposed models jointly learn distributed representations for stems and endings (concatenations of suffixes) and predict the probability of stem and ending sequences. Experiments on a Turkish broadcast news transcription task show that the proposed models yield further gains on top of a state-of-the-art stem-ending-based n-gram language model.
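To make the stem/ending decomposition concrete, the sketch below is a toy illustration, not the paper's model: it splits hypothetically segmented Turkish words (using an assumed `+` marker and invented example words) into interleaved stem and ending tokens, then trains an add-one-smoothed bigram model over that token stream in place of the paper's LSTM.

```python
from collections import defaultdict

def segment(word):
    """Split a segmented word 'stem+ending' into stem and ending tokens.

    Words without a '+' marker contribute only a stem token. The 'S:'
    and 'E:' prefixes keep stem and ending vocabularies distinct.
    """
    stem, _, ending = word.partition("+")
    return ["S:" + stem] + (["E:" + ending] if ending else [])

def to_subword_stream(sentence):
    # A word sequence becomes an interleaved stem/ending token stream,
    # as in stem-ending-based language models.
    return [tok for w in sentence.split() for tok in segment(w)]

class BigramLM:
    """Toy add-one-smoothed bigram LM over stem/ending tokens.

    Stands in for the paper's LSTM purely to show how probabilities
    are assigned to stem and ending sequences.
    """
    def __init__(self, corpus):
        self.counts = defaultdict(lambda: defaultdict(int))
        self.vocab = set()
        for sent in corpus:
            toks = ["<s>"] + to_subword_stream(sent) + ["</s>"]
            self.vocab.update(toks)
            for prev, cur in zip(toks, toks[1:]):
                self.counts[prev][cur] += 1

    def prob(self, prev, cur):
        c = self.counts[prev]
        return (c[cur] + 1) / (sum(c.values()) + len(self.vocab))

# Hypothetical pre-segmented corpus ("stayed at home", "went home").
corpus = ["ev+de kal+dı", "ev+e git+ti"]
lm = BigramLM(corpus)
print(to_subword_stream("ev+de kal+dı"))
print(lm.prob("S:ev", "E:de"))
```

Each segmented word contributes two tokens to the stream, so the model predicts an ending conditioned on its stem, which is the factorization the abstract refers to.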


DOI: 10.21437/Interspeech.2016-1239

Cite as

Arisoy, E., Saraclar, M. (2016) Compositional Neural Network Language Models for Agglutinative Languages. Proc. Interspeech 2016, 3494-3498.

Bibtex
@inproceedings{Arisoy+2016,
author={Ebru Arisoy and Murat Saraclar},
title={Compositional Neural Network Language Models for Agglutinative Languages},
year={2016},
booktitle={Interspeech 2016},
doi={10.21437/Interspeech.2016-1239},
url={http://dx.doi.org/10.21437/Interspeech.2016-1239},
pages={3494--3498}
}