An Investigation of Recurrent Neural Network Architectures Using Word Embeddings for Phrase Break Prediction

Anandaswarup Vadapalli, Suryakanth V. Gangashetty


This paper presents our investigation of recurrent neural network (RNN) architectures for the phrase break prediction task. With the advent of deep learning, there have been attempts to apply deep neural networks (DNNs) to phrase break prediction. While DNNs effectively capture dependencies across features, they lack the ability to capture long-term temporal dependencies. RNNs, on the other hand, can model such long-term temporal relations and are thus better suited to tasks where sequences must be modeled. We model phrase break prediction as a sequence labeling task, and show by means of experimental results that RNNs outperform conventional DNN systems at phrase break prediction.
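To make the sequence-labeling framing concrete, the sketch below (not the paper's implementation; all layer sizes and weights are illustrative assumptions) shows a simple Elman-style RNN that consumes word embeddings and emits one break/no-break label per word, so each prediction can depend on the words seen so far:

```python
import numpy as np

# Illustrative sketch, NOT the authors' system: an untrained Elman RNN that
# labels each word in a sentence as break (1) or no-break (0).
rng = np.random.default_rng(0)

vocab_size, embed_dim, hidden_dim, n_labels = 100, 16, 32, 2

# Randomly initialized parameters (hypothetical sizes, for illustration only).
E = rng.normal(0, 0.1, (vocab_size, embed_dim))     # word embedding table
W_xh = rng.normal(0, 0.1, (embed_dim, hidden_dim))  # input -> hidden
W_hh = rng.normal(0, 0.1, (hidden_dim, hidden_dim)) # hidden -> hidden (recurrence)
W_hy = rng.normal(0, 0.1, (hidden_dim, n_labels))   # hidden -> label logits

def predict_breaks(word_ids):
    """Sequence labeling: return one break/no-break label per input word."""
    h = np.zeros(hidden_dim)
    labels = []
    for w in word_ids:
        # The recurrent state h carries context from earlier words forward.
        h = np.tanh(E[w] @ W_xh + h @ W_hh)
        logits = h @ W_hy
        probs = np.exp(logits) / np.exp(logits).sum()  # softmax over 2 labels
        labels.append(int(probs.argmax()))
    return labels

sentence = [5, 42, 7, 99]  # a toy sequence of word ids
print(predict_breaks(sentence))  # one label per word
```

In a trained system the weights would of course be learned (e.g. by backpropagation through time), and the word embeddings could be pre-trained; the point of the sketch is only the per-timestep labeling structure that distinguishes the RNN formulation from a per-word DNN classifier.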


DOI: 10.21437/Interspeech.2016-885

Cite as

Vadapalli, A., Gangashetty, S.V. (2016) An Investigation of Recurrent Neural Network Architectures Using Word Embeddings for Phrase Break Prediction. Proc. Interspeech 2016, 2308-2312.

Bibtex
@inproceedings{Vadapalli+2016,
author={Anandaswarup Vadapalli and Suryakanth V. Gangashetty},
title={An Investigation of Recurrent Neural Network Architectures Using Word Embeddings for Phrase Break Prediction},
year=2016,
booktitle={Interspeech 2016},
doi={10.21437/Interspeech.2016-885},
url={http://dx.doi.org/10.21437/Interspeech.2016-885},
pages={2308--2312}
}