Sequence Summarizing Neural Networks for Spoken Language Recognition

Jan Pešán, Lukáš Burget, Jan Černocký


This paper explores the use of Sequence Summarizing Neural Networks (SSNNs), a variant of deep neural networks (DNNs), for classifying sequences. In this work, they are applied to the task of spoken language recognition. Unlike other classification tasks in speech processing, where the DNN needs to produce a per-frame output, the language is considered constant over an utterance. We introduce a summarization component into the DNN structure that produces one set of language posteriors per utterance. The DNN is trained with an appropriately modified gradient-descent algorithm. In our initial experiments, the SSNN results are compared to a single state-of-the-art i-vector based baseline system of similar complexity (i.e. no system fusion, etc.). For some conditions, the SSNN is able to provide performance comparable to the baseline system. A relative improvement of up to 30% is obtained with score-level fusion of the baseline and SSNN systems.
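The abstract only outlines the architecture: a frame-level network, a summarization component that collapses the frame sequence into a single vector, and an utterance-level classifier emitting one set of language posteriors. A minimal NumPy sketch of this idea follows; the layer sizes, mean-pooling as the summarization operation, and random weights are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical dimensions: 40-dim frame features, 64 hidden units, 5 languages.
F, H, L = 40, 64, 5
W1 = rng.standard_normal((F, H)) * 0.1   # frame-level layer (assumed single layer)
W2 = rng.standard_normal((H, L)) * 0.1   # utterance-level classifier

def ssnn_forward(frames):
    """frames: (T, F) array of per-frame features for one utterance."""
    h = relu(frames @ W1)    # per-frame hidden activations, shape (T, H)
    s = h.mean(axis=0)       # summarization: pool over all T frames into one vector
    return softmax(s @ W2)   # one vector of language posteriors per utterance

utterance = rng.standard_normal((200, F))  # a 200-frame utterance
posteriors = ssnn_forward(utterance)       # shape (L,), sums to 1
```

Because the pooling step is differentiable, gradients flow from the single utterance-level loss back through every frame, which is why a modified gradient-descent procedure suffices for training.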


DOI: 10.21437/Interspeech.2016-764

Cite as

Pešán, J., Burget, L., Černocký, J. (2016) Sequence Summarizing Neural Networks for Spoken Language Recognition. Proc. Interspeech 2016, 3285-3288.

Bibtex
@inproceedings{Pešán+2016,
  author={Jan Pešán and Lukáš Burget and Jan Černocký},
  title={Sequence Summarizing Neural Networks for Spoken Language Recognition},
  year={2016},
  booktitle={Interspeech 2016},
  doi={10.21437/Interspeech.2016-764},
  url={http://dx.doi.org/10.21437/Interspeech.2016-764},
  pages={3285--3288}
}