Automatic Pronunciation Generation by Utilizing a Semi-Supervised Deep Neural Networks

Naoya Takahashi, Tofigh Naghibi, Beat Pfister


Phonemic or phonetic sub-word units are the most commonly used atomic elements for representing speech signals in modern ASR systems. However, they are not the optimal choice, for several reasons: the large amount of effort required to handcraft a pronunciation dictionary, pronunciation variations, human mistakes, and under-resourced dialects and languages. Here, we propose a data-driven pronunciation estimation and acoustic modeling method which takes only the orthographic transcription to jointly estimate a set of sub-word units and a reliable dictionary. Experimental results show that the proposed method, which is based on semi-supervised training of a deep neural network, largely outperforms phoneme-based continuous speech recognition on the TIMIT dataset.


DOI: 10.21437/Interspeech.2016-761

Cite as

Takahashi, N., Naghibi, T., Pfister, B. (2016) Automatic Pronunciation Generation by Utilizing a Semi-Supervised Deep Neural Networks. Proc. Interspeech 2016, 1141-1145.

Bibtex
@inproceedings{Takahashi+2016,
  author={Naoya Takahashi and Tofigh Naghibi and Beat Pfister},
  title={Automatic Pronunciation Generation by Utilizing a Semi-Supervised Deep Neural Networks},
  year={2016},
  booktitle={Interspeech 2016},
  doi={10.21437/Interspeech.2016-761},
  url={http://dx.doi.org/10.21437/Interspeech.2016-761},
  pages={1141--1145}
}