15th Annual Conference of the International Speech Communication Association

September 14-18, 2014

Recent Improvements in Neural Network Acoustic Modeling for LVCSR in Low Resource Languages

Jia Cui (1), Bhuvana Ramabhadran (1), Xiaodong Cui (1), Andrew Rosenberg (2), Brian Kingsbury (1), Abhinav Sethy (1)

(1) IBM T.J. Watson Research Center, USA
(2) CUNY Queens College, USA

In this paper we focus on several techniques that improve deep neural network (DNN) acoustic modeling for low-resource languages. We explore the use of different features, such as fundamental-frequency variation (FFV) and tonal features, and the normalization of these features for deep neural network training. Specifically, we study the impact of these features in conjunction with a tonal lexicon and several neural network architectures, including hybrid and bottleneck feature-based configurations. We also explore the use of untranscribed data, and ways to balance it with transcribed data, to enhance the performance of the best-performing LVCSR system. Results are presented in the context of the IARPA Babel program, on development languages from the Babel option period as well as on the surprise language from the base period of the program. We show that these improved methods can provide up to a 15% relative reduction in WER, along with improvements in keyword search, on the languages explored under the Babel program.

Full Paper

Bibliographic reference. Cui, Jia / Ramabhadran, Bhuvana / Cui, Xiaodong / Rosenberg, Andrew / Kingsbury, Brian / Sethy, Abhinav (2014): "Recent improvements in neural network acoustic modeling for LVCSR in low resource languages", in INTERSPEECH-2014, 840-844.