14th Annual Conference of the International Speech Communication Association

Lyon, France
August 25-29, 2013

Deep Segmental Neural Networks for Speech Recognition

Ossama Abdel-Hamid (1), Li Deng (2), Dong Yu (2), Hui Jiang (1)

(1) York University, Canada
(2) Microsoft Research, USA

Hybrid systems that integrate deep neural networks (DNNs) with hidden Markov models (HMMs) have recently achieved remarkable performance in many large-vocabulary speech recognition tasks. These systems, however, still rely on the HMM and assume that the acoustic scores of the (windowed) frames are independent given the state, and thus suffer from the same difficulty as the earlier GMM-HMM systems. In this paper, we propose the deep segmental neural network (DSNN), a segmental model that uses DNNs to estimate the acoustic scores of phonemic or sub-phonemic segments of variable length. This allows the DSNN to represent each segment as a single unit in which the frames are dependent on each other. We describe the architecture of the DSNN, as well as its learning and decoding algorithms. Our evaluation experiments demonstrate that the DSNN can outperform DNN/HMM hybrid systems and two existing segmental models, the segmental conditional random field and the shallow segmental neural network.
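The abstract's key idea, scoring variable-length segments as single units and searching over all segmentations, can be illustrated with a minimal sketch. The dynamic-programming decoder below is a generic segmental Viterbi search, not the authors' implementation; `score_segment` stands in for the DSNN's learned segment score, and the toy scorer, per-label targets, and per-segment penalty are all illustrative assumptions.

```python
def segmental_viterbi(frames, labels, score_segment, max_dur):
    """Find the best (segmentation, labeling) of the frame sequence.

    best[t] holds the best total score of any segmentation of
    frames[0:t]; each segment spanning frames[s:t] with label `lab`
    contributes score_segment(frames[s:t], lab) as one unit, so
    frames inside a segment need not be scored independently.
    """
    T = len(frames)
    best = [float("-inf")] * (T + 1)
    back = [None] * (T + 1)  # back[t] = (segment start, label)
    best[0] = 0.0
    for t in range(1, T + 1):
        for d in range(1, min(max_dur, t) + 1):  # segment duration
            s = t - d
            for lab in labels:
                sc = best[s] + score_segment(frames[s:t], lab)
                if sc > best[t]:
                    best[t] = sc
                    back[t] = (s, lab)
    # Trace back the winning segmentation.
    segments, t = [], T
    while t > 0:
        s, lab = back[t]
        segments.append((s, t, lab))
        t = s
    return list(reversed(segments)), best[T]


def toy_score(segment, label):
    # Toy stand-in for a DSNN segment score (an assumption, not the
    # paper's model): negative L1 distance of the frames to a
    # per-label target value, minus a fixed per-segment penalty that
    # discourages over-segmentation.
    target = {"a": 0.0, "b": 1.0}[label]
    return -sum(abs(x - target) for x in segment) - 0.5
```

For example, decoding the sequence `[0, 0, 0, 1, 1, 1]` with labels `{"a", "b"}` recovers the two pure segments `(0, 3, "a")` and `(3, 6, "b")`, since any finer segmentation pays extra per-segment penalties and any coarser one pays mismatch cost.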


Bibliographic reference. Abdel-Hamid, Ossama / Deng, Li / Yu, Dong / Jiang, Hui (2013): "Deep segmental neural networks for speech recognition", in INTERSPEECH-2013, 1849-1853.