INTERSPEECH 2013
14th Annual Conference of the International Speech Communication Association

Lyon, France
August 25-29, 2013

Neural Network Acoustic Models for the DARPA RATS Program

Hagen Soltau, Hong-Kwang Kuo, Lidia Mangu, George Saon, Tomas Beran

IBM T.J. Watson Research Center, USA

We present a comparison of acoustic modeling techniques for the DARPA RATS program in the context of spoken term detection (STD) on speech data with severe channel distortions. Our main findings are that both Multi-Layer Perceptrons (MLPs) and Convolutional Neural Networks (CNNs) outperform Gaussian Mixture Models (GMMs) on a very difficult LVCSR task. We discuss pretraining, feature sets, and training procedures, as well as the use of weight sharing and shift invariance to increase robustness against channel distortions. We obtained an error rate reduction of about 20% over our state-of-the-art GMM system. Additionally, we found that CNNs work very well for spoken term detection, as a result of better lattice oracle rates compared to GMMs and MLPs.
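As an illustration of the weight sharing and shift invariance mentioned above, the sketch below shows a small hybrid CNN acoustic model that convolves shared filters over log-mel feature patches and max-pools along frequency before an MLP output stage. This is not the paper's architecture: the layer sizes, feature dimensions, pooling setup, and senone count are all assumptions chosen for clarity.

```python
# Hypothetical CNN acoustic model sketch: shared convolutional weights plus
# frequency pooling for shift invariance. All sizes below are illustrative
# assumptions, not the configuration reported in the paper.
import torch
import torch.nn as nn

class CNNAcousticModel(nn.Module):
    def __init__(self, n_mel=40, context=11, n_senones=5000):
        super().__init__()
        # Input: (batch, 1, n_mel, context) patches of log-mel features
        # with a window of temporal context around the current frame.
        self.conv = nn.Sequential(
            # Filters are shared across all frequency/time positions.
            nn.Conv2d(1, 128, kernel_size=(9, 9), padding=(4, 4)),
            nn.ReLU(),
            # Max-pooling along the frequency axis gives tolerance to
            # small spectral shifts (e.g. channel-induced distortions).
            nn.MaxPool2d(kernel_size=(3, 1)),
            nn.Conv2d(128, 256, kernel_size=(4, 3), padding=(0, 1)),
            nn.ReLU(),
        )
        # Determine the flattened feature size with a dummy forward pass.
        with torch.no_grad():
            flat = self.conv(torch.zeros(1, 1, n_mel, context)).numel()
        self.mlp = nn.Sequential(
            nn.Linear(flat, 1024),
            nn.ReLU(),
            # Per-frame senone scores, used as scaled likelihoods in a
            # hybrid NN/HMM decoder.
            nn.Linear(1024, n_senones),
        )

    def forward(self, x):
        h = self.conv(x)
        return self.mlp(h.flatten(1))

# Usage: per-frame senone scores for a batch of 8 feature patches.
model = CNNAcousticModel()
logits = model(torch.randn(8, 1, 40, 11))
print(logits.shape)  # torch.Size([8, 5000])
```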


Bibliographic reference. Soltau, Hagen / Kuo, Hong-Kwang / Mangu, Lidia / Saon, George / Beran, Tomas (2013): "Neural network acoustic models for the DARPA RATS program", in INTERSPEECH-2013, 3092-3096.