EUROSPEECH '97
5th European Conference on Speech Communication and Technology

Rhodes, Greece
September 22-25, 1997


Sparse Connection and Pruning in Large Dynamic Artificial Neural Networks

Nikko Ström

Department of Speech, Music and Hearing, KTH (Royal Institute of Technology), Stockholm, Sweden

This paper presents new methods for training large neural networks for phoneme probability estimation. A combination of the time-delay architecture and the recurrent network architecture is used to capture the important dynamic information of the speech signal. Motivated by the fact that the number of connections in fully connected recurrent networks grows super-linearly with the number of hidden units, schemes for sparse connection and connection pruning are explored. It is found that sparsely connected networks outperform their fully connected counterparts with an equal or smaller number of connections. The networks are evaluated in a hybrid HMM/ANN system for phoneme recognition on the TIMIT database. The achieved phoneme error rate of 28.3% on the standard 39-phoneme set for the core test set of the TIMIT database is not far from the lowest reported. All training and simulation software used is made freely available by the author, making reproduction of the results feasible.
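
As a rough illustration of the ideas summarized above, the sketch below (Python/NumPy, not the paper's implementation; the magnitude-based criterion and all names are illustrative assumptions) shows why sparsity matters: a fully recurrent layer with N hidden units has N*N recurrent connections, and a sparsity mask obtained by pruning the weakest weights reduces that count directly.

import numpy as np

n_hidden = 400                                   # dense recurrent layer: n_hidden**2 connections
rng = np.random.default_rng(0)
W_rec = rng.normal(scale=0.1, size=(n_hidden, n_hidden))

def prune_by_magnitude(W, keep_fraction):
    """Zero out the weakest connections, keeping the given fraction (illustrative criterion)."""
    threshold = np.quantile(np.abs(W), 1.0 - keep_fraction)
    mask = np.abs(W) >= threshold                # boolean sparsity mask
    return W * mask, mask

W_sparse, mask = prune_by_magnitude(W_rec, keep_fraction=0.15)
print("dense connections :", W_rec.size)         # 160000, grows quadratically with n_hidden
print("sparse connections:", int(mask.sum()))    # roughly 15% of the dense count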

Bibliographic reference.  Ström, Nikko (1997): "Sparse connection and pruning in large dynamic artificial neural networks", In EUROSPEECH-1997, 2807-2810.