11th Annual Conference of the International Speech Communication Association

Makuhari, Chiba, Japan
September 26-30, 2010

Binary Coding of Speech Spectrograms Using a Deep Auto-Encoder

L. Deng (1), Michael L. Seltzer (1), Dong Yu (1), Alex Acero (1), Abdel-rahman Mohamed (2), G. Hinton (2)

(1) Microsoft Research, USA
(2) University of Toronto, Canada

This paper reports our recent exploration of the layer-by-layer learning strategy for training a multi-layer generative model of patches of speech spectrograms. The top layer of the generative model learns binary codes that can be used for efficient compression of speech and could also be used for scalable speech recognition or rapid speech content retrieval. Each layer of the generative model is fully connected to the layer below and the weights on these connections are pre-trained efficiently by using the contrastive divergence approximation to the log likelihood gradient. After layer-by-layer pre-training we “unroll” the generative model to form a deep auto-encoder, whose parameters are then fine-tuned using back-propagation. To reconstruct the full-length speech spectrogram, individual spectrogram segments predicted by their respective binary codes are combined using an overlap-and-add method. Experimental results on speech spectrogram coding demonstrate that the binary codes produce a log-spectral distortion that is approximately 2 dB lower than a sub-band vector quantization technique over the entire frequency range of wide-band speech.
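The overlap-and-add reconstruction step described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the segment length, hop size, and uniform (rectangular) weighting are assumptions; the paper does not specify its exact windowing.

```python
import numpy as np

def overlap_add(segments, hop):
    """Reconstruct a full-length sequence from overlapping fixed-length
    segments (e.g., decoded spectrogram patches along the time axis) by
    summing each segment at its hop offset and dividing by the per-sample
    overlap count. Rectangular weighting is an illustrative assumption."""
    seg_len = segments.shape[1]
    total_len = hop * (len(segments) - 1) + seg_len
    out = np.zeros(total_len)
    weight = np.zeros(total_len)
    for i, seg in enumerate(segments):
        start = i * hop
        out[start:start + seg_len] += seg
        weight[start:start + seg_len] += 1.0
    # Normalize by how many segments covered each sample.
    return out / np.maximum(weight, 1e-12)

# Hypothetical check: identical constant segments reconstruct a constant.
segs = np.ones((4, 6))          # 4 segments of length 6
rec = overlap_add(segs, hop=3)  # 50% overlap
```

In the paper's setting, each row of `segments` would be one spectrogram patch predicted from its binary code, and overlap-and-add smooths the seams between adjacent patches.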


Bibliographic reference.  Deng, L. / Seltzer, Michael L. / Yu, Dong / Acero, Alex / Mohamed, Abdel-rahman / Hinton, G. (2010): "Binary coding of speech spectrograms using a deep auto-encoder", In INTERSPEECH-2010, 1692-1695.