On the Efficient Representation and Execution of Deep Acoustic Models

Raziel Alvarez, Rohit Prabhavalkar, Anton Bakhtin


In this paper we present a simple and computationally efficient quantization scheme that enables us to reduce the resolution of the parameters of a neural network from 32-bit floating point values to 8-bit integer values. The proposed quantization scheme leads to significant memory savings and enables the use of optimized hardware instructions for integer arithmetic, thus significantly reducing the cost of inference. Finally, we propose a ‘quantization aware’ training process that applies the proposed scheme during network training and find that it allows us to recover most of the loss in accuracy introduced by quantization. We validate the proposed techniques by applying them to a long short-term memory-based acoustic model on an open-ended large vocabulary speech recognition task.
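The abstract does not spell out the mapping from 32-bit floats to 8-bit integers, but a standard uniform (linear) quantization scheme of the kind it describes can be sketched as follows. This is an illustrative sketch, not the paper's exact formulation; details such as symmetric vs. asymmetric ranges and per-layer vs. per-matrix scaling are assumptions here.

```python
import numpy as np

def quantize_uint8(weights):
    """Uniform 8-bit quantization: map the weight range [min, max] onto [0, 255].

    A generic linear scheme for illustration; the scheme in the paper
    may differ in details (e.g. choice of range, rounding, granularity).
    """
    w_min, w_max = float(weights.min()), float(weights.max())
    scale = (w_max - w_min) / 255.0 or 1.0  # guard against a constant tensor
    q = np.round((weights - w_min) / scale).astype(np.uint8)
    return q, scale, w_min

def dequantize(q, scale, w_min):
    """Recover an approximate float32 tensor from the 8-bit representation."""
    return q.astype(np.float32) * scale + w_min

w = np.random.randn(4, 4).astype(np.float32)
q, scale, w_min = quantize_uint8(w)
w_hat = dequantize(q, scale, w_min)
# With round-to-nearest, reconstruction error is at most half a quantization step.
assert np.max(np.abs(w - w_hat)) <= scale / 2 + 1e-6
```

The 4x memory saving comes directly from storing `q` (one byte per parameter) instead of `w`; the 'quantization aware' training the abstract mentions would apply a quantize-dequantize pass like this inside the forward computation so the network learns to compensate for the rounding error.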


DOI: 10.21437/Interspeech.2016-128

Cite as

Alvarez, R., Prabhavalkar, R., Bakhtin, A. (2016) On the Efficient Representation and Execution of Deep Acoustic Models. Proc. Interspeech 2016, 2746-2750.

BibTeX
@inproceedings{Alvarez+2016,
author={Raziel Alvarez and Rohit Prabhavalkar and Anton Bakhtin},
title={On the Efficient Representation and Execution of Deep Acoustic Models},
year={2016},
booktitle={Interspeech 2016},
doi={10.21437/Interspeech.2016-128},
url={http://dx.doi.org/10.21437/Interspeech.2016-128},
pages={2746--2750}
}