This paper presents boundary contraction training for acoustic models based on deep neural networks with discrete parameters (discrete DNNs). Representing DNN parameters with a small number of bits (such as 4 bits) reduces not only memory usage but also computational cost, since the CPU cache and look-up tables can be exploited efficiently. However, simply quantizing the parameters of normally trained continuous DNNs seriously degrades recognition accuracy. We tackle this problem with a specialized training algorithm for discrete DNNs: continuous DNNs are first trained under a boundary constraint, and the trained parameters are then quantized to match the representation of the discrete DNNs. During continuous training, we introduce a boundary contraction mapping that shrinks the parameter distribution in order to reduce quantization error. In our experiments with 4-bit discrete DNNs, simple quantization of normally trained DNNs degrades word accuracy by more than 50 points, whereas our method maintains the high word accuracy of continuous DNNs with only a two-point degradation.
Bibliographic reference. Takeda, Ryu / Kanda, Naoyuki / Nukaga, Nobuo (2014): "Boundary contraction training for acoustic models based on discrete deep neural networks", In INTERSPEECH-2014, 1063-1067.
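The abstract describes a two-stage scheme: constrain parameters to a bounded range during continuous training, then quantize them to a small number of levels (e.g., 4 bits = 16 levels). The paper's exact contraction mapping is not given in the abstract, so the following is only a generic sketch of the quantization step it implies: parameters are clipped to an assumed boundary `B` and mapped to the nearest of `2**bits` uniform levels, so each discrete weight can be stored as a small integer index into a look-up table. The function name, the boundary value, and the uniform level spacing are all illustrative assumptions, not the authors' method.

```python
import numpy as np

def contract_and_quantize(weights, boundary=1.0, bits=4):
    """Illustrative sketch (not the paper's algorithm): clip weights to
    [-boundary, boundary] and uniformly quantize them to 2**bits levels.

    Returns the quantized values and the integer level indices that a
    discrete DNN could store and use as look-up-table keys.
    """
    levels = 2 ** bits                      # e.g. 16 levels for 4 bits
    step = 2.0 * boundary / (levels - 1)    # spacing between adjacent levels
    clipped = np.clip(weights, -boundary, boundary)
    # Map each weight to its nearest discrete level index (0 .. levels-1),
    # then back to the representative value of that level.
    indices = np.round((clipped + boundary) / step).astype(int)
    quantized = indices * step - boundary
    return quantized, indices

# Example: weights outside the boundary are clipped before quantization.
w = np.array([-2.0, -0.5, 0.0, 0.3, 1.7])
q, idx = contract_and_quantize(w, boundary=1.0, bits=4)
```

The motivation for the contraction step is visible even in this toy version: the quantization error per weight is bounded by half the level spacing, and the spacing shrinks in proportion to the boundary, so a tighter parameter distribution yields finer effective resolution at the same bit width.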