Ultra-Compact NLU: Neuronal Network Binarization as Regularization

Munir Georges, Krzysztof Czarnowski, Tobias Bocklet


This paper describes an approach to intent classification and tagging on embedded devices such as smartwatches. We describe a technique to train neural networks such that the final network weights are binary. This enables memory-bandwidth-optimized inference and efficient computation even on constrained embedded platforms.
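The abstract does not spell out the inference kernel, but the memory argument is easy to illustrate: with weights restricted to {-1, +1}, each weight needs only one bit instead of 32, and, if the activations are binarized as well, a dot product reduces to XNOR plus popcount. The NumPy sketch below is purely illustrative; `pack_binary_weights` and `binary_matvec` are hypothetical helper names, not from the paper, and binary activations are an added assumption.

```python
import numpy as np

def pack_binary_weights(w: np.ndarray) -> np.ndarray:
    """Pack a {-1, +1} weight matrix into 1 bit per weight (+1 -> bit 1)."""
    return np.packbits((w > 0).astype(np.uint8), axis=1)

def binary_matvec(packed: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Compute W @ x for bit-packed W and a {-1, +1} vector x.

    For {-1, +1} vectors of length n the dot product equals
    (#matching positions) - (#mismatching positions) = 2 * matches - n,
    so it can be evaluated with XNOR and a popcount.
    """
    n = x.size
    x_packed = np.packbits((x > 0).astype(np.uint8))
    # XNOR marks positions where weight bit and input bit agree;
    # padding bits beyond n are discarded by the slice.
    agree = np.unpackbits(~(packed ^ x_packed), axis=1)[:, :n]
    return 2.0 * agree.sum(axis=1) - n

# Usage: the packed computation matches the dense float one exactly.
W = np.where(np.random.randn(8, 64) > 0, 1.0, -1.0)  # binarized weights
x = np.where(np.random.randn(64) > 0, 1.0, -1.0)     # binarized input
assert np.allclose(binary_matvec(pack_binary_weights(W), x), W @ x)
```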

The approach proceeds as follows: a tf-idf-based word selection method reduces the overall number of weights. Bag-of-words features are fed to a feedforward and a recurrent neural network for intent classification and tagging, respectively. A novel regularization term based on a double Gaussian is used to train the networks. Finally, the weights are clipped, almost losslessly, to -1 or +1, which results in a tiny binary neural network for intent classification and tagging.
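The abstract does not give the exact form of the double Gaussian term. One plausible reading, sketched below in PyTorch, is the negative log of a two-component Gaussian mixture centred at -1 and +1, which is minimal when a weight sits at either centre; `sigma` and `lam` are assumed hyperparameters, not values from the paper, and the final `torch.sign` clipping follows the abstract's last step.

```python
import torch

def double_gaussian_reg(w: torch.Tensor, sigma: float = 0.3) -> torch.Tensor:
    """Penalty that is minimal when every weight sits at -1 or +1.

    Sketched as the negative log of an (unnormalized) mixture of two
    Gaussians centred at -1 and +1; sigma controls how sharply weights
    are pulled toward the centres. This is an assumed form, not the
    paper's exact formula.
    """
    var = 2.0 * sigma ** 2
    mix = torch.exp(-(w - 1.0) ** 2 / var) + torch.exp(-(w + 1.0) ** 2 / var)
    return -torch.log(mix + 1e-12).sum()

def total_loss(task_loss: torch.Tensor, model: torch.nn.Module,
               lam: float = 1e-4) -> torch.Tensor:
    """Task loss plus the binarization penalty over all parameters."""
    reg = sum(double_gaussian_reg(p) for p in model.parameters())
    return task_loss + lam * reg

def binarize(model: torch.nn.Module) -> None:
    """Final step: clip the (near-binary) trained weights to -1/+1."""
    with torch.no_grad():
        for p in model.parameters():
            p.copy_(torch.sign(p))
```

Because the penalty pushes each weight into one of the two Gaussian modes during training, the final sign clipping discards almost no information, which is what makes the binarization "almost lossless".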

Our technique is evaluated on a text corpus of transcribed and annotated voice queries from the "lights control" test domain. We compare the intent and tagging accuracy of the ultra-compact binary neural network with that of our baseline system. The novel approach yields comparable accuracy while reducing the model size by a factor of 16, from 160 kB to 10 kB.


DOI: 10.21437/Interspeech.2019-2591

Cite as: Georges, M., Czarnowski, K., Bocklet, T. (2019) Ultra-Compact NLU: Neuronal Network Binarization as Regularization. Proc. Interspeech 2019, 809-813, DOI: 10.21437/Interspeech.2019-2591.


@inproceedings{Georges2019,
  author={Munir Georges and Krzysztof Czarnowski and Tobias Bocklet},
  title={{Ultra-Compact NLU: Neuronal Network Binarization as Regularization}},
  year={2019},
  booktitle={Proc. Interspeech 2019},
  pages={809--813},
  doi={10.21437/Interspeech.2019-2591},
  url={http://dx.doi.org/10.21437/Interspeech.2019-2591}
}