Discriminative training of the feature space using the minimum phone error objective function (fMPE) has been shown to yield remarkable accuracy improvements. These gains, however, come at a high memory cost. In this paper we present techniques that maintain fMPE performance while reducing the required memory by approximately 94%. This is achieved by designing a quantization methodology that minimizes the error between the true fMPE computation and the computation produced with the quantized parameters. We also illustrate a Viterbi search over the allocation of quantization levels, providing a framework for optimal non-uniform allocation of quantization levels across the dimensions of the fMPE feature vector. This search yields an additional 8% relative reduction in required memory with no loss in recognition accuracy.
Bibliographic reference. Marcheret, Etienne / Chen, Jia-Yu / Fousek, Petr / Olsen, Peder A. / Goel, Vaibhava (2009): "Compacting discriminative feature space transforms for embedded devices", In INTERSPEECH-2009, 228-231.