Adaptation of Neural Networks Constrained by Prior Statistics of Node Co-Activations

Tasha Nagamine, Zhuo Chen, Nima Mesgarani


We propose a novel unsupervised model adaptation framework in which a neural network uses prior knowledge of the statistics of its output and hidden layer activations to update its parameters online, improving performance in mismatched environments. This idea is inspired by biological neural networks, which use feedback to dynamically adapt their computation when faced with unexpected inputs. Here, we introduce an adaptation criterion for deep neural networks based on the observation that under matched testing and training conditions, the node co-activation statistics of each layer in a neural network are relatively stable over time. The proposed method thus adapts the model layer by layer to minimize the distance between the co-activation statistics of nodes in matched versus mismatched conditions. In phoneme classification experiments, we show that such node co-activation-constrained adaptation of a deep neural network model significantly improves recognition accuracy over the baseline when the system is tested in various novel noise conditions not included in training.
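The core idea above can be sketched numerically. The toy example below is a minimal illustration, not the paper's implementation: it assumes a single sigmoid layer, uses the correlation matrix of node activations as the co-activation statistic, measures mismatch with a squared Frobenius distance, and adapts only the layer bias via finite-difference gradient descent. The actual statistic, layers adapted, and optimizer in the paper may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer_act(X, W, b):
    """Forward pass of one layer with a sigmoid nonlinearity (assumed here)."""
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

def coactivation(H):
    """Node co-activation statistics, taken here to be the correlation
    matrix of the layer's activations over frames (one assumed choice)."""
    Hc = H - H.mean(axis=0)
    cov = Hc.T @ Hc / len(H)
    d = np.sqrt(np.diag(cov)) + 1e-12
    return cov / np.outer(d, d)

def stat_distance(X, W, b, prior):
    """Squared Frobenius distance between current and prior statistics."""
    C = coactivation(layer_act(X, W, b))
    return float(np.sum((C - prior) ** 2))

# Toy single layer: 8 input features, 4 hidden nodes.
W = rng.normal(size=(8, 4))
b = np.zeros(4)

X_clean = rng.normal(size=(500, 8))             # "matched" condition
prior = coactivation(layer_act(X_clean, W, b))  # prior statistics, stored offline

X_noisy = X_clean + 0.8 * rng.normal(size=X_clean.shape)  # mismatched condition

# Unsupervised adaptation: finite-difference gradient descent on the bias,
# pulling the noisy-condition co-activation statistics back toward the prior.
eps = 1e-4
loss_before = stat_distance(X_noisy, W, b, prior)
loss = loss_before
for _ in range(50):
    grad = np.zeros_like(b)
    for i in range(len(b)):
        bp, bm = b.copy(), b.copy()
        bp[i] += eps
        bm[i] -= eps
        grad[i] = (stat_distance(X_noisy, W, bp, prior)
                   - stat_distance(X_noisy, W, bm, prior)) / (2 * eps)
    step = 0.5
    while step > 1e-8:  # backtracking: only accept steps that reduce the loss
        cand = b - step * grad
        cand_loss = stat_distance(X_noisy, W, cand, prior)
        if cand_loss < loss:
            b, loss = cand, cand_loss
            break
        step /= 2

print(f"loss before adaptation: {loss_before:.4f}, after: {loss:.4f}")
```

No labels are used anywhere in the loop: the only supervision signal is the stored prior statistic, which is what makes the adaptation unsupervised and usable online at test time.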


DOI: 10.21437/Interspeech.2016-600

Cite as

Nagamine, T., Chen, Z., Mesgarani, N. (2016) Adaptation of Neural Networks Constrained by Prior Statistics of Node Co-Activations. Proc. Interspeech 2016, 1583-1587.

BibTeX
@inproceedings{Nagamine+2016,
author={Tasha Nagamine and Zhuo Chen and Nima Mesgarani},
title={Adaptation of Neural Networks Constrained by Prior Statistics of Node Co-Activations},
year=2016,
booktitle={Interspeech 2016},
doi={10.21437/Interspeech.2016-600},
url={http://dx.doi.org/10.21437/Interspeech.2016-600},
pages={1583--1587}
}