Context Adaptive Neural Network for Rapid Adaptation of Deep CNN Based Acoustic Models

Marc Delcroix, Keisuke Kinoshita, Atsunori Ogawa, Takuya Yoshioka, Dung T. Tran, Tomohiro Nakatani


Using auxiliary input features has been seen as one of the most effective ways to adapt deep neural network (DNN)-based acoustic models to a speaker or environment. However, this approach has several limitations. It only compensates the bias term of a hidden layer and therefore does not fully exploit the network's capabilities. Moreover, it may not be well suited for certain types of architectures, such as convolutional neural networks (CNNs), because the auxiliary features have time-frequency structures different from those of speech features. This paper resolves these problems by extending the recently proposed context adaptive DNN (CA-DNN) framework to CNN architectures. A CA-DNN is a DNN with one or several layers factorized into sub-layers, each associated with an acoustic context class representing a speaker or environment. The output of the factorized layer is obtained as the weighted sum of the contributions of each sub-layer, weighted by acoustic context weights that are derived from auxiliary features such as i-vectors. Importantly, a CA-DNN can compensate both bias terms and weight matrices. In this paper, we investigate the use of CA-DNNs for deep CNN-based architectures. We demonstrate consistent performance gains for utterance-level rapid adaptation on the AURORA4 task over a strong network-in-network based deep CNN architecture.
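To illustrate the factorized-layer idea described above, here is a minimal numpy sketch of a single context-adaptive layer. All names and dimensions are illustrative, not taken from the paper: `Ws`/`bs` are per-context sub-layer parameters, and the context weights are assumed to come from a simple softmax over a linear transform of the i-vector (the paper derives them from auxiliary features; the exact mapping here is a placeholder).

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def context_adaptive_layer(x, ivec, Ws, bs, A, c):
    """One factorized hidden layer: each acoustic context class k has its
    own weight matrix Ws[k] and bias bs[k]. The layer output is the
    weighted sum of all sub-layer contributions, with context weights
    alpha derived from the auxiliary i-vector (mapping is illustrative)."""
    alpha = softmax(A @ ivec + c)  # context class weights, sum to 1
    # Weighted combination of sub-layer outputs: sum_k alpha_k * (W_k x + b_k)
    h = sum(a * (W @ x + b) for a, W, b in zip(alpha, Ws, bs))
    return np.maximum(h, 0.0)  # ReLU activation

# Toy dimensions: 3 context classes, 5-dim input, 4-dim hidden, 2-dim i-vector
rng = np.random.default_rng(0)
K, d_in, d_h, d_iv = 3, 5, 4, 2
Ws = [rng.standard_normal((d_h, d_in)) for _ in range(K)]
bs = [rng.standard_normal(d_h) for _ in range(K)]
A = rng.standard_normal((K, d_iv))
c = rng.standard_normal(K)
h = context_adaptive_layer(rng.standard_normal(d_in),
                           rng.standard_normal(d_iv), Ws, bs, A, c)
print(h.shape)  # (4,)
```

Because every context class contributes a full weight matrix, this construction adapts both the weights and the bias of the layer, unlike auxiliary-feature approaches that only shift the bias term.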


DOI: 10.21437/Interspeech.2016-203

Cite as

Delcroix, M., Kinoshita, K., Ogawa, A., Yoshioka, T., Tran, D.T., Nakatani, T. (2016) Context Adaptive Neural Network for Rapid Adaptation of Deep CNN Based Acoustic Models. Proc. Interspeech 2016, 1573-1577.

Bibtex
@inproceedings{Delcroix+2016,
author={Marc Delcroix and Keisuke Kinoshita and Atsunori Ogawa and Takuya Yoshioka and Dung T. Tran and Tomohiro Nakatani},
title={Context Adaptive Neural Network for Rapid Adaptation of Deep CNN Based Acoustic Models},
year=2016,
booktitle={Interspeech 2016},
doi={10.21437/Interspeech.2016-203},
url={http://dx.doi.org/10.21437/Interspeech.2016-203},
pages={1573--1577}
}