Recently, speaker adaptation methods for deep neural networks (DNNs) have been widely studied for automatic speech recognition. However, almost all adaptation methods for DNNs must consider various heuristic conditions such as mini-batch size, learning rate scheduling, stopping criteria, and initialization, because of the inherent properties of stochastic gradient descent (SGD)-based training. Unfortunately, these heuristic conditions are hard to tune properly. To alleviate these difficulties, in this paper we propose a least squares regression-based speaker adaptation method in a DNN framework that utilizes the posterior mean of each class. We also show that the proposed method yields a unique solution that is easy and fast to compute without SGD. The proposed method was evaluated on the TED-LIUM corpus. Experimental results showed that the proposed method achieved up to a 4.6% relative improvement over a speaker-independent DNN. In addition, we report further performance improvements when the proposed method is combined with speaker-adapted features.
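To make the closed-form idea concrete, the following is a minimal NumPy sketch of a generic least-squares affine adaptation, not the paper's exact formulation: the variables H (a speaker's hidden-layer activations on adaptation data) and T (per-frame target vectors, e.g., the posterior mean of each frame's class) are illustrative assumptions.

import numpy as np

# Hypothetical illustration (not the paper's exact objective):
# fit an affine transform (W, b) by least squares so that the
# transformed hidden activations H approximate per-frame targets T.
rng = np.random.default_rng(0)
n_frames, dim = 1000, 64
H = rng.standard_normal((n_frames, dim))   # adaptation-data activations (assumed)
T = rng.standard_normal((n_frames, dim))   # assumed per-class mean targets

# Augment H with a bias column so b is estimated jointly with W.
H_aug = np.hstack([H, np.ones((n_frames, 1))])

# Closed-form least squares: a unique (minimum-norm) solution in one
# matrix solve, with no SGD and hence no mini-batch size, learning
# rate schedule, or stopping criterion to tune.
W_aug, *_ = np.linalg.lstsq(H_aug, T, rcond=None)
W, b = W_aug[:-1], W_aug[-1]

H_adapted = H @ W + b                      # speaker-adapted activations

Because the problem reduces to a single linear solve, adaptation cost and behavior are deterministic, which is what lets a method of this kind sidestep the SGD heuristics listed above.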