Self-Teaching Networks

Liang Lu, Eric Sun, Yifan Gong


We propose self-teaching networks to improve the generalization capacity of deep neural networks. The idea is to generate soft supervision labels from the output layer to train the lower layers of the network. During training, we add an auxiliary loss that drives the lower layers to mimic the behavior of the output layer. The connection between the two layers through the auxiliary loss helps the gradient flow, similarly to residual networks. Furthermore, the auxiliary loss acts as a regularizer, which improves the generalization capacity of the network. We evaluated self-teaching networks with deep recurrent neural networks on speech recognition tasks, training the acoustic model on 30 thousand hours of data and testing it on data collected from four scenarios. We show that the self-teaching network achieves consistent improvements and outperforms existing methods such as label smoothing and confidence penalization.
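The loss described in the abstract can be sketched as follows: the usual hard-label cross-entropy on the output layer, plus an auxiliary cross-entropy that pushes a classifier attached to a lower layer toward the output layer's soft distribution. This is a minimal NumPy illustration, not the paper's implementation; the function name `self_teaching_loss`, the weight `lam`, and the exact form of the auxiliary term are assumptions for illustration.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def self_teaching_loss(logits_out, logits_aux, labels, lam=0.1):
    """Combined loss (illustrative sketch, not the paper's code).

    logits_out: output-layer logits, shape (batch, classes)
    logits_aux: logits from an auxiliary head on a lower layer
    labels:     hard label indices, shape (batch,)
    lam:        assumed interpolation weight for the auxiliary term
    """
    p_out = softmax(logits_out)
    p_aux = softmax(logits_aux)
    n = len(labels)
    # Standard cross-entropy with the hard supervision labels.
    ce = -np.log(p_out[np.arange(n), labels] + 1e-12).mean()
    # Soft targets generated by the output layer; in a real framework
    # a stop-gradient would keep them fixed while training lower layers.
    soft = p_out
    aux = -(soft * np.log(p_aux + 1e-12)).sum(axis=-1).mean()
    return ce + lam * aux
```

With uniform logits for both heads, each term reduces to log(num_classes), so the combined value is easy to check by hand; in practice the auxiliary head would sit on an intermediate recurrent layer and share the output layer's label space.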


DOI: 10.21437/Interspeech.2019-1467

Cite as: Lu, L., Sun, E., Gong, Y. (2019) Self-Teaching Networks. Proc. Interspeech 2019, 2798-2802, DOI: 10.21437/Interspeech.2019-1467.


@inproceedings{Lu2019,
  author={Liang Lu and Eric Sun and Yifan Gong},
  title={{Self-Teaching Networks}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={2798--2802},
  doi={10.21437/Interspeech.2019-1467},
  url={http://dx.doi.org/10.21437/Interspeech.2019-1467}
}