Training Augmentation with Adversarial Examples for Robust Speech Recognition

Sining Sun, Ching-Feng Yeh, Mari Ostendorf, Mei-Yuh Hwang, Lei Xie


This paper explores the use of adversarial examples in training speech recognition systems to increase the robustness of deep neural network acoustic models. During training, the fast gradient sign method is used to generate adversarial examples that augment the original training data. Unlike conventional data augmentation based on fixed data transformations, these examples are generated dynamically from the current acoustic model parameters. We assess the impact of adversarial data augmentation in experiments on the Aurora-4 and CHiME-4 single-channel tasks, showing improved robustness against noise and channel variation. Further improvement is obtained by combining adversarial examples with teacher/student training, leading to a 23% relative word error rate reduction on Aurora-4.
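The fast gradient sign method (FGSM) mentioned above perturbs an input in the direction of the sign of the loss gradient with respect to that input. The following is a minimal sketch of the idea, not the paper's implementation: it uses a toy logistic-regression "model" in NumPy so the gradient can be written analytically, and the function name, weights, and the perturbation budget `eps` are all illustrative choices.

```python
import numpy as np

def fgsm_example(x, y, w, b, eps=0.1):
    """Generate an adversarial example with the fast gradient sign method.

    Toy setup: a logistic-regression model with weights w and bias b,
    binary label y, and cross-entropy loss. The gradient of the loss
    with respect to the input x is computed analytically, and x is
    perturbed by eps * sign(gradient), the step that maximally
    increases the loss under an L-infinity budget of eps.
    """
    p = 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))  # model prediction
    grad_x = (p - y) * w                           # d(cross-entropy)/dx
    return x + eps * np.sign(grad_x)

# Usage: perturb a feature vector so the model's loss on it increases.
w = np.array([1.0, -2.0, 0.5])
b = 0.0
x = np.array([0.2, 0.1, -0.3])   # stand-in for an acoustic feature vector
y = 1.0                          # true label
x_adv = fgsm_example(x, y, w, b, eps=0.05)
```

In the paper's setting the gradient comes from backpropagation through the acoustic model, and the resulting `x_adv` vectors are mixed into the training data each epoch, so the adversarial examples track the current model parameters rather than being fixed in advance.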


DOI: 10.21437/Interspeech.2018-1247

Cite as: Sun, S., Yeh, C.-F., Ostendorf, M., Hwang, M.-Y., Xie, L. (2018) Training Augmentation with Adversarial Examples for Robust Speech Recognition. Proc. Interspeech 2018, 2404-2408, DOI: 10.21437/Interspeech.2018-1247.


@inproceedings{Sun2018,
  author={Sining Sun and Ching-Feng Yeh and Mari Ostendorf and Mei-Yuh Hwang and Lei Xie},
  title={Training Augmentation with Adversarial Examples for Robust Speech Recognition},
  year={2018},
  booktitle={Proc. Interspeech 2018},
  pages={2404--2408},
  doi={10.21437/Interspeech.2018-1247},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1247}
}