ISCA Archive Interspeech 2021

Exploring Targeted Universal Adversarial Perturbations to End-to-End ASR Models

Zhiyun Lu, Wei Han, Yu Zhang, Liangliang Cao

Although end-to-end automatic speech recognition (e2e ASR) models are widely deployed in many applications, there have been very few studies of their robustness against adversarial perturbations. In this paper, we explore whether a targeted universal perturbation vector exists for e2e ASR models. Our goal is to find perturbations that can mislead the models into predicting a given target transcript, such as “thank you” or the empty string, on any input utterance. We study two different attacks, namely additive and prepending perturbations, and evaluate their performance on state-of-the-art LAS, CTC, and RNN-T models. We find that LAS is the most vulnerable to perturbations among the three models. RNN-T is more robust against additive perturbations, especially on long utterances, while CTC is robust against both additive and prepending perturbations. To attack RNN-T, we find that the prepending perturbation is more effective than the additive perturbation and can mislead the model into predicting the same short target on utterances of arbitrary length.
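The two attack forms described above differ only in how the universal perturbation vector is combined with an input utterance. A minimal sketch of that difference (using NumPy on raw waveforms; function names and the fixed-length `delta` are illustrative assumptions, not the paper's implementation, which optimizes `delta` by gradient descent against the ASR model's loss on the target transcript):

```python
import numpy as np

def additive_attack(waveform: np.ndarray, delta: np.ndarray) -> np.ndarray:
    """Universal additive perturbation: the same vector delta is added
    elementwise onto (the start of) any input utterance, so the
    perturbed audio has the same length as the original."""
    out = waveform.copy()
    n = min(len(delta), len(out))
    out[:n] += delta[:n]
    return out

def prepend_attack(waveform: np.ndarray, delta: np.ndarray) -> np.ndarray:
    """Universal prepending perturbation: the same audio snippet delta
    is concatenated in front of any input utterance, so the perturbed
    audio is longer than the original by len(delta) samples."""
    return np.concatenate([delta, waveform])

# Toy usage: a 1-second utterance and a 0.1-second perturbation at 16 kHz.
rng = np.random.default_rng(0)
utt = rng.standard_normal(16000).astype(np.float32)
delta = 0.01 * rng.standard_normal(1600).astype(np.float32)

adv_add = additive_attack(utt, delta)      # same length as utt
adv_pre = prepend_attack(utt, delta)       # len(delta) samples longer
```

Because the additive vector is shorter than long utterances, most of a long input is left untouched, which is consistent with the paper's finding that RNN-T resists additive attacks on long utterances while the prepending attack, which always precedes the speech, remains effective regardless of utterance length.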


doi: 10.21437/Interspeech.2021-1668

Cite as: Lu, Z., Han, W., Zhang, Y., Cao, L. (2021) Exploring Targeted Universal Adversarial Perturbations to End-to-End ASR Models. Proc. Interspeech 2021, 3460-3464, doi: 10.21437/Interspeech.2021-1668

@inproceedings{lu21c_interspeech,
  author={Zhiyun Lu and Wei Han and Yu Zhang and Liangliang Cao},
  title={{Exploring Targeted Universal Adversarial Perturbations to End-to-End ASR Models}},
  year=2021,
  booktitle={Proc. Interspeech 2021},
  pages={3460--3464},
  doi={10.21437/Interspeech.2021-1668}
}