Black-box Attacks on Automatic Speaker Verification using Feedback-controlled Voice Conversion

Xiaohai Tian, Rohan Kumar Das, Haizhou Li


Automatic speaker verification (ASV) systems in practice are greatly vulnerable to spoofing attacks. The latest voice conversion technologies are able to produce perceptually natural-sounding speech that mimics any target speaker. However, perceptual closeness to a speaker's identity may not be enough to deceive an ASV system. In this work, we propose a framework that uses the output scores of an ASV system as feedback to a voice conversion system. The framework is a black-box adversary that steals one's voice identity, because it requires no knowledge of the ASV system beyond its output scores. Experimental results on the ASVspoof 2019 database confirm that the proposed feedback-controlled voice conversion framework produces adversarial samples that are more deceptive than straightforward voice conversion, thereby boosting the impostor ASV scores. Further, perceptual evaluation studies reveal that the converted speech does not degrade voice quality relative to the baseline system.
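The core idea of the abstract is a feedback loop: the attacker queries the black-box ASV system, observes only the output score, and adjusts the conversion accordingly. The paper's actual method is not specified here, so the following is only a minimal illustrative sketch, assuming a hypothetical `asv_score` oracle and a single scalar conversion parameter tuned by random search:

```python
import random

def asv_score(sample):
    # HYPOTHETICAL black-box ASV oracle: returns a similarity score for
    # the candidate sample against the target speaker. Here it is
    # simulated as closeness of a scalar "conversion parameter" to 0.7;
    # a real attack would submit converted audio and read the ASV score.
    return 1.0 - abs(sample - 0.7)

def feedback_controlled_attack(initial, n_queries=200, step=0.05, seed=0):
    """Score-feedback loop: perturb the conversion parameter and keep a
    candidate only if the black-box ASV score improves."""
    rng = random.Random(seed)
    best, best_score = initial, asv_score(initial)
    for _ in range(n_queries):
        candidate = best + rng.uniform(-step, step)
        score = asv_score(candidate)  # the only information the attacker sees
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score
```

This is a generic black-box optimization loop, not the authors' algorithm; it only illustrates how ASV output scores alone can steer a conversion system toward higher (more deceptive) impostor scores.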


DOI: 10.21437/Odyssey.2020-23

Cite as: Tian, X., Das, R.K., Li, H. (2020) Black-box Attacks on Automatic Speaker Verification using Feedback-controlled Voice Conversion. Proc. Odyssey 2020 The Speaker and Language Recognition Workshop, 159-164, DOI: 10.21437/Odyssey.2020-23.


@inproceedings{Tian2020,
  author={Xiaohai Tian and Rohan Kumar Das and Haizhou Li},
  title={{Black-box Attacks on Automatic Speaker Verification using Feedback-controlled Voice Conversion}},
  year=2020,
  booktitle={Proc. Odyssey 2020 The Speaker and Language Recognition Workshop},
  pages={159--164},
  doi={10.21437/Odyssey.2020-23},
  url={http://dx.doi.org/10.21437/Odyssey.2020-23}
}