Privacy-Preserving Adversarial Representation Learning in ASR: Reality or Illusion?

Brij Mohan Lal Srivastava, Aurélien Bellet, Marc Tommasi, Emmanuel Vincent


Automatic speech recognition (ASR) is a key technology in many services and applications, which typically requires user devices to send their speech data to the cloud for ASR decoding. Since the speech signal carries a lot of information about the speaker, this raises serious privacy concerns. One solution is to deploy an encoder on each user device, which performs local computations to anonymize the representation. In this paper, we focus on the protection of speaker identity and study the extent to which users can be recognized from the encoded representation of their speech, as obtained by a deep encoder-decoder architecture trained for ASR. Through speaker identification and verification experiments on the LibriSpeech corpus with open and closed speaker sets, we show that the representations obtained from a standard architecture still carry a lot of information about speaker identity. We then propose to use adversarial training to learn representations that perform well in ASR while hiding speaker identity. Our results demonstrate that adversarial training dramatically reduces closed-set classification accuracy, but that this does not translate into increased open-set verification error, hence into increased protection of speaker identity in practice. We suggest several possible reasons for this negative result.
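The adversarial training described in the abstract is commonly implemented with a gradient reversal layer: the encoder output feeds both an ASR decoder and a speaker classifier, and the gradient from the speaker branch is sign-flipped before reaching the encoder, so that the encoder learns to degrade speaker classification while helping ASR. The sketch below is a minimal, hypothetical illustration of that mechanism (the class name, `lam` parameter, and shapes are illustrative and not taken from the paper):

```python
import numpy as np

class GradientReversal:
    """Minimal sketch of a gradient reversal layer (GRL).

    Forward pass: identity. Backward pass: flip the gradient sign
    and scale it by lam, so the encoder receives a gradient that
    *increases* the speaker classifier's loss.
    """

    def __init__(self, lam=1.0):
        self.lam = lam  # trade-off between ASR and anonymization objectives

    def forward(self, x):
        # Identity in the forward direction: the speaker classifier
        # sees the encoder representation unchanged.
        return x

    def backward(self, grad_from_speaker_branch):
        # Reversed, scaled gradient flows back into the encoder.
        return -self.lam * grad_from_speaker_branch

# Illustrative usage with dummy values.
grl = GradientReversal(lam=0.5)
x = np.array([1.0, -2.0, 3.0])          # encoder output (toy example)
g = np.array([0.1, 0.2, -0.3])          # gradient from speaker loss
out = grl.forward(x)                    # unchanged representation
back = grl.backward(g)                  # sign-flipped, scaled gradient
```

The encoder's total update thus combines the ordinary ASR gradient with this reversed speaker-branch gradient, which is the standard way to realize the min-max objective of adversarial representation learning.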


DOI: 10.21437/Interspeech.2019-2415

Cite as: Srivastava, B.M.L., Bellet, A., Tommasi, M., Vincent, E. (2019) Privacy-Preserving Adversarial Representation Learning in ASR: Reality or Illusion? Proc. Interspeech 2019, 3700-3704, DOI: 10.21437/Interspeech.2019-2415.


@inproceedings{Srivastava2019,
  author={Brij Mohan Lal Srivastava and Aurélien Bellet and Marc Tommasi and Emmanuel Vincent},
  title={{Privacy-Preserving Adversarial Representation Learning in ASR: Reality or Illusion?}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={3700--3704},
  doi={10.21437/Interspeech.2019-2415},
  url={http://dx.doi.org/10.21437/Interspeech.2019-2415}
}