VoiceGuard: Secure and Private Speech Processing

Ferdinand Brasser, Tommaso Frassetto, Korbinian Riedhammer, Ahmad-Reza Sadeghi, Thomas Schneider, Christian Weinert


With the advent of smart-home devices providing voice-based interfaces, such as Amazon Alexa or Apple Siri, voice data is constantly transferred to cloud services for automated speech recognition or speaker verification. While this development enables intriguing new applications, it also poses significant risks: voice data is highly sensitive, since it contains biometric information about the speaker as well as the spoken words. If this data is not protected properly, it may be abused; thus, the security and privacy of billions of end users are at stake. We tackle this challenge by proposing an architecture, dubbed VoiceGuard, that efficiently protects the speech processing task inside a trusted execution environment (TEE). Our solution preserves the privacy of users while not requiring the service provider to reveal its model parameters. Our architecture can be extended to enable user-specific models, such as feature transformations (including fMLLR), i-vectors, or model transformations (e.g., custom output layers). It also generalizes to secure on-premise solutions, allowing vendors to securely ship their models to customers. We provide a proof-of-concept implementation, isolated with Intel SGX, a widely available TEE implementation, and evaluate it on the Resource Management and WSJ speech recognition tasks, demonstrating even real-time processing capabilities.
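Although the actual system is built on Intel SGX with a full ASR pipeline, the end-to-end idea described above, attest the enclave, establish a session key, then exchange only encrypted audio and transcripts so that neither the raw voice data nor the provider's model parameters leave their trust boundary, can be sketched in a few lines. Everything below (`Enclave`, `keystream_cipher`, the digest-based recognizer stub) is a hypothetical illustration, not the VoiceGuard API; a real deployment would use SGX remote attestation, a proper key exchange, authenticated encryption, and a real speech recognizer inside the enclave.

```python
"""Toy sketch of the VoiceGuard flow (illustration only, not the paper's code)."""
import hashlib
import secrets


def keystream_cipher(key: bytes, data: bytes) -> bytes:
    # Toy XOR stream cipher with a SHA-256-based keystream; stands in for
    # the authenticated encryption a real deployment would use.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(x ^ y for x, y in zip(data, out))


class Enclave:
    """Models the TEE: holds the provider's model parameters, never exposes them."""

    def __init__(self, model_params: bytes):
        self._model = model_params      # stays inside the enclave
        self._session_key = None

    def attest_and_exchange(self) -> bytes:
        # Placeholder for SGX remote attestation plus key exchange: the user
        # verifies the enclave's identity, then both sides share a session key.
        self._session_key = secrets.token_bytes(32)
        return self._session_key        # in reality: via an authenticated channel

    def process(self, encrypted_audio: bytes) -> bytes:
        # Decrypt inside the enclave, recognize, re-encrypt the transcript.
        audio = keystream_cipher(self._session_key, encrypted_audio)
        return keystream_cipher(self._session_key, self._recognize(audio))

    def _recognize(self, audio: bytes) -> bytes:
        # Stub for the ASR model: a digest keyed on the (secret) model params.
        digest = hashlib.sha256(self._model + audio).hexdigest().encode()
        return b"transcript:" + digest


# User side: the audio never leaves the device unencrypted.
enclave = Enclave(model_params=b"provider-secret-model")
key = enclave.attest_and_exchange()
audio = b"raw waveform bytes"
reply = enclave.process(keystream_cipher(key, audio))
transcript = keystream_cipher(key, reply)   # XOR cipher is its own inverse
```

The design point the sketch captures is the mutual protection: the user only ever sends ciphertext, and the model parameters exist in plaintext only inside the enclave.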


DOI: 10.21437/Interspeech.2018-2032

Cite as: Brasser, F., Frassetto, T., Riedhammer, K., Sadeghi, A.-R., Schneider, T., Weinert, C. (2018) VoiceGuard: Secure and Private Speech Processing. Proc. Interspeech 2018, 1303-1307, DOI: 10.21437/Interspeech.2018-2032.


@inproceedings{Brasser2018,
  author={Ferdinand Brasser and Tommaso Frassetto and Korbinian Riedhammer and Ahmad-Reza Sadeghi and Thomas Schneider and Christian Weinert},
  title={VoiceGuard: Secure and Private Speech Processing},
  year=2018,
  booktitle={Proc. Interspeech 2018},
  pages={1303--1307},
  doi={10.21437/Interspeech.2018-2032},
  url={http://dx.doi.org/10.21437/Interspeech.2018-2032}
}