Interpretable Deep Learning Model for the Detection and Reconstruction of Dysarthric Speech

Daniel Korzekwa, Roberto Barra-Chicote, Bozena Kostek, Thomas Drugman, Mateusz Lajszczak


We present a novel deep learning model for the detection and reconstruction of dysarthric speech. We train the model with a multi-task learning technique to jointly solve dysarthria detection and speech reconstruction tasks. The model's key feature is a low-dimensional latent space that is meant to encode the properties of dysarthric speech. It is commonly believed that neural networks are black boxes that solve problems but do not provide interpretable outputs. On the contrary, we show that this latent space successfully encodes interpretable characteristics of dysarthria, that it is effective at detecting dysarthria, and that manipulating it allows the model to reconstruct healthy speech from dysarthric speech. This work can help patients and speech pathologists improve their understanding of the condition, lead to more accurate diagnoses, and aid in reconstructing healthy speech for afflicted patients.
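The abstract describes a multi-task architecture: an encoder maps speech features into a low-dimensional latent space, and two heads are trained jointly on top of it, one for dysarthria detection and one for speech reconstruction. The sketch below illustrates that idea only; the paper does not publish this exact architecture, so the feature dimension, latent size, layer shapes, and loss weighting are all assumptions made for the example (a single linear encoder and linear heads in NumPy, forward pass and joint loss only, no training loop):

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensions for illustration; the paper does not specify them.
N_FEATS = 80    # e.g. mel-spectrogram bins per frame (assumption)
N_LATENT = 4    # low-dimensional latent space meant to be interpretable

def init(shape):
    """Small random weights for the sketch."""
    return rng.standard_normal(shape) * 0.1

W_enc = init((N_FEATS, N_LATENT))   # shared encoder: features -> latent z
W_det = init((N_LATENT, 1))         # task 1: dysarthria-detection head
W_dec = init((N_LATENT, N_FEATS))   # task 2: reconstruction head (decoder)

def forward(x):
    """x: (batch, N_FEATS) frames -> (z, p_dysarthria, x_hat)."""
    z = np.tanh(x @ W_enc)                   # low-dimensional latent code
    p = 1.0 / (1.0 + np.exp(-(z @ W_det)))   # detection probability in (0, 1)
    x_hat = z @ W_dec                        # reconstructed frame from z alone
    return z, p, x_hat

def multitask_loss(x, y):
    """Joint objective: detection cross-entropy + reconstruction MSE.
    Equal weighting of the two tasks is an assumption of this sketch."""
    z, p, x_hat = forward(x)
    bce = -np.mean(y * np.log(p + 1e-9) + (1.0 - y) * np.log(1.0 - p + 1e-9))
    mse = np.mean((x_hat - x) ** 2)
    return bce + mse

# Toy batch: 8 random frames with random dysarthria labels.
x = rng.standard_normal((8, N_FEATS))
y = rng.integers(0, 2, size=(8, 1)).astype(float)
z, p, x_hat = forward(x)
print(z.shape, p.shape, x_hat.shape)

# "Manipulating the latent space": because x_hat is decoded from z alone,
# shifting z (e.g. toward a region associated with healthy speech) changes
# the reconstruction, which is the mechanism the abstract alludes to.
x_hat_shifted = (z + 0.5) @ W_dec
```

Because the reconstruction head reads only the latent code, any edit to `z` directly alters the reconstructed speech, which is what makes a latent-space bottleneck a natural fit for both interpretation and dysarthric-to-healthy reconstruction.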


DOI: 10.21437/Interspeech.2019-1206

Cite as: Korzekwa, D., Barra-Chicote, R., Kostek, B., Drugman, T., Lajszczak, M. (2019) Interpretable Deep Learning Model for the Detection and Reconstruction of Dysarthric Speech. Proc. Interspeech 2019, 3890-3894, DOI: 10.21437/Interspeech.2019-1206.


@inproceedings{Korzekwa2019,
  author={Daniel Korzekwa and Roberto Barra-Chicote and Bozena Kostek and Thomas Drugman and Mateusz Lajszczak},
  title={{Interpretable Deep Learning Model for the Detection and Reconstruction of Dysarthric Speech}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={3890--3894},
  doi={10.21437/Interspeech.2019-1206},
  url={http://dx.doi.org/10.21437/Interspeech.2019-1206}
}