Towards Debugging Deep Neural Networks by Generating Speech Utterances

Bilal Soomro, Anssi Kanervisto, Trung Ngo Trong, Ville Hautamäki


Deep neural networks (DNNs) are able to successfully process and classify speech utterances. However, understanding the reason behind a classification made by a DNN is difficult. One debugging method used with image classification DNNs is activation maximization, which generates example images that are classified as one of the classes. In this work, we evaluate the applicability of this method to speech utterance classifiers as a means of understanding what a DNN “listens to”. We trained a classifier using the Speech Commands corpus and then used activation maximization to generate samples from the trained model. We then synthesized audio from these features with a WaveNet vocoder for subjective analysis. We measure the quality of the generated samples with objective measures and crowd-sourced human evaluations. Results show that, when combined with a prior of natural speech, activation maximization can be used to generate examples of the different classes. Based on these results, activation maximization can be used to start opening up the DNN black-box in speech tasks.
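The core idea of activation maximization is to hold the trained network's weights fixed and instead optimize the *input* by gradient ascent so that a chosen class's activation is maximized. A minimal sketch of this, assuming a toy linear classifier (a hypothetical stand-in for the paper's speech-command DNN) and a simple weight-decay regularizer in place of the paper's natural-speech prior:

```python
import numpy as np

def activation_maximization(W, target, steps=200, lr=0.1, decay=0.99):
    """Gradient-ascend an input x so the target class logit of W @ x grows."""
    x = np.zeros(W.shape[1])
    for _ in range(steps):
        # For logits = W @ x, the gradient d(logit_target)/dx is W[target].
        x += lr * W[target]
        # Weight decay acts as a crude prior keeping x small; the paper
        # instead constrains samples with a prior of natural speech.
        x *= decay
    return x

# Toy classifier: 3 "commands", 5-dim features; orthonormal rows for clarity.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(5, 3)))
W = Q.T

x = activation_maximization(W, target=2)
print(int(np.argmax(W @ x)))  # the generated example is classified as class 2
```

In the paper's setting, the optimized input is a sequence of speech features rather than a flat vector, and the resulting features are passed through a WaveNet vocoder so the generated examples can be listened to.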


DOI: 10.21437/Interspeech.2019-2339

Cite as: Soomro, B., Kanervisto, A., Trong, T.N., Hautamäki, V. (2019) Towards Debugging Deep Neural Networks by Generating Speech Utterances. Proc. Interspeech 2019, 3213-3217, DOI: 10.21437/Interspeech.2019-2339.


@inproceedings{Soomro2019,
  author={Bilal Soomro and Anssi Kanervisto and Trung Ngo Trong and Ville Hautamäki},
  title={{Towards Debugging Deep Neural Networks by Generating Speech Utterances}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={3213--3217},
  doi={10.21437/Interspeech.2019-2339},
  url={http://dx.doi.org/10.21437/Interspeech.2019-2339}
}