ISCA Archive Interspeech 2021

Variational Auto-Encoder Based Variability Encoding for Dysarthric Speech Recognition

Xurong Xie, Rukiye Ruzi, Xunying Liu, Lan Wang

Dysarthric speech recognition is a challenging task due to acoustic variability and the limited amount of available data. The diverse conditions of dysarthric speakers account for this acoustic variability, which makes it difficult to model precisely. This paper presents a variational auto-encoder based variability encoder (VAEVE) to explicitly encode such variability for dysarthric speech. The VAEVE uses both phoneme information and a low-dimensional latent variable to reconstruct the input acoustic features, so that the latent variable is forced to encode the phoneme-independent variability. The stochastic gradient variational Bayes algorithm is applied to model the distribution for generating variability encodings, which are further used as auxiliary features for DNN acoustic modeling. Experiments conducted on the UASpeech corpus show that the VAEVE based variability encodings have a complementary effect to learning hidden unit contributions (LHUC) speaker adaptation. The systems using variability encodings consistently outperform comparable baseline systems without them, obtaining absolute word error rate (WER) reductions of up to 2.2% on dysarthric speech with the “Very low” intelligibility level, and up to 2% on the “Mixed” type of dysarthric speech with diverse or uncertain conditions.
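The core idea can be illustrated as a conditional VAE: the decoder receives the phoneme identity alongside the latent variable, so the latent only needs to capture what the phoneme label cannot explain. Below is a minimal numpy sketch of this setup under the standard SGVB reparameterization; all dimensions, layer sizes, and names (`FEAT_DIM`, `vaeve_step`, etc.) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

FEAT_DIM = 40     # acoustic feature dimension (illustrative)
LATENT_DIM = 8    # low-dimensional variability encoding (illustrative)
NUM_PHONES = 46   # phoneme inventory size (illustrative)
HID = 64          # hidden layer width (illustrative)

def linear(dim_in, dim_out):
    """Random-initialised affine layer parameters."""
    return rng.standard_normal((dim_in, dim_out)) * 0.05, np.zeros(dim_out)

# Encoder q(z|x): maps acoustic features to Gaussian parameters.
W_enc, b_enc = linear(FEAT_DIM, HID)
W_mu, b_mu = linear(HID, LATENT_DIM)
W_lv, b_lv = linear(HID, LATENT_DIM)
# Decoder p(x|z, phone): reconstructs features from latent + phoneme one-hot,
# so z is pushed to carry only phoneme-independent variability.
W_dec, b_dec = linear(LATENT_DIM + NUM_PHONES, HID)
W_out, b_out = linear(HID, FEAT_DIM)

def relu(a):
    return np.maximum(a, 0.0)

def vaeve_step(x, phone_onehot):
    """One forward pass; returns the variability encoding and the ELBO loss."""
    h = relu(x @ W_enc + b_enc)
    mu = h @ W_mu + b_mu
    logvar = h @ W_lv + b_lv
    # Reparameterization trick (stochastic gradient variational Bayes):
    # sample z differentiably from N(mu, diag(exp(logvar))).
    eps = rng.standard_normal(mu.shape)
    z = mu + np.exp(0.5 * logvar) * eps
    # Decoder conditions on the phoneme identity as well as z.
    d = relu(np.concatenate([z, phone_onehot]) @ W_dec + b_dec)
    x_hat = d @ W_out + b_out
    recon = np.mean((x - x_hat) ** 2)                       # reconstruction term
    kl = -0.5 * np.mean(1 + logvar - mu**2 - np.exp(logvar))  # KL to N(0, I)
    return mu, recon + kl

x = rng.standard_normal(FEAT_DIM)          # one acoustic frame
phone = np.zeros(NUM_PHONES); phone[3] = 1.0
encoding, loss = vaeve_step(x, phone)
```

At recognition time, the encoder mean `mu` would be appended to the acoustic features as an auxiliary input to the DNN acoustic model, in the spirit of the variability encodings described above.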


doi: 10.21437/Interspeech.2021-173

Cite as: Xie, X., Ruzi, R., Liu, X., Wang, L. (2021) Variational Auto-Encoder Based Variability Encoding for Dysarthric Speech Recognition. Proc. Interspeech 2021, 4808-4812, doi: 10.21437/Interspeech.2021-173

@inproceedings{xie21b_interspeech,
  author={Xurong Xie and Rukiye Ruzi and Xunying Liu and Lan Wang},
  title={{Variational Auto-Encoder Based Variability Encoding for Dysarthric Speech Recognition}},
  year=2021,
  booktitle={Proc. Interspeech 2021},
  pages={4808--4812},
  doi={10.21437/Interspeech.2021-173}
}