Ultrasound-Based Silent Speech Interface Built on a Continuous Vocoder

Tamás Gábor Csapó, Mohammed Salah Al-Radhi, Géza Németh, Gábor Gosztolya, Tamás Grósz, László Tóth, Alexandra Markó


It was recently shown in the Silent Speech Interface (SSI) field that F0 can be predicted from Ultrasound Tongue Images (UTI) as the articulatory input, using Deep Neural Networks for articulatory-to-acoustic mapping. Moreover, text-to-speech synthesizers have been shown to produce higher-quality speech when using a continuous pitch estimate, which takes non-zero pitch values even when voicing is not present. Therefore, in this paper on UTI-based SSI, we use a simple continuous F0 tracker which does not apply a strict voiced/unvoiced decision. Continuous vocoder parameters (ContF0, Maximum Voiced Frequency and Mel-Generalized Cepstrum) are predicted using a convolutional neural network, with UTI as input. The results demonstrate that in the articulatory-to-acoustic mapping experiments, the continuous F0 is predicted with lower error, and the continuous vocoder produces slightly more natural synthesized speech than the baseline vocoder using standard discontinuous F0.
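To illustrate the core idea of a continuous F0 contour, the sketch below fills the unvoiced (0 Hz) frames of a standard discontinuous F0 track by linear interpolation between the surrounding voiced frames. This is a simplified, hypothetical stand-in for the continuous F0 tracker used in the paper, not the authors' actual algorithm; the function name and 0 Hz unvoiced-frame convention are assumptions for the example.

```python
import numpy as np

def continuous_f0(f0, eps=1e-8):
    """Make a discontinuous F0 track continuous.

    Frames with F0 <= eps are treated as unvoiced and are filled by
    linear interpolation between neighboring voiced frames, so every
    frame carries a non-zero pitch value (a simplified stand-in for a
    continuous F0 tracker with no strict voiced/unvoiced decision).
    """
    f0 = np.asarray(f0, dtype=float)
    voiced = f0 > eps
    if not voiced.any():
        return f0  # no voiced frames to interpolate from
    idx = np.arange(len(f0))
    # np.interp also extends the edge values before the first and
    # after the last voiced frame
    return np.interp(idx, idx[voiced], f0[voiced])

# Example: 0 Hz marks unvoiced frames in the discontinuous input track
track = [0.0, 0.0, 120.0, 125.0, 0.0, 0.0, 140.0, 0.0]
print(continuous_f0(track))
# → [120. 120. 120. 125. 130. 135. 140. 140.]
```

In a real system such a contour would be smoothed and combined with the Maximum Voiced Frequency parameter, which takes over the role of the hard voiced/unvoiced decision in the continuous vocoder.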


DOI: 10.21437/Interspeech.2019-2046

Cite as: Csapó, T.G., Al-Radhi, M.S., Németh, G., Gosztolya, G., Grósz, T., Tóth, L., Markó, A. (2019) Ultrasound-Based Silent Speech Interface Built on a Continuous Vocoder. Proc. Interspeech 2019, 894-898, DOI: 10.21437/Interspeech.2019-2046.


@inproceedings{Csapó2019,
  author={Tamás Gábor Csapó and Mohammed Salah Al-Radhi and Géza Németh and Gábor Gosztolya and Tamás Grósz and László Tóth and Alexandra Markó},
  title={{Ultrasound-Based Silent Speech Interface Built on a Continuous Vocoder}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={894--898},
  doi={10.21437/Interspeech.2019-2046},
  url={http://dx.doi.org/10.21437/Interspeech.2019-2046}
}