Physically Constrained Statistical F0 Prediction for Electrolaryngeal Speech Enhancement

Kou Tanaka, Hirokazu Kameoka, Tomoki Toda, Satoshi Nakamura


Electrolaryngeal (EL) speech, produced by a laryngectomee using an electrolarynx to mechanically generate artificial excitation sounds, severely suffers from unnatural fundamental frequency (F0) patterns caused by the monotonic excitation sounds. To address this issue, we previously proposed EL speech enhancement systems using statistical F0 pattern prediction methods based on a Gaussian Mixture Model (GMM), making it possible to predict the underlying F0 pattern of EL speech from its spectral feature sequence. Our previous work revealed that the naturalness of the predicted F0 pattern can be improved by incorporating a physically based generative model of F0 patterns into the GMM-based statistical F0 prediction system within a Product-of-Experts framework. However, one drawback of this method is that it requires an iterative procedure to obtain a predicted F0 pattern, making it difficult to realize a real-time system. In this paper, we propose yet another approach to physically based statistical F0 pattern prediction using an HMM-GMM framework. This approach is noteworthy in that it makes it possible to generate an F0 pattern that is both statistically likely and physically natural without iterative procedures. Experimental results demonstrated that the proposed method was capable of generating F0 patterns more similar to those of normal speech than the conventional GMM-based method.
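As a rough illustration of the GMM-based mapping the abstract refers to (a standard joint-density GMM regression, not the authors' exact implementation), the sketch below fits a GMM on joint spectral/F0 feature vectors and predicts F0 as the conditional expectation given a spectral feature. The toy data, dimensionality, and function name `predict_f0` are assumptions for demonstration only.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical toy data: a 1-D "spectral" feature x and an F0 value y (Hz).
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(500, 1))
y = 120 + 30 * np.tanh(2 * x) + rng.normal(0, 2, size=(500, 1))

# Fit a GMM on the joint vectors z = [x, y].
gmm = GaussianMixture(n_components=4, covariance_type="full", random_state=0)
gmm.fit(np.hstack([x, y]))

def predict_f0(x_new, gmm, dx=1):
    """Conditional-mean prediction E[y | x] under the joint GMM."""
    mu, S, w = gmm.means_, gmm.covariances_, gmm.weights_
    mu_x, mu_y = mu[:, :dx], mu[:, dx:]
    Sxx, Sxy = S[:, :dx, :dx], S[:, :dx, dx:]
    preds = []
    for xn in np.atleast_2d(x_new):
        # Responsibilities p(k | x); additive constants cancel on normalization.
        logp = []
        for k in range(len(w)):
            d = xn - mu_x[k]
            inv = np.linalg.inv(Sxx[k])
            logp.append(np.log(w[k]) - 0.5 * (d @ inv @ d)
                        - 0.5 * np.log(np.linalg.det(Sxx[k])))
        logp = np.array(logp)
        r = np.exp(logp - logp.max())
        r /= r.sum()
        # Per-component conditional means E[y | x, k], mixed by responsibility.
        cond = np.array([mu_y[k] + (xn - mu_x[k]) @ np.linalg.inv(Sxx[k]) @ Sxy[k]
                         for k in range(len(w))])
        preds.append((r[:, None] * cond).sum(axis=0))
    return np.array(preds)

f0_hat = predict_f0(x, gmm)
```

The paper's contribution replaces the purely statistical mapping above with one constrained by a physical F0 generation model; this sketch shows only the GMM baseline being improved upon.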


DOI: 10.21437/Interspeech.2017-688

Cite as: Tanaka, K., Kameoka, H., Toda, T., Nakamura, S. (2017) Physically Constrained Statistical F0 Prediction for Electrolaryngeal Speech Enhancement. Proc. Interspeech 2017, 1069-1073, DOI: 10.21437/Interspeech.2017-688.


@inproceedings{Tanaka2017,
  author={Kou Tanaka and Hirokazu Kameoka and Tomoki Toda and Satoshi Nakamura},
  title={Physically Constrained Statistical F0 Prediction for Electrolaryngeal Speech Enhancement},
  year={2017},
  booktitle={Proc. Interspeech 2017},
  pages={1069--1073},
  doi={10.21437/Interspeech.2017-688},
  url={http://dx.doi.org/10.21437/Interspeech.2017-688}
}