Incorporating Symbolic Sequential Modeling for Speech Enhancement

Chien-Feng Liao, Yu Tsao, Xugang Lu, Hisashi Kawai


In a noisy environment, a degraded speech signal can often be mentally restored by a listener who knows the language well. That is, with the built-in knowledge of a “language model”, a listener may effectively suppress noise interference and retrieve the target speech signals. Accordingly, we argue that familiarity with the underlying linguistic content of spoken utterances benefits speech enhancement (SE) in noisy environments. In this study, in addition to the conventional modeling for learning the acoustic noisy-clean speech mapping, an abstract symbolic sequential modeling is incorporated into the SE framework. This symbolic sequential modeling can be regarded as a “linguistic constraint” in learning the acoustic noisy-clean speech mapping function. The symbolic sequences for acoustic signals are obtained as discrete representations with a Vector Quantized Variational Autoencoder (VQ-VAE) algorithm. The obtained symbols capture high-level, phoneme-like content from speech signals. The experimental results demonstrate that the proposed framework obtains notable improvement in terms of perceptual evaluation of speech quality (PESQ) and short-time objective intelligibility (STOI) on the TIMIT dataset.
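To make the discretization step concrete, here is a minimal NumPy sketch of the vector-quantization operation at the heart of a VQ-VAE: each continuous frame embedding is snapped to its nearest codebook vector, producing a discrete symbol sequence. This is an illustrative assumption-laden sketch, not the authors' implementation; the codebook size, embedding dimension, and the straight-through gradient estimator used in training are all omitted.

```python
import numpy as np

def vq_quantize(z, codebook):
    """Map each continuous frame embedding in z (shape T x D) to the
    index of its nearest codebook vector (shape K x D), yielding one
    discrete, phoneme-like symbol per frame.  Training-time details of
    VQ-VAE (straight-through gradients, commitment loss) are omitted;
    this only illustrates the discretization step."""
    # Squared Euclidean distance between every frame and every code.
    dists = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    symbols = dists.argmin(axis=1)   # discrete symbol id per frame
    quantized = codebook[symbols]    # frames replaced by their codes
    return symbols, quantized

# Toy example: 4 frames, 2-dim embeddings, 3 codebook entries.
codebook = np.array([[0.0, 0.0], [1.0, 1.0], [-1.0, 1.0]])
z = np.array([[0.1, -0.1], [0.9, 1.2], [-0.8, 0.9], [0.05, 0.0]])
symbols, quantized = vq_quantize(z, codebook)
print(symbols.tolist())  # → [0, 1, 2, 0]
```

The resulting symbol sequence is what the paper treats as an abstract "linguistic" representation that constrains the noisy-to-clean acoustic mapping.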


DOI: 10.21437/Interspeech.2019-1777

Cite as: Liao, C., Tsao, Y., Lu, X., Kawai, H. (2019) Incorporating Symbolic Sequential Modeling for Speech Enhancement. Proc. Interspeech 2019, 2733-2737, DOI: 10.21437/Interspeech.2019-1777.


@inproceedings{Liao2019,
  author={Chien-Feng Liao and Yu Tsao and Xugang Lu and Hisashi Kawai},
  title={{Incorporating Symbolic Sequential Modeling for Speech Enhancement}},
  year={2019},
  booktitle={Proc. Interspeech 2019},
  pages={2733--2737},
  doi={10.21437/Interspeech.2019-1777},
  url={http://dx.doi.org/10.21437/Interspeech.2019-1777}
}