Cycle-Consistent Speech Enhancement

Zhong Meng, Jinyu Li, Yifan Gong, Biing-Hwang (Fred) Juang


Feature mapping with deep neural networks is an effective approach to single-channel speech enhancement: noisy features are transformed into enhanced ones through a mapping network, and the mean square error between the enhanced and clean features is minimized. In this paper, we propose cycle-consistent speech enhancement (CSE), in which an additional inverse mapping network is introduced to reconstruct the noisy features from the enhanced ones, and a cycle-consistency constraint is enforced to minimize the reconstruction loss. A backward cycle of mappings is performed in the opposite direction with the same networks and losses. With cycle consistency, the speech structure is well preserved in the enhanced features while noise is effectively reduced, so the feature mapping network generalizes better to unseen data. When only non-parallel noisy and clean data is available for training, two discriminator networks are used to distinguish the reconstructed clean and noisy features from real ones, and the discrimination losses are jointly optimized with the reconstruction losses through adversarial multi-task learning. Evaluated on the CHiME-3 dataset, the proposed CSE achieves 19.60% and 6.69% relative word error rate improvements with and without parallel clean and noisy speech data, respectively.
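The forward and backward cycle losses described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the mapping network F (noisy to enhanced) and the inverse network G (enhanced to noisy) are stand-in linear maps here, and all variable names are hypothetical; the paper trains deep networks and, in the unpaired case, adds adversarial discriminator losses on top of these terms.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 40  # illustrative feature dimension (e.g. log-Mel filterbanks)

# Stand-in "networks": random linear maps instead of trained deep nets.
W_f = rng.standard_normal((D, D)) * 0.1  # F: noisy -> enhanced
W_g = rng.standard_normal((D, D)) * 0.1  # G: enhanced -> noisy

def F(x):
    return x @ W_f

def G(x):
    return x @ W_g

def mse(a, b):
    """Mean square error between two feature matrices."""
    return float(np.mean((a - b) ** 2))

# Toy paired features: 100 frames each of noisy and clean speech.
noisy = rng.standard_normal((100, D))
clean = rng.standard_normal((100, D))

# Forward cycle: noisy -> F -> enhanced -> G -> reconstructed noisy.
enhanced = F(noisy)
forward_cycle_loss = mse(G(enhanced), noisy)

# Backward cycle: clean -> G -> noisy-like -> F -> reconstructed clean.
backward_cycle_loss = mse(F(G(clean)), clean)

# With parallel data, the direct feature-mapping MSE is also minimized.
mapping_loss = mse(enhanced, clean)

total_loss = mapping_loss + forward_cycle_loss + backward_cycle_loss
print(total_loss)
```

In training, all three terms would be minimized jointly over the parameters of F and G; in the unpaired setting, the `mapping_loss` term is unavailable and discriminator losses take its place through adversarial multi-task learning.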


DOI: 10.21437/Interspeech.2018-2409

Cite as: Meng, Z., Li, J., Gong, Y., Juang, B.-H. (2018) Cycle-Consistent Speech Enhancement. Proc. Interspeech 2018, 1165-1169, DOI: 10.21437/Interspeech.2018-2409.


@inproceedings{Meng2018,
  author={Zhong Meng and Jinyu Li and Yifan Gong and Biing-Hwang (Fred) Juang},
  title={Cycle-Consistent Speech Enhancement},
  year=2018,
  booktitle={Proc. Interspeech 2018},
  pages={1165--1169},
  doi={10.21437/Interspeech.2018-2409},
  url={http://dx.doi.org/10.21437/Interspeech.2018-2409}
}