Speech Enhancement with Wide Residual Networks in Reverberant Environments

Jorge Llombart, Dayana Ribas, Antonio Miguel, Luis Vicente, Alfonso Ortega, Eduardo Lleida


This paper proposes a speech enhancement method that exploits the high potential of residual connections in a Wide Residual Network architecture. It is built on one-dimensional convolutions computed along the time axis, a powerful approach for processing contextually correlated representations in the temporal domain, such as speech feature sequences. We find the residual mechanism extremely useful for the enhancement task, since the signal always has a linear shortcut and the non-linear path refines it in several steps by adding or subtracting corrections. The enhancement capability of the proposal is assessed by objective quality metrics evaluated on simulated and real samples of reverberant speech. Results show that the proposal outperforms the state-of-the-art WPE method, which is known to effectively reduce reverberation and greatly enhance the signal. The proposed model, trained with artificially synthesized reverberation data, was able to generalize to real room impulse responses across a variety of conditions (e.g. different room sizes, RT60, near- and far-field). Furthermore, it performs accurately on real reverberant speech from two different datasets.
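The residual mechanism described above can be illustrated with a minimal sketch. This is not the authors' implementation (their model uses learned convolutional layers in a Wide Residual Network); it only shows the core idea that the enhanced output is the input plus a correction, y = x + f(x), so the block reduces to the identity when the correction path contributes nothing. The `conv1d` and `residual_block` helpers and the kernel values are hypothetical.

```python
def conv1d(x, kernel):
    """Same-padded 1-D convolution over a feature sequence (list of floats).
    Stands in for the learned time-axis convolutions of the real model."""
    k = len(kernel)
    pad = k // 2
    padded = [0.0] * pad + list(x) + [0.0] * pad
    return [sum(kernel[j] * padded[i + j] for j in range(k))
            for i in range(len(x))]

def residual_block(x, kernel):
    """y = x + f(x): the linear shortcut preserves the signal while the
    convolutional path adds (or subtracts) a correction."""
    correction = conv1d(x, kernel)
    return [xi + ci for xi, ci in zip(x, correction)]

# With an all-zero kernel the correction vanishes and the block is the
# identity, which is what makes deep residual stacks well suited to
# enhancement: each stage only has to learn a small refinement.
x = [0.5, -1.0, 2.0, 0.25]
assert residual_block(x, [0.0, 0.0, 0.0]) == x
```

Stacking several such blocks gives the multi-step refinement behavior the abstract refers to, with the clean signal path always available through the shortcuts.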


DOI: 10.21437/Interspeech.2019-1745

Cite as: Llombart, J., Ribas, D., Miguel, A., Vicente, L., Ortega, A., Lleida, E. (2019) Speech Enhancement with Wide Residual Networks in Reverberant Environments. Proc. Interspeech 2019, 1811-1815, DOI: 10.21437/Interspeech.2019-1745.


@inproceedings{Llombart2019,
  author={Jorge Llombart and Dayana Ribas and Antonio Miguel and Luis Vicente and Alfonso Ortega and Eduardo Lleida},
  title={{Speech Enhancement with Wide Residual Networks in Reverberant Environments}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={1811--1815},
  doi={10.21437/Interspeech.2019-1745},
  url={http://dx.doi.org/10.21437/Interspeech.2019-1745}
}