ISCA Archive Interspeech 2021

Y2-Net FCRN for Acoustic Echo and Noise Suppression

Ernst Seidel, Jan Franzen, Maximilian Strake, Tim Fingscheidt

In recent years, deep neural networks (DNNs) have been studied as an alternative to traditional acoustic echo cancellation (AEC) algorithms. The proposed models achieved remarkable performance on the separate tasks of AEC and residual echo suppression (RES). A promising network topology is the fully convolutional recurrent network (FCRN) structure, which has already proven its performance on both noise suppression and AEC tasks individually. However, combining AEC, postfiltering, and noise suppression into a single network typically leads to a noticeable decline in the quality of the near-end speech component due to the lack of a separate loss for echo estimation. In this paper, we propose a two-stage model (Y2-Net) which consists of two FCRNs, each with two inputs and one output (Y-Net). The first stage (AEC) yields an echo estimate which, as a novelty for a DNN AEC model, is further used by the second stage to perform RES and noise suppression. While the subjective listening test of the Interspeech 2021 AEC Challenge mostly yielded results close to the baseline, the proposed method scored an average improvement of 0.46 points over the baseline on the blind test set in double-talk on the instrumental metric DECMOS, provided by the challenge organizers.
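The two-stage signal flow described in the abstract can be sketched as follows. This is a minimal illustration of the data flow only, not the paper's trained FCRNs: both stages are placeholder functions, the echo path is assumed to be a simple scaling of the far-end signal, and all function names are hypothetical.

```python
# Hedged sketch of the Y2-Net two-stage data flow (illustrative only).
# Each "Y-Net" stage has two inputs and one primary output, as in the paper;
# the internals here are placeholders, not learned FCRN layers.

def aec_stage(mic, far_end):
    """Stage 1 (AEC): inputs are the microphone and far-end reference
    signals. Assumption for illustration: the echo is a 0.5-scaled copy
    of the far-end signal. Returns the enhanced signal AND the echo
    estimate, which the paper novelly feeds to the second stage."""
    echo_estimate = [0.5 * x for x in far_end]          # placeholder echo path
    enhanced = [m - e for m, e in zip(mic, echo_estimate)]
    return enhanced, echo_estimate

def res_ns_stage(enhanced, echo_estimate):
    """Stage 2 (RES + noise suppression): inputs are the stage-1 output
    and the stage-1 echo estimate. Placeholder: pass-through."""
    return list(enhanced)

def y2_net(mic, far_end):
    """Cascade the two Y-Net stages into the full Y2-Net pipeline."""
    enhanced, echo_estimate = aec_stage(mic, far_end)
    return res_ns_stage(enhanced, echo_estimate)

# Toy usage: microphone = near-end speech + echo of the far-end signal.
far_end = [1.0, -1.0, 0.5]
near_end = [0.2, 0.3, -0.1]
mic = [n + 0.5 * x for n, x in zip(near_end, far_end)]
output = y2_net(mic, far_end)   # ideally recovers the near-end speech
```

Under this toy echo model, stage 1 removes the echo exactly, so the output matches the near-end speech; in the real system both stages are learned, and the separate echo estimate gives stage 2 explicit information about the residual echo.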


doi: 10.21437/Interspeech.2021-1590

Cite as: Seidel, E., Franzen, J., Strake, M., Fingscheidt, T. (2021) Y2-Net FCRN for Acoustic Echo and Noise Suppression. Proc. Interspeech 2021, 4763-4767, doi: 10.21437/Interspeech.2021-1590

@inproceedings{seidel21_interspeech,
  author={Ernst Seidel and Jan Franzen and Maximilian Strake and Tim Fingscheidt},
  title={{Y2-Net FCRN for Acoustic Echo and Noise Suppression}},
  year=2021,
  booktitle={Proc. Interspeech 2021},
  pages={4763--4767},
  doi={10.21437/Interspeech.2021-1590}
}