Multichannel Loss Function for Supervised Speech Source Separation by Mask-Based Beamforming

Yoshiki Masuyama, Masahito Togami, Tatsuya Komatsu


In this paper, we propose two mask-based beamforming methods using a deep neural network (DNN) trained with multichannel loss functions. Beamforming using time-frequency (TF) masks estimated by a DNN has been applied to many tasks, where the TF masks are used to estimate spatial covariance matrices. To train a DNN for mask-based beamforming, loss functions designed for monaural speech enhancement/separation have typically been employed. Although such training criteria are simple, they do not directly correspond to the performance of the resulting mask-based beamformer. To overcome this problem, we use multichannel loss functions that evaluate the estimated spatial covariance matrices based on the multichannel Itakura–Saito divergence. DNNs trained with the multichannel loss functions can be used to construct several types of beamformers. Experimental results confirmed their effectiveness and their robustness to microphone configurations.
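Below is a minimal NumPy sketch of the two ingredients the abstract names: estimating a spatial covariance matrix (SCM) from a TF mask, and scoring it with the multichannel Itakura–Saito divergence. The function names (masked_scm, mis_divergence) and the per-frequency-bin layout are illustrative assumptions, not the authors' implementation; the divergence follows the standard definition D(R1, R2) = tr(R1 R2^{-1}) - log det(R1 R2^{-1}) - M for M x M positive-definite matrices.

import numpy as np

def masked_scm(X, mask, eps=1e-6):
    # X: (M, T) complex STFT coefficients of M channels at one frequency bin.
    # mask: (T,) TF-mask values in [0, 1] for the same bin.
    # Returns the mask-weighted M x M Hermitian spatial covariance matrix.
    num = np.einsum('t,mt,nt->mn', mask, X, X.conj())
    return num / (mask.sum() + eps)

def mis_divergence(R_hat, R_ref, eps=1e-6):
    # Multichannel Itakura-Saito divergence between two M x M
    # positive-definite SCMs:
    #   tr(R_hat R_ref^{-1}) - log det(R_hat R_ref^{-1}) - M.
    M = R_ref.shape[-1]
    R_ref = R_ref + eps * np.eye(M)      # regularize before inversion
    Z = np.linalg.solve(R_ref, R_hat)    # Z = R_ref^{-1} R_hat
    _, logdet = np.linalg.slogdet(Z)     # det(Z) is real-positive here
    return np.trace(Z).real - logdet - M

In a complete system, R_hat would be built from DNN-estimated masks, a sum (or average) of the divergence over frequency bins would serve as the training loss, and the resulting SCMs would parameterize a beamformer such as an MVDR filter; those steps, and the exact weighting used in the paper, are omitted here.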


DOI: 10.21437/Interspeech.2019-1289

Cite as: Masuyama, Y., Togami, M., Komatsu, T. (2019) Multichannel Loss Function for Supervised Speech Source Separation by Mask-Based Beamforming. Proc. Interspeech 2019, 2708–2712, DOI: 10.21437/Interspeech.2019-1289.


@inproceedings{Masuyama2019,
  author={Yoshiki Masuyama and Masahito Togami and Tatsuya Komatsu},
  title={{Multichannel Loss Function for Supervised Speech Source Separation by Mask-Based Beamforming}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={2708--2712},
  doi={10.21437/Interspeech.2019-1289},
  url={http://dx.doi.org/10.21437/Interspeech.2019-1289}
}