Neural Network Adaptive Beamforming for Robust Multichannel Speech Recognition

Bo Li, Tara N. Sainath, Ron J. Weiss, Kevin W. Wilson, Michiel Bacchiani


Joint multichannel enhancement and acoustic modeling using neural networks has shown promise over the past few years. However, one shortcoming of previous work [1, 2, 3] is that the filters learned during training are fixed for decoding, potentially limiting the ability of these models to adapt to previously unseen or changing conditions. In this paper we explore a neural network adaptive beamforming (NAB) technique to address this issue. Specifically, we use LSTM layers to predict time domain beamforming filter coefficients at each input frame. These filters are convolved with the framed time domain input signal and summed across channels, essentially performing FIR filter-and-sum beamforming using the dynamically adapted filter. The beamformer output is passed into a waveform CLDNN acoustic model [4] which is trained jointly with the filter prediction LSTM layers. We find that the proposed NAB model achieves a 12.7% relative improvement in WER over a single channel model [4] and reaches similar performance to a “factored” model architecture which utilizes several fixed spatial filters [3] on a 2,000-hour Voice Search task, with a 17.9% decrease in computational cost.
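The core operation the abstract describes — convolving each channel's framed time-domain signal with a per-frame predicted FIR filter and summing across channels — can be sketched in NumPy as below. This is a minimal illustration of filter-and-sum beamforming with time-varying filters, not the paper's implementation; the function name, array shapes, and the use of `mode="same"` convolution are assumptions for the sketch (in the paper the filters come from LSTM layers trained jointly with the acoustic model).

```python
import numpy as np

def adaptive_filter_and_sum(frames, filters):
    """FIR filter-and-sum beamforming with per-frame adaptive filters.

    frames:  (num_frames, num_channels, frame_len) framed time-domain input
    filters: (num_frames, num_channels, filter_len) predicted filter taps,
             one set per frame (in the paper, output by LSTM layers)
    returns: (num_frames, frame_len) single-channel beamformed frames
    """
    num_frames, num_channels, frame_len = frames.shape
    out = np.zeros((num_frames, frame_len))
    for t in range(num_frames):
        for c in range(num_channels):
            # Convolve this channel's frame with its frame-specific filter,
            # then accumulate (sum) across channels.
            out[t] += np.convolve(frames[t, c], filters[t, c], mode="same")
    return out

# Toy usage: 2 frames, 2 channels, 8-sample frames, length-1 identity filters.
frames = np.random.randn(2, 2, 8)
filters = np.ones((2, 2, 1))  # delta filters -> output is the channel sum
y = adaptive_filter_and_sum(frames, filters)
```

With length-1 unit filters the operation reduces to plain delay-free channel summation, which makes the role of the learned taps clear: nonzero off-center taps implement the steering delays and spectral shaping of a conventional filter-and-sum beamformer.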


DOI: 10.21437/Interspeech.2016-173

Cite as

Li, B., Sainath, T.N., Weiss, R.J., Wilson, K.W., Bacchiani, M. (2016) Neural Network Adaptive Beamforming for Robust Multichannel Speech Recognition. Proc. Interspeech 2016, 1976-1980.

Bibtex
@inproceedings{Li+2016,
  author={Bo Li and Tara N. Sainath and Ron J. Weiss and Kevin W. Wilson and Michiel Bacchiani},
  title={Neural Network Adaptive Beamforming for Robust Multichannel Speech Recognition},
  year={2016},
  booktitle={Interspeech 2016},
  doi={10.21437/Interspeech.2016-173},
  url={http://dx.doi.org/10.21437/Interspeech.2016-173},
  pages={1976--1980}
}