Bandwidth Embeddings for Mixed-Bandwidth Speech Recognition

Gautam Mantena, Ozlem Kalinli, Ossama Abdel-Hamid, Don McAllaster


In this paper, we tackle the problem of handling narrowband and wideband speech by building a single acoustic model (AM), also called a mixed-bandwidth AM. In the proposed approach, an auxiliary input feature provides the bandwidth information to the model, and bandwidth embeddings are jointly learned as part of acoustic model training. Experimental evaluations show that using bandwidth embeddings helps the model handle the variability between narrowband and wideband speech, making it possible to train a mixed-bandwidth AM. Furthermore, we propose using parallel convolutional layers to better handle the mismatch between narrowband and wideband speech, where a separate convolution layer is used for each type of input speech signal. Our best system achieves a 13% relative improvement on narrowband speech without degrading performance on wideband speech.
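The two ideas in the abstract can be sketched in a few lines: a learnable lookup table that maps a bandwidth id to an embedding appended to every acoustic frame, and a per-bandwidth convolution branch selected by that same id. This is a minimal illustrative sketch, not the authors' actual architecture; all dimensions, names, and the random weights are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Bandwidth embeddings: one learnable row per bandwidth
# (0 = narrowband / 8 kHz, 1 = wideband / 16 kHz). Sizes are illustrative.
EMB_DIM = 4
FEAT_DIM = 40
bandwidth_embeddings = rng.standard_normal((2, EMB_DIM))

def augment_features(frames, bandwidth_id):
    """Append the bandwidth embedding to every acoustic frame (T x FEAT_DIM)."""
    emb = np.tile(bandwidth_embeddings[bandwidth_id], (frames.shape[0], 1))
    return np.concatenate([frames, emb], axis=1)

# Parallel convolution layers: a separate filter per bandwidth; the
# bandwidth id routes each utterance to its own convolution branch.
KERNEL = 3
conv_filters = {
    0: rng.standard_normal((KERNEL, FEAT_DIM + EMB_DIM)),  # narrowband branch
    1: rng.standard_normal((KERNEL, FEAT_DIM + EMB_DIM)),  # wideband branch
}

def parallel_conv(frames, bandwidth_id):
    """'Valid' 1-D convolution over time, using the branch for this bandwidth."""
    w = conv_filters[bandwidth_id]
    n_out = frames.shape[0] - KERNEL + 1
    return np.array([np.sum(frames[t:t + KERNEL] * w) for t in range(n_out)])

utt = rng.standard_normal((10, FEAT_DIM))   # 10 frames of 40-dim features
x = augment_features(utt, bandwidth_id=0)   # tag utterance as narrowband
y = parallel_conv(x, bandwidth_id=0)        # route through narrowband branch
print(x.shape, y.shape)                     # (10, 44) (8,)
```

In a real AM both the embedding table and the convolution weights would be trained jointly with the rest of the network by backpropagation; here they are frozen random values purely to show the data flow.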


DOI: 10.21437/Interspeech.2019-2589

Cite as: Mantena, G., Kalinli, O., Abdel-Hamid, O., McAllaster, D. (2019) Bandwidth Embeddings for Mixed-Bandwidth Speech Recognition. Proc. Interspeech 2019, 3203-3207, DOI: 10.21437/Interspeech.2019-2589.


@inproceedings{Mantena2019,
  author={Gautam Mantena and Ozlem Kalinli and Ossama Abdel-Hamid and Don McAllaster},
  title={{Bandwidth Embeddings for Mixed-Bandwidth Speech Recognition}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={3203--3207},
  doi={10.21437/Interspeech.2019-2589},
  url={http://dx.doi.org/10.21437/Interspeech.2019-2589}
}