Multi-Span Acoustic Modelling Using Raw Waveform Signals

P. von Platen, Chao Zhang, P.C. Woodland


Traditional automatic speech recognition (ASR) systems often use an acoustic model (AM) built on handcrafted acoustic features, such as log Mel-filter bank (FBANK) values. Recent studies have found that AMs with convolutional neural networks (CNNs) can use the raw waveform signal directly as input and, given sufficient training data, can yield word error rates (WERs) competitive with those of AMs built on FBANK features. This paper proposes a novel multi-span structure for acoustic modelling based on the raw waveform, with multiple streams of CNN input layers, each processing a different span of the raw waveform signal. Evaluation on both the single-channel CHiME4 and AMI data sets shows that multi-span AMs give a WER about 5% lower (relative, on average) than FBANK AMs. Analysis of the trained multi-span model reveals that the CNNs can learn filters that are rather different from the log Mel-filters. Furthermore, the paper shows that a widely used single-span raw waveform AM can be further improved by using a smaller CNN kernel size and an increased stride.
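To make the multi-span idea concrete, the following is a minimal NumPy sketch of a front-end with several parallel CNN streams, each convolving the raw waveform with a different kernel (span) size and concatenating the resulting features. The random filter values, the specific kernel sizes and stride, and the magnitude pooling are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(signal, filters, stride):
    # Valid 1-D convolution of a mono signal with a bank of filters.
    # filters: (n_filters, kernel_size); returns (n_frames, n_filters).
    n_filters, k = filters.shape
    n_frames = (len(signal) - k) // stride + 1
    out = np.empty((n_frames, n_filters))
    for t in range(n_frames):
        out[t] = filters @ signal[t * stride : t * stride + k]
    return out

def multi_span_features(signal, span_kernels, n_filters=40, stride=160):
    # One CNN stream per span (kernel size). Random filters stand in
    # for learned ones here. Stream outputs are truncated to a common
    # frame count and concatenated along the feature axis.
    streams = []
    for k in span_kernels:
        filters = rng.standard_normal((n_filters, k)) / np.sqrt(k)
        streams.append(np.abs(conv1d(signal, filters, stride)))
    n_frames = min(s.shape[0] for s in streams)
    return np.concatenate([s[:n_frames] for s in streams], axis=1)

# 1 s of 16 kHz audio; spans of 25 ms and 50 ms (400 and 800 samples)
wave = rng.standard_normal(16000)
feats = multi_span_features(wave, span_kernels=[400, 800])
print(feats.shape)  # (frames, 2 streams x 40 filters)
```

With a 160-sample (10 ms) stride the two streams produce slightly different frame counts, so the shorter one sets the common length before concatenation; in a real model each stream's filters would be trained jointly with the rest of the AM.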


DOI: 10.21437/Interspeech.2019-2454

Cite as: Platen, P.V., Zhang, C., Woodland, P. (2019) Multi-Span Acoustic Modelling Using Raw Waveform Signals. Proc. Interspeech 2019, 1393-1397, DOI: 10.21437/Interspeech.2019-2454.


@inproceedings{Platen2019,
  author={P. von Platen and Chao Zhang and P.C. Woodland},
  title={{Multi-Span Acoustic Modelling Using Raw Waveform Signals}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={1393--1397},
  doi={10.21437/Interspeech.2019-2454},
  url={http://dx.doi.org/10.21437/Interspeech.2019-2454}
}