Learning an acoustic model directly from the raw waveform has been an active area of research. However, waveform-based models have not yet matched the performance of log-mel trained neural networks. We will show that raw waveform features match the performance of log-mel filterbank energies when used with a state-of-the-art CLDNN acoustic model trained on over 2,000 hours of speech. Specifically, we will show the benefit of the CLDNN, namely the time convolution layer for reducing temporal variations, the frequency convolution layer for preserving locality and reducing frequency variations, and the LSTM layers for temporal modeling. In addition, by stacking raw waveform features with log-mel features, we achieve a 3% relative reduction in word error rate.
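To make the front-end concrete, the sketch below approximates a learned time-convolution layer over a raw waveform: a bank of filters is convolved with each frame, rectified, max-pooled over time within the frame, and log-compressed, yielding filterbank-like features. This is a minimal NumPy illustration, not the paper's implementation; the frame size, hop, filter shapes, and floor constant are illustrative assumptions (e.g. a 25 ms frame and 10 ms hop at 16 kHz).

```python
import numpy as np

def raw_waveform_frontend(waveform, filters, frame_size=400, hop=160):
    """Illustrative raw-waveform front-end (hypothetical parameters).

    waveform: 1-D float array of samples.
    filters:  (n_filters, filt_len) array of time-domain filters
              (learned jointly with the network in the paper;
              random here purely for illustration).
    Returns:  (n_frames, n_filters) log-compressed features.
    """
    feats = []
    for start in range(0, len(waveform) - frame_size + 1, hop):
        frame = waveform[start:start + frame_size]
        # Time convolution: correlate each filter with the frame.
        conv = np.stack([np.convolve(frame, f, mode="valid") for f in filters])
        # Rectify, then max-pool over time within the frame,
        # discarding phase much like a spectral magnitude does.
        pooled = np.maximum(conv, 0.0).max(axis=1)
        # Log compression, analogous to log-mel energies.
        feats.append(np.log(pooled + 1e-6))
    return np.stack(feats)

# Usage: 1 second of audio at 16 kHz through 40 random 4 ms filters.
wav = np.random.randn(16000)
bank = np.random.randn(40, 64)
features = raw_waveform_frontend(wav, bank)  # shape (98, 40)
```

In the paper this filterbank is not fixed but trained end-to-end with the CLDNN, so the filters adapt to the recognition objective rather than to a hand-designed mel scale.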
Bibliographic reference: Sainath, Tara N. / Weiss, Ron J. / Senior, Andrew / Wilson, Kevin W. / Vinyals, Oriol (2015): "Learning the speech front-end with raw waveform CLDNNs", in Proceedings of INTERSPEECH 2015, 1-5.