Frequency Estimation from Waveforms Using Multi-Layered Neural Networks

Prateek Verma, Ronald W. Schafer


For frequency estimation in noisy speech or music signals, time-domain methods based on signal processing techniques such as autocorrelation or the average magnitude difference function often do not perform well. As deep neural networks (DNNs) have become feasible, some researchers have attempted, with some success, to improve on signal-processing-based methods by learning on autocorrelation, Fourier transform, or constant-Q filter bank representations. In our approach, blocks of signal samples are input directly to a neural network to perform end-to-end learning. The emergence of sub-harmonic structure in the posterior vector of the output layer, along with analysis of the filter-like structures emerging in the DNN, shows strong correlations with some signal-processing-based approaches. These networks appear to learn a nonlinearly spaced frequency representation in the first layer, followed by comb-like filters. We find that learning representations from raw time-domain signals can achieve performance on par with current state-of-the-art algorithms for frequency estimation in noisy and polyphonic settings. The emergence of sub-harmonic structure in the posterior vector suggests that existing post-processing techniques, such as harmonic product spectra and salience mapping, may further improve performance.
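For context, the classical time-domain baseline the abstract contrasts with can be sketched as follows. This is a minimal, illustrative autocorrelation-based pitch estimator (not the paper's DNN method); the function name, pitch-range parameters, and test signal are assumptions chosen for the example.

```python
import numpy as np

def estimate_f0_autocorr(x, fs, fmin=50.0, fmax=500.0):
    """Illustrative autocorrelation pitch estimator (not the paper's method).

    Picks the lag of the largest autocorrelation peak within the
    plausible pitch range [fmin, fmax] and converts it to frequency.
    """
    x = x - np.mean(x)
    n = len(x)
    # Full autocorrelation; keep only non-negative lags.
    r = np.correlate(x, x, mode="full")[n - 1:]
    lag_min = int(fs / fmax)               # shortest period considered
    lag_max = min(int(fs / fmin), n - 1)   # longest period considered
    lag = lag_min + np.argmax(r[lag_min:lag_max + 1])
    return fs / lag

# Example: a mildly noisy 220 Hz sinusoid sampled at 16 kHz.
fs = 16000
t = np.arange(int(0.05 * fs)) / fs
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 220.0 * t) + 0.1 * rng.standard_normal(t.size)
f0 = estimate_f0_autocorr(x, fs)
```

On clean or mildly noisy periodic signals this works well; the abstract's point is that such estimators degrade in heavy noise and polyphony, which motivates the learned, end-to-end approach.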


DOI: 10.21437/Interspeech.2016-679

Cite as

Verma, P., Schafer, R.W. (2016) Frequency Estimation from Waveforms Using Multi-Layered Neural Networks. Proc. Interspeech 2016, 2165-2169.

Bibtex
@inproceedings{Verma+2016,
  author={Prateek Verma and Ronald W. Schafer},
  title={Frequency Estimation from Waveforms Using Multi-Layered Neural Networks},
  year={2016},
  booktitle={Interspeech 2016},
  doi={10.21437/Interspeech.2016-679},
  url={http://dx.doi.org/10.21437/Interspeech.2016-679},
  pages={2165--2169}
}