The paper describes a novel neuroevolution-based approach to training recurrent neural networks that process and classify audio directly from the raw waveform, without any assumptions about the signal itself, the features to be extracted, or the network topology required to perform the task. The resulting networks have a relatively small memory footprint, and their streaming mode of operation makes them particularly well suited to embedded real-time applications.
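To illustrate what "streaming" processing of raw audio by a small recurrent network means in practice, the sketch below runs a tiny fixed-topology Elman-style RNN over one sample at a time, with no buffering or feature extraction. This is an assumption-laden illustration only: the paper evolves both the topology and the weights via neuroevolution, whereas here the architecture, sizes, and random weights are placeholders standing in for an evolved individual.

import numpy as np

# Illustrative sketch only: a tiny fixed-topology Elman-style RNN that consumes
# raw audio one sample at a time and emits class scores at every step.
# In the paper, topology and weights are found by neuroevolution; here the
# random weights are placeholders for an evolved individual.

class StreamingRNNClassifier:
    def __init__(self, n_hidden=8, n_classes=2, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(scale=0.1, size=(n_hidden,))            # raw sample -> hidden
        self.W_rec = rng.normal(scale=0.1, size=(n_hidden, n_hidden))  # hidden -> hidden (recurrence)
        self.W_out = rng.normal(scale=0.1, size=(n_classes, n_hidden)) # hidden -> class scores
        self.h = np.zeros(n_hidden)                                    # persistent state across samples

    def step(self, sample):
        """Consume one raw audio sample (float in [-1, 1]) and return class scores."""
        self.h = np.tanh(self.W_in * sample + self.W_rec @ self.h)
        return self.W_out @ self.h

# Streaming usage: feed samples as they arrive; memory cost is the small state vector.
if __name__ == "__main__":
    clf = StreamingRNNClassifier()
    waveform = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)  # 1 s of a 440 Hz tone at 16 kHz
    for sample in waveform:
        scores = clf.step(sample)
    print("predicted class:", int(np.argmax(scores)))

Because the network keeps only a small recurrent state between samples, classification can run continuously on an incoming signal, which is the property the paper highlights for embedded real-time use.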
Cite as: Daniel, A. (2017) Evolving Recurrent Neural Networks That Process and Classify Raw Audio in a Streaming Fashion. Proc. Interspeech 2017, 2040-2041
@inproceedings{daniel17_interspeech,
  author    = {Adrien Daniel},
  title     = {{Evolving Recurrent Neural Networks That Process and Classify Raw Audio in a Streaming Fashion}},
  year      = {2017},
  booktitle = {Proc. Interspeech 2017},
  pages     = {2040--2041}
}