Evolving Learning for Analysing Mood-Related Infant Vocalisation

Zixing Zhang, Jing Han, Kun Qian, Björn Schuller


Infant vocalisation analysis plays an important role in studying the development of infants' pre-speech capability, and machine-based approaches are now emerging to advance such analysis. However, conventional machine learning techniques require heavy feature engineering and careful architecture design. In this paper, we present an evolving learning framework that automates the design of neural network structures for infant vocalisation analysis. In contrast to manual search by trial and error, we aim to automate the search process within a given space with minimal human interference. The framework consists of a controller and its child networks, where the child networks are built according to the controller's estimation. When applying the framework to the Interspeech 2018 Computational Paralinguistics (ComParE) Crying Sub-challenge, we discover several deep recurrent neural network structures that deliver results competitive with the best ComParE baseline method.
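The controller/child-network loop described above can be illustrated with a minimal evolutionary-search sketch. Everything below is a toy illustration, not the authors' actual method: the search space, the mutation rule, and especially the `fitness` function (a stand-in for training a child recurrent network and measuring its validation performance) are all hypothetical assumptions.

```python
import random

# Toy search space over child-network hyperparameters (illustrative only).
SEARCH_SPACE = {"num_layers": [1, 2, 3], "hidden_size": [32, 64, 128, 256]}

def random_config(rng):
    """Sample a random child-network configuration from the search space."""
    return {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}

def mutate(config, rng):
    """Re-sample one hyperparameter at random to produce a new candidate."""
    child = dict(config)
    key = rng.choice(list(SEARCH_SPACE))
    child[key] = rng.choice(SEARCH_SPACE[key])
    return child

def fitness(config):
    # Stand-in for training the child network and scoring it on held-out
    # data; here we simply favour mid-sized networks for demonstration.
    return -abs(config["hidden_size"] - 128) - abs(config["num_layers"] - 2)

def evolve(generations=10, population_size=8, seed=0):
    """Keep the fittest half of each generation and refill it with mutants."""
    rng = random.Random(seed)
    population = [random_config(rng) for _ in range(population_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: population_size // 2]
        population = survivors + [
            mutate(rng.choice(survivors), rng)
            for _ in range(population_size - len(survivors))
        ]
    return max(population, key=fitness)

best = evolve()
print(best)
```

In the paper's setting, evaluating `fitness` would mean training and validating a full recurrent child network, which is why automating this loop matters: it replaces manual trial-and-error with a guided search over the same space.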


DOI: 10.21437/Interspeech.2018-1914

Cite as: Zhang, Z., Han, J., Qian, K., Schuller, B. (2018) Evolving Learning for Analysing Mood-Related Infant Vocalisation. Proc. Interspeech 2018, 142-146, DOI: 10.21437/Interspeech.2018-1914.


@inproceedings{Zhang2018,
  author={Zixing Zhang and Jing Han and Kun Qian and Björn Schuller},
  title={Evolving Learning for Analysing Mood-Related Infant Vocalisation},
  year={2018},
  booktitle={Proc. Interspeech 2018},
  pages={142--146},
  doi={10.21437/Interspeech.2018-1914},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1914}
}