Automatic Detection of Autism Spectrum Disorder in Children Using Acoustic and Text Features from Brief Natural Conversations

Sunghye Cho, Mark Liberman, Neville Ryant, Meredith Cola, Robert T. Schultz, Julia Parish-Morris


Autism Spectrum Disorder (ASD) is increasingly prevalent [1], but long waitlists hinder children’s access to timely diagnosis and treatment. To begin addressing this problem, we developed an automated system to detect ASD using acoustic and text features drawn from short, unstructured conversations with naïve conversation partners (confederates). Seventy children (35 with ASD and 35 typically developing (TD)) discussed a range of generic topics (e.g., pets, family, hobbies, and sports) with confederates for approximately 5 minutes. A total of 624 features (352 acoustic + 272 text) were incorporated into a Gradient Boosting Model. To reduce dimensionality and avoid overfitting, we dropped non-significant features and applied feature reduction using Principal Component Analysis. Our final model classified children with accuracy substantially above chance. Predictive features were both acoustic-phonetic and lexical, and came from both participants and confederates. The goal of this project is to develop an automatic detection system for ASD that relies on very brief, generic, and natural conversations, and that can eventually be used for ASD prescreening and triage in real-world settings such as doctor’s offices and schools.
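The modeling pipeline described above (624 features, PCA-based dimensionality reduction, gradient boosting) could be sketched as follows. This is a minimal illustration using scikit-learn with random placeholder data, not the authors' code or features; the component count and hyperparameters are assumptions for demonstration only.

```python
# Hypothetical sketch of the abstract's pipeline: PCA for dimensionality
# reduction feeding a gradient boosting classifier. Feature values are
# random placeholders, NOT the paper's acoustic/text features.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
n_children, n_features = 70, 624           # 35 ASD + 35 TD; 352 acoustic + 272 text
X = rng.normal(size=(n_children, n_features))
y = np.array([0] * 35 + [1] * 35)          # 0 = TD, 1 = ASD (placeholder labels)

clf = Pipeline([
    ("pca", PCA(n_components=20)),         # number of components is illustrative
    ("gbm", GradientBoostingClassifier(random_state=0)),
])

# 5-fold cross-validated accuracy; on random data this hovers near chance (0.5)
scores = cross_val_score(clf, X, y, cv=5)
print(round(scores.mean(), 3))
```

In practice, any feature selection (e.g., dropping non-significant features) must be performed inside the cross-validation folds to avoid leaking information from the test split.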


DOI: 10.21437/Interspeech.2019-1452

Cite as: Cho, S., Liberman, M., Ryant, N., Cola, M., Schultz, R.T., Parish-Morris, J. (2019) Automatic Detection of Autism Spectrum Disorder in Children Using Acoustic and Text Features from Brief Natural Conversations. Proc. Interspeech 2019, 2513-2517, DOI: 10.21437/Interspeech.2019-1452.


@inproceedings{Cho2019,
  author={Sunghye Cho and Mark Liberman and Neville Ryant and Meredith Cola and Robert T. Schultz and Julia Parish-Morris},
  title={{Automatic Detection of Autism Spectrum Disorder in Children Using Acoustic and Text Features from Brief Natural Conversations}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={2513--2517},
  doi={10.21437/Interspeech.2019-1452},
  url={http://dx.doi.org/10.21437/Interspeech.2019-1452}
}