Automatic personality analysis has gained attention in recent years as a fundamental dimension of both human-to-human and human-to-machine interaction. However, the field still suffers from the limited number and size of speech corpora available for specific domains, such as the assessment of children’s personality. This paper investigates a semi-supervised training approach to tackle this scenario. We devise an experimental setup with both age and language mismatch and two training sets: a small labeled training set from the Interspeech 2012 Personality Sub-challenge, containing French adult speech annotated with the OCEAN personality traits, and a large unlabeled training set of Portuguese children’s speech. As test set, we use a corpus of Portuguese children’s speech labeled with the OCEAN traits. In this setting, we investigate a weak-supervision approach that iteratively refines an initial model, trained on the labeled dataset, using the unlabeled dataset. We also investigate knowledge-based features, which leverage expert knowledge of acoustic-prosodic cues and thus require no extra data. Results show that, despite the large mismatch imposed by the language and age differences, these techniques yield improvements, pointing to the benefits of both weak supervision and expert-based acoustic-prosodic features across age and language.
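The iterative weak-supervision loop described above can be sketched as a generic self-training (pseudo-labeling) procedure: train an initial model on the small labeled set, label the unlabeled pool, absorb only the confident predictions, and retrain. This is a minimal illustrative sketch, not the authors' implementation; the nearest-centroid classifier, the margin-based confidence proxy, and all names and thresholds are assumptions chosen to keep the example self-contained.

```python
import numpy as np

def fit_centroids(X, y):
    """Toy nearest-centroid 'model' (a stand-in for whatever regressor or
    classifier is actually used): one mean vector per class."""
    classes = np.unique(y)
    return classes, np.array([X[y == c].mean(axis=0) for c in classes])

def predict_with_margin(model, X):
    """Predict the nearest class and a confidence proxy: the gap between
    the distances to the two nearest centroids."""
    classes, centroids = model
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    order = np.argsort(d, axis=1)
    pred = classes[order[:, 0]]
    rows = np.arange(len(X))
    margin = d[rows, order[:, 1]] - d[rows, order[:, 0]]
    return pred, margin

def self_train(X_lab, y_lab, X_unlab, n_iters=5, margin_thresh=1.0):
    """Iteratively refine a model trained on the small labeled set by
    pseudo-labeling confident examples from the large unlabeled set."""
    X, y, remaining = X_lab.copy(), y_lab.copy(), X_unlab.copy()
    model = fit_centroids(X, y)
    for _ in range(n_iters):
        if len(remaining) == 0:
            break
        pred, margin = predict_with_margin(model, remaining)
        confident = margin >= margin_thresh
        if not confident.any():
            break
        # Move confidently pseudo-labeled examples into the training pool.
        X = np.vstack([X, remaining[confident]])
        y = np.concatenate([y, pred[confident]])
        remaining = remaining[~confident]
        model = fit_centroids(X, y)  # retrain on the enlarged pool
    return model
```

In the paper's cross-lingual, cross-age setting, the labeled pool would hold the French adult data and the unlabeled pool the Portuguese children's speech; the confidence threshold controls how aggressively mismatched data is absorbed.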
Cite as: Solera-Ureña, R., Moniz, H., Batista, F., Cabarrão, V., Pompili, A., Astudillo, R.F., Campos, J., Paiva, A., Trancoso, I. (2017) A Semi-Supervised Learning Approach for Acoustic-Prosodic Personality Perception in Under-Resourced Domains. Proc. Interspeech 2017, 929-933, doi: 10.21437/Interspeech.2017-1732
@inproceedings{soleraurena17_interspeech,
  author={Rubén Solera-Ureña and Helena Moniz and Fernando Batista and Vera Cabarrão and Anna Pompili and Ramon Fernandez Astudillo and Joana Campos and Ana Paiva and Isabel Trancoso},
  title={{A Semi-Supervised Learning Approach for Acoustic-Prosodic Personality Perception in Under-Resourced Domains}},
  year=2017,
  booktitle={Proc. Interspeech 2017},
  pages={929--933},
  doi={10.21437/Interspeech.2017-1732}
}