11th Annual Conference of the International Speech Communication Association

Makuhari, Chiba, Japan
September 26-30, 2010

Automatic Classification of Married Couples' Behavior Using Audio Features

Matthew Black (1), Athanasios Katsamanis (1), Chi-Chun Lee (1), Adam C. Lammert (1), Brian R. Baucom (1), Andrew Christensen (2), Panayiotis G. Georgiou (1), Shrikanth S. Narayanan (1)

(1) University of Southern California, USA
(2) University of California at Los Angeles, USA

In this work, we analyzed a 96-hour corpus of married couples spontaneously interacting about a problem in their relationship. Each spouse was manually coded with relevant session-level perceptual observations (e.g., level of blame toward the other spouse, global positive affect), and our goal was to classify the spouses' behavior using features derived from the audio signal. Based on automatic segmentation, we extracted prosodic and spectral features to capture global acoustic properties for each spouse. We then trained gender-specific classifiers to predict each spouse's behavior for six codes. We compare performance across codes, gender, classifier type, and feature type, and discuss future work for this novel and challenging corpus.
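The pipeline the abstract describes (frame-level acoustic tracks reduced to global session statistics, then a per-gender classifier over a session-level code) can be sketched as below. This is an illustrative sketch only, not the authors' implementation: the feature names, the synthetic data, and the nearest-centroid classifier are all assumptions standing in for the paper's actual prosodic/spectral features and classifiers.

```python
# Hypothetical sketch: session-level binary classification (e.g., high vs.
# low blame) from global acoustic statistics, trained separately per gender.
import math
import random

def session_features(pitch_track, energy_track):
    """Reduce frame-level prosodic tracks to global session-level statistics
    (mean and standard deviation of each track)."""
    def stats(xs):
        mean = sum(xs) / len(xs)
        var = sum((x - mean) ** 2 for x in xs) / len(xs)
        return [mean, math.sqrt(var)]
    return stats(pitch_track) + stats(energy_track)

class NearestCentroid:
    """Minimal stand-in for a session-level classifier."""
    def fit(self, X, y):
        self.centroids = {}
        for label in set(y):
            rows = [x for x, lab in zip(X, y) if lab == label]
            self.centroids[label] = [sum(col) / len(rows) for col in zip(*rows)]
        return self

    def predict(self, x):
        # Assign the label whose centroid is closest in squared distance.
        return min(self.centroids,
                   key=lambda lab: sum((a - b) ** 2
                                       for a, b in zip(x, self.centroids[lab])))

# Train one classifier per gender on synthetic placeholder data:
# "high blame" sessions are simulated with a higher mean pitch.
random.seed(0)
models = {}
for gender in ("husband", "wife"):
    X, y = [], []
    for label, pitch_mu in (("high_blame", 180.0), ("low_blame", 140.0)):
        for _ in range(20):
            pitch = [random.gauss(pitch_mu, 15.0) for _ in range(100)]
            energy = [random.gauss(60.0, 5.0) for _ in range(100)]
            X.append(session_features(pitch, energy))
            y.append(label)
    models[gender] = NearestCentroid().fit(X, y)

# Classify a new (synthetic) session for one spouse.
test_session = session_features([random.gauss(182.0, 15.0) for _ in range(100)],
                                [random.gauss(60.0, 5.0) for _ in range(100)])
print(models["husband"].predict(test_session))
```

Training separate models per gender, as in the paper, lets each model capture gender-dependent acoustic baselines (e.g., pitch range) rather than forcing one decision boundary across both spouses.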


Bibliographic reference. Black, Matthew / Katsamanis, Athanasios / Lee, Chi-Chun / Lammert, Adam C. / Baucom, Brian R. / Christensen, Andrew / Georgiou, Panayiotis G. / Narayanan, Shrikanth S. (2010): "Automatic classification of married couples' behavior using audio features", in INTERSPEECH-2010, 2030-2033.