In this work we present a new multi-modal database for the analysis of participant behaviors in dyadic interactions. The database contains multiple channels: close- and far-field audio, a high-definition camera array, and motion capture data. The presence of motion capture allows precise analysis of low-level body-language descriptors and their comparison with similar descriptors derived from the video data. The data are manually labeled by multiple human annotators using psychology-informed guidelines. This work also presents an initial analysis of approach-avoidance (A-A) behavior. Two sets of annotations are provided: one based on video only, and the other obtained using both the audio and video channels. Additionally, we describe the statistics of interaction descriptors and A-A labels conditioned on participants' roles. Finally, we provide an analysis of the relations between various non-verbal features and approach-avoidance labels.
Bibliographic reference. Rozgić, Viktor / Xiao, Bo / Katsamanis, Athanasios / Baucom, Brian R. / Georgiou, Panayiotis G. / Narayanan, Shrikanth S. (2010): "A new multichannel multimodal dyadic interaction database", in INTERSPEECH-2010, 1982-1985.