ISCA Archive Interspeech 2005

Non-parametric speaker turn segmentation of meeting data

Petr Motlíček, Lukáš Burget, Jan Černocký

An extension of the conventional speaker segmentation framework is presented for a scenario in which several microphones record the activity of speakers at a meeting (one microphone per speaker). Although each microphone can receive speech both from the participant wearing it (local speech) and from other participants (cross-talk), the recorded audio can be broadly classified into three classes: local speech, cross-talk, and silence. This paper proposes a technique that uses cross-correlations, the values of their maxima, and energy differences as features to identify and segment speaker turns. In particular, we have used classical cross-correlation functions, time smoothing, and in part temporal constraints to sharpen and disambiguate timing differences between microphone channels that may be dominated by noise and reverberation. Experimental results show that the proposed technique can be successfully used for speaker segmentation of data collected from a number of different setups.
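The feature set named in the abstract (the cross-correlation maximum, its lag, and the inter-channel energy difference) can be sketched per analysis frame as follows. This is a minimal illustration, not the paper's implementation: the function name `xcorr_features`, the normalization, and the floor constants are assumptions, and the paper's time smoothing and temporal constraints are omitted.

```python
import numpy as np

def xcorr_features(ch_a, ch_b, max_lag):
    """For one frame from two personal microphones, return the
    normalized cross-correlation peak value, its lag in samples,
    and the log-energy difference between the channels."""
    # Zero-mean, unit-norm copies so the correlation peak is in [-1, 1].
    a = ch_a - ch_a.mean()
    a = a / (np.linalg.norm(a) + 1e-12)
    b = ch_b - ch_b.mean()
    b = b / (np.linalg.norm(b) + 1e-12)

    # Full cross-correlation; index len(ch_b)-1 corresponds to zero lag.
    cc = np.correlate(a, b, mode="full")
    center = len(ch_b) - 1
    lags = np.arange(-max_lag, max_lag + 1)
    window = cc[center - max_lag : center + max_lag + 1]

    k = int(np.argmax(window))
    peak_val = float(window[k])
    peak_lag = int(lags[k])

    # Energy difference in dB; a large positive value suggests the
    # speech is local to channel A rather than cross-talk.
    e_diff = 10.0 * np.log10((np.sum(ch_a**2) + 1e-12) /
                             (np.sum(ch_b**2) + 1e-12))
    return peak_val, peak_lag, e_diff
```

A frame-level classifier along the lines of the paper would then threshold or model these three features to label each frame as local speech, cross-talk, or silence.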

doi: 10.21437/Interspeech.2005-190

Cite as: Motlíček, P., Burget, L., Černocký, J. (2005) Non-parametric speaker turn segmentation of meeting data. Proc. Interspeech 2005, 657-660, doi: 10.21437/Interspeech.2005-190

@inproceedings{motlicek05_interspeech,
  author={Petr Motlíček and Lukáš Burget and Jan Černocký},
  title={{Non-parametric speaker turn segmentation of meeting data}},
  year=2005,
  booktitle={Proc. Interspeech 2005},
  pages={657--660},
  doi={10.21437/Interspeech.2005-190}
}