Who Said That?: Audio-Visual Speaker Diarisation of Real-World Meetings

Joon Son Chung, Bong-Jin Lee, Icksang Han


The goal of this work is to determine ‘who spoke when’ in real-world meetings. The method takes surround-view video and single or multi-channel audio as inputs, and generates robust diarisation outputs.

To achieve this, we propose a novel iterative approach that first enrols speaker models using audio-visual correspondence, then uses the enrolled models together with the visual information to determine the active speaker.
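The two-stage idea can be illustrated with a minimal sketch. This is not the authors' implementation: the segment fields (`face`, `av_score`, `audio_emb`), the lip-sync threshold, and the cosine-similarity assignment are all illustrative assumptions standing in for the paper's actual models.

```python
import math

def cosine(a, b):
    # Cosine similarity between two toy embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def diarise(segments, av_sync_threshold=0.8):
    """segments: dicts with hypothetical fields 'face' (a tracked face id),
    'av_score' (audio-visual correspondence, e.g. a lip-sync score) and
    'audio_emb' (a speaker embedding for that speech segment)."""
    # Stage 1: enrol a speaker model for each face whose lip motion
    # clearly matches the audio (high audio-visual correspondence).
    enrolled = {}  # face id -> enrolled speaker embedding
    for seg in segments:
        if seg["av_score"] >= av_sync_threshold:
            enrolled.setdefault(seg["face"], seg["audio_emb"])
    # Stage 2: attribute every segment (including ambiguous ones) to the
    # enrolled speaker whose model best matches the segment's audio.
    return [
        max(enrolled, key=lambda f: cosine(enrolled[f], seg["audio_emb"]))
        for seg in segments
    ]
```

Even when a segment's visual evidence is weak (e.g. the speaker faces away from the camera), stage 2 can still attribute it, because the speaker models were enrolled from the unambiguous segments.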

We show strong quantitative and qualitative performance on a dataset of real-world meetings. The method is also evaluated on the public AMI meeting corpus, on which we demonstrate results that exceed those of all comparable methods. We also show that beamforming can be used together with the video to further improve the performance when multi-channel audio is available.
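As a point of reference for how multi-channel audio can help, below is a sketch of a classical delay-and-sum beamformer; the abstract does not specify which beamforming method is used, so this is a generic illustration, with integer per-channel delays assumed known from the steering direction.

```python
def delay_and_sum(channels, delays):
    """channels: equal-length lists of samples from each microphone.
    delays: per-channel arrival delay in whole samples relative to the
    reference microphone; advancing each channel by its delay aligns the
    wavefronts from the target direction before averaging."""
    n = len(channels[0])
    out = []
    for i in range(n):
        acc = 0.0
        for ch, d in zip(channels, delays):
            j = i + d  # advance the channel to compensate its delay
            if 0 <= j < n:
                acc += ch[j]
        out.append(acc / len(channels))
    return out
```

Averaging the aligned channels reinforces the signal from the target direction while uncorrelated noise and interfering speakers are attenuated, which is why steering the beam at the visually detected active speaker can sharpen the diarisation.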


DOI: 10.21437/Interspeech.2019-3116

Cite as: Chung, J.S., Lee, B., Han, I. (2019) Who Said That?: Audio-Visual Speaker Diarisation of Real-World Meetings. Proc. Interspeech 2019, 371-375, DOI: 10.21437/Interspeech.2019-3116.


@inproceedings{Chung2019,
  author={Joon Son Chung and Bong-Jin Lee and Icksang Han},
  title={{Who Said That?: Audio-Visual Speaker Diarisation of Real-World Meetings}},
  year={2019},
  booktitle={Proc. Interspeech 2019},
  pages={371--375},
  doi={10.21437/Interspeech.2019-3116},
  url={http://dx.doi.org/10.21437/Interspeech.2019-3116}
}