EUROSPEECH 2001 Scandinavia
7th European Conference on Speech Communication and Technology
2nd INTERSPEECH Event

Aalborg, Denmark
September 3-7, 2001


Real-Time Multiple Speaker Tracking by Multi-Modal Integration for Mobile Robots

Kazuhiro Nakadai (1), Ken-ichi Hidai (1), Hiroshi G. Okuno (2), Hiroaki Kitano (3)

(1) Japan Science and Technology Corp., Japan
(2) Kyoto University, Japan
(3) Sony Computer Science Laboratories, Japan

This paper addresses real-time multiple speaker tracking, which is essential for robot perception and human-robot social interaction. The difficulty lies in handling a mixture of sounds, occlusion (some talkers are hidden), and the demand for real-time processing. Our approach consists of three components: (1) extraction of each speaker's direction using interaural phase difference and interaural intensity difference; (2) resolution of each speaker's direction by multi-modal integration of audition, vision, and motion, which cancels the motor noise inevitably produced by motion and copes with speakers who are unseen or silent; and (3) a distributed implementation on three PCs connected by a TCP/IP network to attain real-time processing. As a result, we achieve robust real-time speaker tracking with a 200 ms delay in a non-anechoic room, even when multiple speakers are present and the tracked speaker is visually occluded.
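
To make component (1) concrete, the following is a minimal Python/NumPy sketch of direction estimation from interaural phase difference under a free-field, far-field two-microphone model. It is not the authors' implementation: the microphone spacing, frequency cutoffs, and median pooling across frequency bins are illustrative assumptions.

    import numpy as np

    SPEED_OF_SOUND = 343.0   # m/s at room temperature (assumption)
    MIC_SPACING = 0.18       # m between the two microphones (hypothetical)

    def estimate_azimuth(left: np.ndarray, right: np.ndarray, fs: int) -> float:
        """Estimate one azimuth (degrees) from a stereo frame via IPD."""
        n = len(left)
        window = np.hanning(n)
        spec_l = np.fft.rfft(left * window)
        spec_r = np.fft.rfft(right * window)
        freqs = np.fft.rfftfreq(n, d=1.0 / fs)

        # Interaural phase difference per frequency bin, wrapped to (-pi, pi].
        ipd = np.angle(spec_l * np.conj(spec_r))

        # Keep bins below the spatial-aliasing frequency, where the
        # IPD-to-delay mapping is unambiguous, and with usable energy.
        max_f = SPEED_OF_SOUND / (2.0 * MIC_SPACING)
        power = np.abs(spec_l) * np.abs(spec_r)
        mask = (freqs > 100.0) & (freqs < max_f) & (power > 1e-8)
        if not np.any(mask):
            return float("nan")

        # Delay tau = ipd / (2*pi*f); far-field model tau = d*sin(theta)/c.
        tau = ipd[mask] / (2.0 * np.pi * freqs[mask])
        sin_theta = np.clip(tau * SPEED_OF_SOUND / MIC_SPACING, -1.0, 1.0)
        # The median across bins is a cheap guard against noisy estimates.
        return float(np.degrees(np.arcsin(np.median(sin_theta))))

Called on successive short frames (e.g. 20-40 ms), this yields a stream of per-frame azimuth estimates that a tracker could then associate with visual detections, which is the role of the multi-modal integration in component (2).
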


Bibliographic reference. Nakadai, Kazuhiro / Hidai, Ken-ichi / Okuno, Hiroshi G. / Kitano, Hiroaki (2001): "Real-time multiple speaker tracking by multi-modal integration for mobile robots", in EUROSPEECH-2001, 1193-1196.