This paper describes our first steps toward advancing human-machine interactive systems for in-vehicle environments within the UTDrive project. UTDrive is part of an on-going international collaboration to collect and research rich multi-modal data for modeling driver behavior while the driver interacts with speech-activated systems or performs other secondary tasks. A simultaneous second goal is to better understand the speech characteristics of drivers under additional cognitive load, since dialog systems are generally not designed for high task-stress environments (e.g., driving a vehicle). The corpus consists of audio, video, brake/gas pedal pressure, forward distance, GPS information, and CAN-Bus information. The resulting corpus, analysis, and modeling will contribute to more effective speech systems that can sense driver cognitive distraction/stress and adapt themselves to the driver's cognitive capacity and driving situation for improved safety while driving.
Bibliographic reference: Angkititrakul, Pongtep / Kwak, DongGu / Choi, SangJo / Kim, JeongHee / PhucPhan, Anh / Sathyanarayana, Amardeep / Hansen, John H. L. (2007): "Getting start with UTDrive: driver-behavior modeling and assessment of distraction for in-vehicle speech systems", in INTERSPEECH-2007, 1334-1337.