EUROSPEECH '97
5th European Conference on Speech Communication and Technology

Rhodes, Greece
September 22-25, 1997


Integration of Eye Fixation Information with Speech Recognition Systems

Ramesh R. Sarukkai, Craig Hunter

Dept. of Computer Science University of Rochester Rochester, NY, USA

In this paper, a semi-tight coupling between the visual and auditory modalities is proposed: in particular, eye fixation information is used to enhance the output of speech recognition systems. This is achieved by treating natural human eye fixations as deictic references to symbolic objects and passing this information on to the speech recognizer. The speech recognizer biases its search towards this set of symbols/words during the best word sequence search. As an illustrative example, the TRAINS interactive planning assistant system has been used as a test-bed; eye fixations provide important cues to the city names that the user sees on the map. Experimental results indicate that eye fixations help reduce speech recognition errors. This work suggests that integrating information from different interfaces so that they bootstrap each other would enable the development of reliable and robust interactive multi-modal human-computer systems.
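The biasing idea can be illustrated with a minimal post-hoc rescoring sketch: boost the log score of any recognition hypothesis that contains a recently fixated word (e.g. a city name on the map). This is only an approximation of the paper's approach, which couples fixations into the search itself; the function, the boost weight, and the example city names below are all illustrative assumptions, not the authors' implementation.

```python
def rescore_with_fixations(hypotheses, fixated_words, boost=2.0):
    """Rescore N-best speech recognition hypotheses using eye fixations.

    hypotheses: list of (word_sequence, log_score) pairs
    fixated_words: set of lowercase words seen in recent fixations
    boost: log-score bonus per matching word (a tunable weight)
    """
    rescored = []
    for words, log_score in hypotheses:
        # Add a bonus for each word the user recently looked at.
        bonus = boost * sum(1 for w in words if w.lower() in fixated_words)
        rescored.append((words, log_score + bonus))
    # Return hypotheses best-first under the adjusted score.
    return sorted(rescored, key=lambda h: h[1], reverse=True)

# Hypothetical example: the user fixated on "avon" on the map
# just before speaking, so the fixation-consistent hypothesis
# overtakes an acoustically slightly better one.
fixations = {"avon"}
nbest = [
    (["go", "to", "akron"], -10.2),  # better acoustic score
    (["go", "to", "avon"], -10.9),   # matches the fixated city
]
best_words, best_score = rescore_with_fixations(nbest, fixations)[0]
```

Here the fixated hypothesis scores -10.9 + 2.0 = -8.9, beating -10.2, so "go to avon" is ranked first.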

Full Paper

Bibliographic reference. Sarukkai, Ramesh R. / Hunter, Craig (1997): "Integration of eye fixation information with speech recognition systems", In EUROSPEECH-1997, 1639-1643.