In this paper, we propose a novel interface for powered wheelchair control based on acoustic recognition of head gestures accompanying speech. A microphone array mounted on the wheelchair localizes the position of the user's voice. Because the localized voice position closely corresponds to that of the mouth, head movements accompanying speech can be tracked by means of the microphone array. The proposed interface does not require the user to wear a microphone or to utter recognizable voice commands; it requires only two capabilities: the ability to move the head and the ability to utter an arbitrary sound. In our preliminary experiments, five subjects performed six kinds of head gestures accompanying speech. The head gestures of each subject were recognized using models trained on the other subjects' data. The average recognition accuracy was 99.7%.
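The abstract does not detail how the array localizes the voice; a common building block for such acoustic localization is time-difference-of-arrival (TDOA) estimation between microphone pairs. The sketch below is purely illustrative, assuming a standard GCC-PHAT estimator (not necessarily the method used in the paper), and demonstrates recovering a known inter-microphone delay from a simulated noise burst.

```python
import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None):
    """Estimate the time delay of `sig` relative to `ref` via GCC-PHAT."""
    n = sig.shape[0] + ref.shape[0]
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    R = SIG * np.conj(REF)
    R /= np.abs(R) + 1e-12          # PHAT weighting: keep phase, discard magnitude
    cc = np.fft.irfft(R, n=n)
    max_shift = n // 2
    if max_tau is not None:
        max_shift = min(int(fs * max_tau), max_shift)
    # Re-center the circular cross-correlation around zero lag
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / fs               # delay in seconds

# Demo: a noise signal arriving at mic2 five samples after mic1
fs = 16000
rng = np.random.default_rng(0)
x = rng.standard_normal(2048)
delay = 5
mic1 = x
mic2 = np.concatenate((np.zeros(delay), x[:-delay]))
tau = gcc_phat(mic2, mic1, fs)      # ~= delay / fs seconds
```

Given the estimated delays across several microphone pairs and the known array geometry, the source position (here, the mouth) can be triangulated, which is what enables tracking head movements without a body-worn microphone.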
Bibliographic reference. Sasou, Akira (2011): "Powered wheelchair control using acoustic-based recognition of head gesture accompanying speech", In INTERSPEECH-2011, 3029-3032.