INTERSPEECH 2012
13th Annual Conference of the International Speech Communication Association

Portland, OR, USA
September 9-13, 2012

Whole-Word Recognition from Articulatory Movements for Silent Speech Interfaces

Jun Wang (1,2,3), Ashok Samal (2), Jordan R. Green (1,3), Frank Rudzicz (4)

(1) Department of Special Education & Communication Disorders, University of Nebraska-Lincoln, Lincoln, NE, USA
(2) Department of Computer Science & Engineering, University of Nebraska-Lincoln, Lincoln, NE, USA
(3) Munroe-Meyer Institute, University of Nebraska Medical Center, Omaha, NE, USA
(4) Department of Computer Science, University of Toronto, Toronto, Ont., Canada

Articulation-based silent speech interfaces convert silently produced speech movements into audible words. These systems are still in their experimental stages but have significant potential for facilitating oral communication in persons with laryngectomy or other speech impairments. In this paper, we report the results of a novel, real-time algorithm that recognizes whole words based on articulatory movements. This approach differs from prior work, which has focused primarily on phoneme-level recognition from articulatory features. On average, our algorithm missed 1.93 words in a sequence of twenty-five words, with an average latency of 0.79 seconds per word prediction, on a data set of 5,500 isolated-word samples collected from ten speakers. These results demonstrate the effectiveness of our approach and its potential for building a real-time, articulation-based silent speech interface for health applications.
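To make the recognition setup concrete, the sketch below illustrates one plausible reading of the pipeline the abstract describes: articulatory movement trajectories are resampled to a fixed length, flattened into feature vectors, and classified into a 25-word vocabulary with a support vector machine (consistent with the index terms). This is not the authors' implementation; the library (scikit-learn), the synthetic stand-in data, and all dimensions (frame count, sensor channels, samples per word) are illustrative assumptions.

    # Minimal sketch (not the authors' implementation): whole-word
    # classification from articulatory movement trajectories with an SVM,
    # using synthetic data in place of real articulograph recordings.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    N_WORDS = 25           # vocabulary size, matching the 25-word sequences
    SAMPLES_PER_WORD = 20  # hypothetical; the study used 5,500 samples total
    N_FRAMES = 10          # each trajectory resampled to a fixed frame count
    N_CHANNELS = 6         # hypothetical channels (e.g., x/y of 3 articulators)

    rng = np.random.default_rng(0)

    def fixed_length(trajectory, n_frames=N_FRAMES):
        """Resample a (time, channels) trajectory to n_frames by linear
        interpolation, then flatten it into one feature vector."""
        t_old = np.linspace(0.0, 1.0, len(trajectory))
        t_new = np.linspace(0.0, 1.0, n_frames)
        resampled = np.column_stack(
            [np.interp(t_new, t_old, trajectory[:, c])
             for c in range(trajectory.shape[1])]
        )
        return resampled.ravel()

    # Synthetic stand-in data: one noisy prototype trajectory per word.
    X, y = [], []
    for word in range(N_WORDS):
        prototype = rng.standard_normal((30, N_CHANNELS)).cumsum(axis=0)
        for _ in range(SAMPLES_PER_WORD):
            noisy = prototype + 0.3 * rng.standard_normal(prototype.shape)
            X.append(fixed_length(noisy))
            y.append(word)
    X, y = np.array(X), np.array(y)

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0
    )
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X_train, y_train)
    print(f"held-out word accuracy: {clf.score(X_test, y_test):.3f}")

Resampling every production to a fixed number of frames is one simple way to turn variable-duration movements into fixed-length vectors for an SVM; the paper's actual feature extraction and real-time decision procedure may differ.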

Index Terms: silent speech recognition, speech impairment, laryngectomy, support vector machine

Bibliographic reference. Wang, Jun / Samal, Ashok / Green, Jordan R. / Rudzicz, Frank (2012): "Whole-word recognition from articulatory movements for silent speech interfaces", In INTERSPEECH-2012, 1327-1330.