15th Annual Conference of the International Speech Communication Association

September 14-18, 2014

Simple Gesture-Based Error Correction Interface for Smartphone Speech Recognition

Yuan Liang (1), Koji Iwano (2), Koichi Shinoda (1)

(1) Tokyo Institute of Technology, Japan
(2) Tokyo City University, Japan

Conventional error correction interfaces for speech recognition require a user to first mark an error region and then choose the correct word from a candidate list. Taking into account the user's effort and the limited user interface of a smartphone, this operation should be simpler. In this paper, we propose an interface in which the user marks the error region once, and the erroneous word is then replaced by another candidate. Assuming that the words preceding and succeeding the error region have been validated by the user, we search Web n-grams for long word sequences matching this context. The acoustic features of the error region are also utilized to rerank the candidate words. Experimental results demonstrated the effectiveness of our method: 30.2% of the error words were corrected by a single operation.
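The context-matching step described above can be sketched as follows. This is a minimal illustration, not the paper's actual system: it uses a hypothetical toy trigram table in place of the Web n-gram corpus, and it omits the acoustic reranking of the error region.

```python
# Sketch: given the validated left/right neighbors of an error region,
# retrieve replacement candidates whose n-gram context matches.
# TRIGRAMS is hypothetical toy data, not the paper's Web n-gram corpus.
from collections import defaultdict

# Toy trigram counts: (left word, center word, right word) -> count
TRIGRAMS = {
    ("recognize", "speech", "today"): 50,
    ("recognize", "beach", "today"): 2,
    ("wreck", "a", "nice"): 10,
}

def candidates(left, right):
    """Rank candidate words whose trigram context matches the
    user-validated neighbors, most frequent first."""
    scores = defaultdict(int)
    for (l, w, r), count in TRIGRAMS.items():
        if l == left and r == right:
            scores[w] += count
    return sorted(scores, key=scores.get, reverse=True)

print(candidates("recognize", "today"))  # -> ['speech', 'beach']
```

In the proposed interface, a list like this would then be reranked with the acoustic features of the marked region before the top candidate replaces the error.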


Bibliographic reference.  Liang, Yuan / Iwano, Koji / Shinoda, Koichi (2014): "Simple gesture-based error correction interface for smartphone speech recognition", In INTERSPEECH-2014, 1194-1198.