When individuals with dysarthria, a motor speech disorder, try to communicate by speech, they often have no choice but to use speech synthesizers that require them to type word or sound symbols. This input method makes real-time communication troublesome, and dysarthric users struggle to hold smoothly flowing conversations. In this study, we are developing a novel speech synthesizer in which speech is generated from hand motions rather than symbol input. In recent years, statistical voice conversion techniques based on space mapping between given parallel utterances have been proposed. By applying these techniques, a hand space was mapped to a vowel space, and a converter from hand motions to vowel transitions was developed. We have reported that the proposed method is effective enough to generate the five Japanese vowels. In this paper, we discuss the extension of this system to consonant generation. To create gestures for consonants, a Speech-to-Hand conversion system is first developed using parallel data for vowels, in which consonants are not included. Candidate consonant gestures for a Hand-to-Speech system can then be searched for automatically.
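The statistical space mapping described above is commonly realized with a joint-density Gaussian mixture model (GMM) trained on time-aligned parallel data, converting source features via the conditional expectation E[y|x]. The sketch below illustrates this general technique with generic feature vectors; the feature dimensions, component count, and training details are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture


def train_joint_gmm(x, y, n_components=2, seed=0):
    """Fit a GMM on joint vectors z = [x; y] from parallel data
    (e.g. hand-motion features x aligned with vowel features y)."""
    z = np.hstack([x, y])
    gmm = GaussianMixture(n_components=n_components,
                          covariance_type="full",
                          random_state=seed)
    gmm.fit(z)
    return gmm


def convert(gmm, x, dx):
    """Map source features x to target features via E[y|x] under
    the joint GMM; dx is the dimensionality of x."""
    mx = gmm.means_[:, :dx]              # per-component source means
    my = gmm.means_[:, dx:]              # per-component target means
    cxx = gmm.covariances_[:, :dx, :dx]  # source-source covariances
    cyx = gmm.covariances_[:, dx:, :dx]  # target-source covariances
    n, k = x.shape[0], gmm.n_components

    # Posterior component responsibilities p(m | x) from the
    # marginal source-space Gaussians.
    w = np.zeros((n, k))
    for m in range(k):
        w[:, m] = gmm.weights_[m] * multivariate_normal.pdf(x, mx[m], cxx[m])
    w /= w.sum(axis=1, keepdims=True)

    # Mixture of per-component linear regressions: the classic
    # minimum-mean-square-error GMM conversion.
    y_hat = np.zeros((n, gmm.means_.shape[1] - dx))
    for m in range(k):
        a = cyx[m] @ np.linalg.inv(cxx[m])
        y_hat += w[:, [m]] * (my[m] + (x - mx[m]) @ a.T)
    return y_hat
```

The same machinery runs in either direction simply by swapping which stream is treated as source and which as target, which is what allows a Speech-to-Hand converter to be trained from the same parallel data as the Hand-to-Speech converter.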
Bibliographic reference. Kunikoshi, Aki / Qiao, Yu / Saito, Daisuke / Minematsu, Nobuaki / Hirose, Keikichi (2011): "Gesture design of hand-to-speech converter derived from speech-to-hand converter based on probabilistic integration model", in Proc. INTERSPEECH 2011, pp. 3025-3028.