Third International Conference on Spoken Language Processing (ICSLP 94)

Yokohama, Japan
September 18-22, 1994

Text-To-Speech in the Speech Training of the Deaf: Adapting Models to Individual Speakers

Hector Javkin (1), Elizabeth Keate (1), Norma Antonanzas (1), Ranjun Zou (1), Karen Youdelman (2)

(1) Speech Technology Laboratory, Panasonic Technologies, Inc., Santa Barbara, CA, USA
(2) Lexington Center Inc., Jackson Heights, NY, USA

Computer-based speech training systems for deaf children provide, in addition to feedback on the children's speech production, acoustic and articulatory models for the children to imitate, either from pre-recorded utterances or from the teacher's on-line speech. It would be beneficial if such systems could also function when a teacher is not present and could teach more than is possible with pre-recorded models. Speech training might also be more successful if the models were adapted to the child's individual mode of producing speech sounds. Our system is designed to supplement the time that students have with teachers and to provide models optimized for a student's preferred mode of articulation by using a form of text-to-speech (TTS) to synthesize teaching models that are based on the student's own production. Although we are currently focusing on customizing tongue-palate contact data, the approach is applicable to other aspects of computer-based teaching.
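To illustrate the idea of individualized teaching models, the sketch below blends a canonical tongue-palate contact pattern with a pattern drawn from the student's own best production to form a personalized target. This is a minimal illustration only, not the authors' implementation: the electrode count, the binary frame representation, and the blending-and-thresholding rule are all assumptions introduced here for clarity.

```python
# Illustrative sketch (assumed representation, not from the paper):
# tongue-palate contact is modeled as a binary electrode frame, and an
# individualized teaching target is formed by blending the canonical
# pattern with the student's own closest successful attempt.

import numpy as np

N_ELECTRODES = 62  # assumed electropalatography frame size


def adapt_contact_target(canonical, student_best, adaptation_weight=0.5):
    """Return an individualized on/off contact target.

    canonical         -- binary array, textbook contact pattern for the phone
    student_best      -- binary array, the student's closest successful attempt
    adaptation_weight -- 0.0 keeps the canonical model, 1.0 uses the student's
                         own pattern; intermediate values blend the two.
    """
    canonical = np.asarray(canonical, dtype=float)
    student_best = np.asarray(student_best, dtype=float)
    blended = (1.0 - adaptation_weight) * canonical + adaptation_weight * student_best
    # Threshold back to a displayable on/off electrode pattern.
    return (blended >= 0.5).astype(int)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    canonical = rng.integers(0, 2, N_ELECTRODES)     # placeholder canonical pattern
    student = rng.integers(0, 2, N_ELECTRODES)       # placeholder student pattern
    target = adapt_contact_target(canonical, student, adaptation_weight=0.3)
    print(target)
```

A weight closer to 1.0 would keep the target nearer to what the student can already produce, which is the sense in which the models are "optimized for a student's preferred mode of articulation."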


Bibliographic reference. Javkin, Hector / Keate, Elizabeth / Antonanzas, Norma / Zou, Ranjun / Youdelman, Karen (1994): "Text-to-speech in the speech training of the deaf: adapting models to individual speakers", in ICSLP-1994, 1959-1962.