5th International Conference on Spoken Language Processing

Sydney, Australia
November 30 - December 4, 1998

Development of CAI System Employing Synthesized Speech Responses

Tsubasa Shinozaki, Masanobu Abe

NTT Human Interface Labs., Japan

This paper proposes a Computer Assisted Instruction (CAI) system that teaches students how to write Japanese characters. The most important feature of the system is the use of synthesized speech to interact with users. The CAI system has a video display tablet interface: a user traces the pattern of a character with the tablet pen, and the trace is shown on the display in real time. When the trace strays outside the pattern, the system immediately outputs synthesized speech to correct the error. To design strategies for generating instructions, the behavior and instruction messages of a human teacher were recorded and analyzed. One of the most interesting features of the system is a function that changes the "personality" of the teacher, for example a strict teacher, a friendly teacher, or a short-tempered teacher. The experimental results confirmed that the proposed system can convey a particular impression of the teacher through synthesized speech.
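
The abstract describes a loop in which each pen sample is echoed on the display and a spoken correction is triggered whenever the trace leaves the character pattern. The paper does not publish its implementation, so the following Python sketch is only an illustration under assumed names and data layout (a boolean stroke mask and stand-in display/speech callables), not the authors' code.

    # Illustrative sketch only: the stroke-mask representation and all function
    # names below are assumptions, not taken from the paper.

    def is_inside_pattern(pattern_mask, x, y):
        """Return True if the pen position (x, y) lies on the character stroke."""
        return (0 <= y < len(pattern_mask)
                and 0 <= x < len(pattern_mask[y])
                and pattern_mask[y][x])

    def monitor_trace(pen_samples, pattern_mask, show, speak):
        """Echo each pen sample on the display; speak a correction when it strays."""
        for x, y in pen_samples:
            show(x, y)                     # draw the trace point immediately
            if not is_inside_pattern(pattern_mask, x, y):
                speak("The pen is off the stroke; please move back onto the pattern.")

    # Toy usage with stand-in display and speech functions.
    mask = [[False, True, True, False],
            [False, True, True, False]]
    monitor_trace([(1, 0), (3, 1)], mask,
                  show=lambda x, y: None,
                  speak=print)   # prints one correction for the off-stroke sample (3, 1)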

Full Paper

Sound Example #1   Sound Example #2   Sound Example #3
Each set contains the fourteen instruction messages as synthesized speech whose prosodic parameters were manually adjusted to express a good mood, a bad mood, and a neutral mood, respectively (good mood [Example #1], bad mood [Example #2], neutral mood [Example #3]).
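
Since the paper states only that the prosodic parameters were determined manually for each mood, the following Python sketch is a hypothetical illustration of mood-dependent prosody control; the parameter names, numeric values, and the `tts` synthesizer interface are all assumptions, not the authors' settings.

    # Hedged illustration of mood-dependent prosody control; these values and
    # the synthesizer interface are invented for the example.

    MOOD_PROSODY = {
        "good":    {"f0_scale": 1.15, "rate_scale": 1.05, "pause_ms": 150},
        "neutral": {"f0_scale": 1.00, "rate_scale": 1.00, "pause_ms": 200},
        "bad":     {"f0_scale": 0.90, "rate_scale": 0.90, "pause_ms": 350},
    }

    def synthesize_message(text, mood, tts):
        """Render one instruction message with the prosodic settings for the mood.

        `tts` is assumed to be any synthesizer callable accepting these keywords.
        """
        p = MOOD_PROSODY[mood]
        return tts(text,
                   f0_scale=p["f0_scale"],
                   rate_scale=p["rate_scale"],
                   pause_ms=p["pause_ms"])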

Bibliographic reference.  Shinozaki, Tsubasa / Abe, Masanobu (1998): "Development of CAI system employing synthesized speech responses", In ICSLP-1998, paper 0409.