4th International Conference on Spoken Language Processing
Philadelphia, PA, USA
Hyperarticulate speech to computers remains a poorly understood phenomenon, despite its association with elevated recognition errors. The present research analyzes the type and magnitude of linguistic adaptations that occur when people engage in error resolution with computers. A semi-automatic simulation method incorporating a novel error-generation capability was used to collect speech data immediately before and after system recognition errors, and under conditions varying in error base rates. Data on original and repeated spoken input, matched on speaker and lexical content, were then examined for the type and magnitude of linguistic adaptations. Results indicated that speech during error resolution was primarily longer in duration, showing both elongation of the speech segment and substantial relative increases in the number and duration of pauses. It also contained more clear-speech phonological features and fewer spoken disfluencies. Implications of these findings are discussed for the development of more user-centered and robust error handling in next-generation systems.
Bibliographic reference: Oviatt, Sharon / Levow, Gina-Anne / MacEachern, Margaret / Kuhn, Karen (1996): "Modeling hyperarticulate speech during human-computer error resolution", in ICSLP-1996, 801-804.