Interspeech'2005 - Eurospeech

Lisbon, Portugal
September 4-8, 2005

Linguistic and Acoustic Features Depending on Different Situations - The Experiments Considering Speech Recognition Rate

Shinya Yamada, Toshihiko Itoh, Kenji Araki

Hokkaido University, Japan

This paper presents the characteristic differences in linguistic and acoustic features observed in different spoken dialogue situations and with different dialogue partners: human-human vs. human-machine interaction. We compare the linguistic and acoustic features of users' speech to a spoken dialogue system and to a human operator in several goal-setting and destination database search tasks for a car navigation system. It has been pointed out that speech-based interaction has the potential to distract the driver's attention and degrade safety. On the other hand, it is not sufficiently clear whether different dialogue situations and different dialogue partners cause any differences in the linguistic or acoustic features of a user's utterances in a speech interface system. In addition, the influence of the speech recognition rate has not been studied adequately. We collected a set of spoken dialogues from 12 subject speakers for each experiment under several dialogue situations. For the car driving situation, we prepared a virtual driving simulation system. We also prepared two conditions in which the dialogue partner had a different speech recognition rate (100% and about 80%). We analyzed the characteristic differences in user utterances caused by the different dialogue situations and dialogue partners under these two conditions.
Bibliographic reference. Yamada, Shinya / Itoh, Toshihiko / Araki, Kenji (2005): "Linguistic and acoustic features depending on different situations - the experiments considering speech recognition rate", in INTERSPEECH-2005, 3393-3396.