INTERSPEECH 2014
15th Annual Conference of the International Speech Communication Association

Singapore
September 14-18, 2014

Opti-Speech: A Real-Time, 3D Visual Feedback System for Speech Training

William Katz, Thomas F. Campbell, Jun Wang, Eric Farrar, J. Coleman Eubanks, Arvind Balasubramanian, Balakrishnan Prabhakaran, Rob Rennaker

University of Texas at Dallas, USA

We describe an interactive 3D system that provides talkers with real-time information about their tongue and jaw movements during speech. Speech movement is tracked by a magnetometer system (Wave; NDI, Waterloo, Ontario, Canada). A customized interface lets users view their current tongue position (represented as an avatar consisting of flesh-point markers and a modeled surface) inside a synchronously moving, transparent head. Subjects receive augmented visual feedback when the tongue sensors achieve the correct place of articulation. Preliminary data from a group of adult talkers suggest the system can reliably provide real-time feedback for American English consonant place-of-articulation targets. Future studies, including tests with communication-disordered subjects, are described.
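The feedback trigger described above can be sketched as a simple geometric test: a tongue sensor "achieves" an articulation target when its tracked 3D position falls within a tolerance region around the target. The following Python sketch illustrates one plausible formulation using a spherical tolerance region; the function names, coordinates, and the 10 mm radius are illustrative assumptions, not details from the paper.

```python
import math

def distance_mm(p, q):
    """Euclidean distance between two 3D points (coordinates in mm)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def target_hit(sensor_pos, target_pos, radius_mm=10.0):
    """Return True when the sensor lies within the target sphere.

    radius_mm is an assumed tolerance; a real system would calibrate
    this per speaker and per articulation target.
    """
    return distance_mm(sensor_pos, target_pos) <= radius_mm

# Example: an assumed alveolar target at the origin of a head-centered frame.
target = (0.0, 0.0, 0.0)
print(target_hit((25.0, 0.0, 0.0), target))  # False: sensor 25 mm away
print(target_hit((4.0, 3.0, 0.0), target))   # True: 5 mm, within tolerance
```

In practice the system would evaluate this test on every tracking frame and switch the avatar's visual state (e.g., highlighting the target region) whenever the condition holds.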


Bibliographic reference.  Katz, William / Campbell, Thomas F. / Wang, Jun / Farrar, Eric / Eubanks, J. Coleman / Balasubramanian, Arvind / Prabhakaran, Balakrishnan / Rennaker, Rob (2014): "Opti-Speech: A Real-Time, 3D Visual Feedback System for Speech Training", In INTERSPEECH-2014, 1174-1178.