Interspeech'2005 - Eurospeech

Lisbon, Portugal
September 4-8, 2005

Myoelectric Signals for Multimodal Speech Recognition

Raghunandan S. Kumaran, Karthik Narayanan, John N. Gowdy

Clemson University, USA

A Coupled Hidden Markov Model (CHMM) is proposed in this paper to perform multimodal speech recognition using myoelectric signals (MES) from the muscles of vocal articulation. MES are immune to acoustic noise, and words that are acoustically similar manifest distinctly in the MES; hence, they effectively complement the acoustic data in a multimodal speech recognition system. Research in audio-visual speech recognition has shown that CHMMs model the asynchrony between different data streams effectively, so we propose a CHMM for multimodal speech recognition with audio and MES as the two data streams. Our experiments indicate that the multimodal CHMM system significantly outperforms the audio-only system at a range of SNRs. We also compare several feature sets for the MES and find that wavelet features give the best results.
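To make the fusion scheme concrete, the following is a minimal sketch (in Python/NumPy, not the authors' implementation) of a two-stream coupled-HMM forward pass: each stream keeps its own states and emission likelihoods, while the transition of each stream is conditioned on the previous states of both streams, which is what lets audio and MES evolve asynchronously. The state-space sizes, random parameters, and stream-weight exponents are illustrative assumptions only.

import numpy as np

# Illustrative two-stream coupled-HMM forward pass (assumed setup, not the
# authors' code). Each stream's next state depends on the previous states of
# BOTH streams, so audio and MES can be asynchronous within a word model.

Na, Nm = 3, 3                          # states per stream (illustrative sizes)
rng = np.random.default_rng(0)

# Coupled transitions: P(a_t | a_{t-1}, m_{t-1}) and P(m_t | a_{t-1}, m_{t-1})
A_audio = rng.dirichlet(np.ones(Na), size=(Na, Nm))        # shape (Na, Nm, Na)
A_mes   = rng.dirichlet(np.ones(Nm), size=(Na, Nm))        # shape (Na, Nm, Nm)
pi      = rng.dirichlet(np.ones(Na * Nm)).reshape(Na, Nm)  # joint initial distribution

def forward_loglik(b_audio, b_mes, weights=(1.0, 1.0)):
    """Scaled forward algorithm over the joint (audio, MES) state space.
    b_audio: (T, Na) per-frame audio emission likelihoods
    b_mes:   (T, Nm) per-frame MES emission likelihoods
    weights: stream exponents applied at fusion time (assumed equal here)."""
    wa, wm = weights
    alpha = pi * (b_audio[0][:, None] ** wa) * (b_mes[0][None, :] ** wm)
    loglik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for t in range(1, b_audio.shape[0]):
        # Predict the joint state, then weigh in both streams' emissions.
        pred = np.einsum('ij,ijk,ijl->kl', alpha, A_audio, A_mes)
        alpha = pred * (b_audio[t][:, None] ** wa) * (b_mes[t][None, :] ** wm)
        scale = alpha.sum()
        loglik += np.log(scale)
        alpha /= scale
    return loglik

# Toy usage: random per-frame likelihoods stand in for GMM emission scores.
T = 20
print(forward_loglik(rng.random((T, Na)), rng.random((T, Nm))))

In a real system the per-frame likelihoods would come from per-stream models over acoustic features and MES features (e.g. wavelet coefficients), and because decoding runs over the product state space, the per-stream state counts are kept small to remain tractable.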


Bibliographic reference. Kumaran, Raghunandan S. / Narayanan, Karthik / Gowdy, John N. (2005): "Myoelectric signals for multimodal speech recognition", in INTERSPEECH-2005, 1189-1192.