EUROSPEECH 2003 - INTERSPEECH 2003
8th European Conference on Speech Communication and Technology

Geneva, Switzerland
September 1-4, 2003


Integrating Multilingual Articulatory Features into Speech Recognition

Sebastian Stüker (1), Florian Metze (1), Tanja Schultz (2), Alex Waibel (2)

(1) Universität Karlsruhe, Germany
(2) Carnegie Mellon University, USA

The use of articulatory features, such as place and manner of articulation, has been shown to reduce the word error rate of speech recognition systems under a variety of conditions and settings. For example, recognition systems based on such features are more robust to noise and reverberation. In earlier work we showed that articulatory features can compensate for inter-language variability and can be recognized across languages. In this paper we show that using cross-lingual and multilingual feature detectors to support an HMM-based speech recognition system significantly reduces the word error rate. By selecting and weighting the features discriminatively, we achieve an error rate reduction in the same range as that obtained with language-specific feature detectors. By combining feature detectors from many languages and training their weights discriminatively, we even outperform the case in which only monolingual detectors are used.
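The abstract describes supporting an HMM-based recognizer with weighted articulatory feature detectors. A minimal sketch of one common way to realize such a combination is shown below: a log-linear (stream-based) combination of the HMM acoustic score with per-feature detector scores, each scaled by a discriminatively trained stream weight. The function names, the example feature set, and the log-linear form itself are illustrative assumptions, not the paper's exact formulation.

import numpy as np

def combined_log_score(hmm_log_likelihood, feature_log_likelihoods,
                       feature_weights, hmm_weight=1.0):
    # Log-linear combination of the main HMM acoustic stream with
    # articulatory feature detector streams (assumed scheme):
    #   score = w_hmm * log p(x | state)
    #           + sum_f w_f * log p(x | feature detector f)
    feature_log_likelihoods = np.asarray(feature_log_likelihoods, dtype=float)
    feature_weights = np.asarray(feature_weights, dtype=float)
    return hmm_weight * hmm_log_likelihood + float(
        np.dot(feature_weights, feature_log_likelihoods))

# Hypothetical example: one frame scored against one HMM state, supported by
# three multilingual feature detectors (e.g. VOICED, FRICATIVE, NASAL),
# with small discriminatively trained stream weights.
score = combined_log_score(
    hmm_log_likelihood=-42.7,
    feature_log_likelihoods=[-3.1, -5.8, -0.9],
    feature_weights=[0.20, 0.05, 0.10],
)
print(score)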


Bibliographic reference: Stüker, Sebastian / Metze, Florian / Schultz, Tanja / Waibel, Alex (2003): "Integrating multilingual articulatory features into speech recognition", in EUROSPEECH-2003, 1033-1036.