Sixth International Conference on Spoken Language Processing
(ICSLP 2000)

Beijing, China
October 16-20, 2000

Automatic Speech Recognition Using Dynamic Bayesian Networks with Both Acoustic and Articulatory Variables

Todd A. Stephenson (1,2), Hervé Bourlard (1,2), Samy Bengio (1), Andrew C. Morris (1)

(1) Dalle Molle Institute for Perceptual Artificial Intelligence (IDIAP), Martigny, Switzerland
(2) Swiss Federal Institute of Technology at Lausanne (EPFL), Lausanne, Switzerland

Current technology for automatic speech recognition (ASR) uses hidden Markov models (HMMs) that recognize speech from the acoustic signal alone. However, no use is made of the causes of the acoustic signal: the articulators. We present here a dynamic Bayesian network (DBN) model that uses an additional variable to represent the state of the articulators. A particular strength of the system is that, while it uses measured articulatory data during training, it does not need these values during recognition. As Bayesian networks are not often used in the speech community, we give an introduction to them. After describing how they can be used in ASR, we present a system for isolated word recognition using articulatory information. Recognition results are given, showing that a system with both acoustics and inferred articulatory positions performs better than a system with only acoustics.
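To illustrate the idea of an articulatory variable that is observed in training but hidden (and marginalized out) in recognition, here is a minimal sketch in Python/NumPy. It is not the authors' actual system: all variables (state Q, articulator A, acoustic symbol O) are assumed discrete, training data are assumed fully labeled, and the toy "words" and counts are invented for illustration.

```python
import numpy as np

def train_word_model(seqs, nQ, nA, nO):
    """ML estimation from fully observed (state, articulator, acoustic)
    triples, with Laplace smoothing. Mirrors training with measured
    articulatory data: A is observed here."""
    pi = np.ones(nQ)             # initial-state counts
    T  = np.ones((nQ, nQ))       # state-transition counts
    Pa = np.ones((nQ, nA))       # counts for P(A | Q)
    Po = np.ones((nQ, nA, nO))   # counts for P(O | Q, A)
    for seq in seqs:
        pi[seq[0][0]] += 1
        for (q0, _, _), (q1, _, _) in zip(seq, seq[1:]):
            T[q0, q1] += 1
        for q, a, o in seq:
            Pa[q, a] += 1
            Po[q, a, o] += 1
    norm = lambda M: M / M.sum(axis=-1, keepdims=True)
    return norm(pi), norm(T), norm(Pa), norm(Po)

def log_likelihood(obs, model):
    """Scaled forward algorithm over Q; the articulator A is hidden at
    recognition time and summed out of the emission probability."""
    pi, T, Pa, Po = model
    # Effective emission: P(o | q) = sum_a P(a | q) * P(o | q, a)
    emit = np.einsum('qa,qao->qo', Pa, Po)
    alpha = pi * emit[:, obs[0]]
    s = alpha.sum(); ll = np.log(s); alpha /= s
    for o in obs[1:]:
        alpha = (alpha @ T) * emit[:, o]
        s = alpha.sum(); ll += np.log(s); alpha /= s
    return ll

# Toy data for two hypothetical "words" with distinct acoustic symbols.
word_a_data = [[(0, 0, 0), (0, 1, 0), (1, 0, 1), (1, 1, 1)]] * 3
word_b_data = [[(0, 0, 2), (0, 1, 2), (1, 0, 2), (1, 1, 2)]] * 3
model_a = train_word_model(word_a_data, nQ=2, nA=2, nO=3)
model_b = train_word_model(word_b_data, nQ=2, nA=2, nO=3)
```

Isolated word recognition then amounts to picking the word model with the highest log-likelihood for the observed acoustic sequence; no articulatory measurements are consulted at that point.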



Bibliographic reference.  Stephenson, Todd A. / Bourlard, Hervé / Bengio, Samy / Morris, Andrew C. (2000): "Automatic speech recognition using dynamic Bayesian networks with both acoustic and articulatory variables", in ICSLP-2000, vol. 2, 951-954.