Interspeech'2005 - Eurospeech

Lisbon, Portugal
September 4-8, 2005

Articulatory Synthesis Using Corpus-Based Estimation of Line Spectrum Pairs

Olov Engwall

KTH, Stockholm, Sweden

An attempt to define a new articulatory synthesis method, in which the speech signal is generated through a statistical estimation of its relation to articulatory parameters, is presented. A corpus containing acoustic material and simultaneous recordings of tongue and facial movements was used to train and test the articulatory synthesis of VCV words and short sentences. Tongue and facial motion data, captured with electromagnetic articulography and three-dimensional optical motion tracking, respectively, define the articulatory parameters of a talking head. These articulatory parameters are then used as estimators of the speech signal, represented by line spectrum pairs. The statistical link between the articulatory parameters and the speech signal was established using either linear estimation or artificial neural networks. The results show that linear estimation was sufficient to synthesize only identifiable vowels, not consonants, whereas the neural networks gave a perceptually better synthesis.
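The articulatory-to-acoustic mapping described in the abstract can be illustrated with a small sketch. The Python fragment below is a hypothetical reconstruction, not the author's implementation: the corpus dimensions, variable names, and the use of NumPy and scikit-learn are assumptions. It shows the two estimators the abstract compares, a least-squares linear mapping from articulatory parameters to line spectrum pair (LSP) coefficients, and a small feed-forward neural network trained on the same regression task.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)

    # Stand-in training corpus: per-frame articulatory parameters (e.g.
    # tongue and facial motion components) paired with LSP coefficients
    # extracted from the simultaneously recorded audio. Dimensions are
    # illustrative assumptions.
    n_frames, n_artic, n_lsp = 5000, 12, 16
    A = rng.standard_normal((n_frames, n_artic))   # articulatory parameters
    L = rng.standard_normal((n_frames, n_lsp))     # target LSP coefficients

    # Linear estimation: append a bias column and solve for W in the
    # least-squares sense, so that L is approximated by [A, 1] @ W.
    A1 = np.hstack([A, np.ones((n_frames, 1))])
    W, *_ = np.linalg.lstsq(A1, L, rcond=None)

    # Predict LSPs for new articulatory frames with the linear model.
    A_test = rng.standard_normal((10, n_artic))
    L_linear = np.hstack([A_test, np.ones((10, 1))]) @ W

    # Neural-network alternative: a small multilayer perceptron mapping
    # the same articulatory inputs to the LSP vector.
    mlp = MLPRegressor(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
    mlp.fit(A, L)
    L_mlp = mlp.predict(A_test)

    print(L_linear.shape, L_mlp.shape)  # (10, 16) (10, 16)

In either case, the predicted LSPs would subsequently be converted to linear prediction coefficients and used to filter an excitation signal to produce speech; that synthesis step is omitted from the sketch.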


Bibliographic reference. Engwall, Olov (2005): "Articulatory synthesis using corpus-based estimation of line spectrum pairs", in INTERSPEECH 2005, pp. 1909-1912.