Auditory-Visual Speech Processing (AVSP'99)

August 7-10, 1999
Santa Cruz, CA, USA

Synthetic Visual Speech Driven from Auditory Speech

Eva Agelfors, Jonas Beskow, Björn Granström, Magnus Lundeberg, Giampiero Salvi, Karl-Eric Spens, Tobias Öhman

Department of Speech, Music and Hearing, KTH, Stockholm, Sweden

We have developed two different methods for using auditory telephone speech to drive the movements of a synthetic face. In the first method, Hidden Markov Models (HMMs) were trained on a phonetically transcribed telephone speech database. The output of the HMMs was then fed into a rule-based visual speech synthesizer as a string of phonemes together with time labels. In the second method, Artificial Neural Networks (ANNs) were trained on the same database to map acoustic parameters directly to facial control parameters. The target parameter trajectories for training were generated by using the phoneme strings from the database as input to the visual speech synthesizer. The two methods were evaluated through audio-visual intelligibility tests with ten hearing-impaired persons, and compared to "ideal" articulation (where no recognition was involved), to a natural face, and to the audio alone. It was found that the HMM method performed considerably better than the audio-alone condition (54% versus 34% keywords correct, respectively), but not as well as the artificial face with "ideal" articulation (64%). The intelligibility for the ANN method was 34% keywords correct.
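
The ANN approach amounts to a frame-by-frame regression from acoustic parameters to facial control parameters. The sketch below, in Python/NumPy, illustrates that idea with a single hidden layer trained on mean squared error; the feature dimension, the set of facial parameters, the network size, and the training details are illustrative assumptions rather than the configuration used in the paper.

import numpy as np

# Hypothetical dimensions: 13 acoustic features per frame (e.g. cepstral
# coefficients) mapped to 6 facial control parameters (e.g. jaw opening,
# lip rounding). The actual features and parameter set are assumptions.
N_ACOUSTIC = 13
N_FACIAL = 6
N_HIDDEN = 64

rng = np.random.default_rng(0)

# One hidden layer with tanh units; the output layer is linear, since the
# facial control parameters are continuous trajectories, not class labels.
W1 = rng.normal(0, 0.1, (N_ACOUSTIC, N_HIDDEN))
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(0, 0.1, (N_HIDDEN, N_FACIAL))
b2 = np.zeros(N_FACIAL)

def forward(x):
    """Map a batch of acoustic frames (T, N_ACOUSTIC) to facial parameters."""
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2, h

def train_step(x, y, lr=1e-3):
    """One gradient-descent step on the mean squared error between predicted
    and target facial parameter trajectories."""
    global W1, b1, W2, b2
    y_hat, h = forward(x)
    err = y_hat - y                      # (T, N_FACIAL)
    # Backpropagate the squared-error loss through both layers.
    dW2 = h.T @ err / len(x)
    db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)     # tanh derivative
    dW1 = x.T @ dh / len(x)
    db1 = dh.mean(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1
    return float((err ** 2).mean())

# Toy usage with random data standing in for the telephone-speech database.
frames = rng.normal(size=(500, N_ACOUSTIC))
targets = rng.normal(size=(500, N_FACIAL))
for epoch in range(10):
    loss = train_step(frames, targets)

In a real setup, the target trajectories would come from running the database's phonetic transcriptions through the rule-based visual speech synthesizer, as described above, rather than from random data.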



Bibliographic reference.  Agelfors, Eva / Beskow, Jonas / Granström, Björn / Lundeberg, Magnus / Salvi, Giampiero / Spens, Karl-Eric / Öhman, Tobias (1999): "Synthetic visual speech driven from auditory speech", In AVSP-1999, paper #21.