ESCA Workshop on Audio-Visual Speech Processing (AVSP'97)
September 26-27, 1997
It is envisioned that autonomous software agents capable of communicating through speech and gesture will soon appear on everybody's computer screen. This paper describes an architecture for designing and animating characters capable of lip-synchronised synthetic speech as well as body gestures, for use in, for example, spoken dialogue systems. A general scheme for computationally efficient parametric deformation of facial surfaces is presented, along with techniques for generating bimodal speech, facial expressions and body gestures in a spoken dialogue system. Results indicating that an animated cartoon-like character can contribute significantly to speech intelligibility are also reported.
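To give a concrete sense of what "parametric deformation of facial surfaces" can mean, the following is a minimal sketch of one common linear scheme, in which each articulation parameter displaces mesh vertices along a fixed weighted displacement field. This is an illustrative assumption in the spirit of Parke-style parametric face models, not the paper's actual formulation; all names and values here are hypothetical.

```python
import numpy as np

def deform(rest_vertices, displacement_fields, params):
    """Deformed mesh = rest shape + sum over k of params[k] * field[k].

    rest_vertices:       (N, 3) array of vertex positions at rest.
    displacement_fields: list of (N, 3) arrays, one per parameter.
    params:              list of scalar parameter values (e.g. jaw opening).
    """
    v = rest_vertices.copy()
    for field, p in zip(displacement_fields, params):
        v += p * field
    return v

# Tiny example: three vertices and one hypothetical "jaw open" parameter
# that pulls the lower vertices downward.
rest = np.array([[0.0, 1.0, 0.0],
                 [1.0, 0.5, 0.0],
                 [0.5, 0.0, 0.0]])
jaw_open = np.array([[0.0,  0.0, 0.0],
                     [0.0, -0.2, 0.0],
                     [0.0, -1.0, 0.0]])
deformed = deform(rest, [jaw_open], [0.5])
print(deformed[2])  # lowest vertex moved down by 0.5 units
```

Because the deformation is a weighted sum of precomputed fields, evaluating it costs only one multiply-add per vertex per active parameter, which is one way such schemes achieve computational efficiency for real-time animation.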
Bibliographic reference. Beskow, Jonas (1997): "Animation of talking agents", In AVSP-1997, 149-152.