Auditory-Visual Speech Processing 2007 (AVSP2007)
Kasteel Groenendaal, Hilvarenbeek, The Netherlands
Talking heads and virtual characters, able to communicate complex information with human-like expressiveness and naturalness, should be able to display emotional facial expressions. In recent years, several approaches, both rule-based and statistical, have achieved important results in modelling emotional facial expressions for synthetic talking heads. However, most rule-based systems suffer from static generation, since the set of rules and their combinations is necessarily limited. Statistical approaches, on the other hand, have so far been applied mostly to synthesising speech and lip movements rather than full emotional expressions.
This paper presents a model of the dynamics of emotional facial expressions based on a hybrid statistical/machine-learning approach. The approach combines Hidden Markov Models (HMMs) and Recurrent Neural Networks (RNNs), aiming to benefit from the advantages of both paradigms while overcoming their individual limitations.
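The abstract does not detail how the two models are coupled, but a common pattern in such hybrids is to let an HMM provide a discrete state sequence (e.g. expression phases) and let an RNN turn that sequence into smooth continuous parameter trajectories. The sketch below is an illustration of that generic pattern, not the authors' actual system: the HMM parameters, the Elman-style RNN, and the "facial parameter" outputs are all invented for demonstration.

```python
import numpy as np

def viterbi(pi, A, B, obs):
    """Most likely hidden-state sequence of a discrete-emission HMM.

    pi: (n_states,) initial probabilities
    A:  (n_states, n_states) transition matrix
    B:  (n_states, n_symbols) emission matrix
    obs: list of observed symbol indices
    """
    n_states, T = len(pi), len(obs)
    delta = np.zeros((T, n_states))            # best path probability per state
    psi = np.zeros((T, n_states), dtype=int)   # backpointers
    delta[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        for j in range(n_states):
            scores = delta[t - 1] * A[:, j]
            psi[t, j] = int(np.argmax(scores))
            delta[t, j] = scores[psi[t, j]] * B[j, obs[t]]
    path = [int(np.argmax(delta[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t, path[-1]]))
    return path[::-1]

def elman_decode(states, n_states, W_xh, W_hh, W_hy):
    """Map a one-hot HMM state sequence to continuous parameter trajectories
    with a minimal Elman recurrent network (hypothetical facial parameters)."""
    h = np.zeros(W_hh.shape[0])
    outputs = []
    for s in states:
        x = np.eye(n_states)[s]
        h = np.tanh(W_xh @ x + W_hh @ h)       # recurrent hidden state
        outputs.append(W_hy @ h)               # e.g. FAP-like control values
    return np.array(outputs)

# Toy 2-state HMM over 2 observation symbols.
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
path = viterbi(pi, A, B, [0, 0, 1])            # → [0, 0, 1]

# Randomly initialised RNN weights, purely for shape illustration.
rng = np.random.default_rng(0)
W_xh, W_hh, W_hy = rng.normal(size=(4, 2)), rng.normal(size=(4, 4)), rng.normal(size=(3, 4))
traj = elman_decode(path, 2, W_xh, W_hh, W_hy)  # shape (3 frames, 3 parameters)
```

In a pipeline of this kind, the HMM captures the coarse temporal segmentation while the RNN supplies smooth, context-dependent dynamics, which is the kind of complementarity the abstract alludes to.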
Bibliographic reference. Mana, Nadia / Pianesi, Fabio (2007): "Modelling of emotional facial expressions during speech in synthetic talking heads using a hybrid approach", In AVSP-2007, paper P29.