ISCA Archive Interspeech 2015

Face reading from speech — predicting facial action units from audio cues

Fabien Ringeval, Erik Marchi, Marc Mehu, Klaus Scherer, Björn Schuller

The automatic recognition of facial behaviours is usually achieved through the detection of particular FACS Action Units (AUs), which then makes it possible to analyse the affective behaviours expressed in the face. Although advanced techniques have been proposed to extract relevant facial descriptors, the processing of real-life data, i.e., recordings made in unconstrained environments, makes the automatic detection of FACS AUs far more challenging than for constrained recordings such as posed faces, and even impossible when the corresponding parts of the face are masked or poorly illuminated. In this paper we present the first attempt at using acoustic cues for the automatic detection of FACS AUs, as an alternative source of information about the face when visual data are not available. Results show that features extracted from the voice can effectively predict different types of FACS AUs, and that the best performance is obtained for the prediction of the apex, compared with the prediction of onset, offset, and occurrence.
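The pipeline the abstract describes, frame-level acoustic features mapped to AU states, can be sketched roughly as follows. This is an illustrative sketch only: the random features stand in for real voice descriptors (e.g., MFCC statistics), the binary occurrence labels and the SVM classifier are placeholder assumptions, not the authors' actual feature set or model.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Placeholder acoustic descriptors; in the actual study these would be
# extracted from the speech signal (the paper predicts AU onset, apex,
# offset, and occurrence from such cues).
n_samples, n_features = 400, 40
X = rng.normal(size=(n_samples, n_features))
# Placeholder binary labels for AU occurrence on each speech frame.
y = rng.integers(0, 2, size=n_samples)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Train a simple classifier to predict AU occurrence from audio features.
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
print(f"held-out AU occurrence accuracy: {acc:.2f}")
```

With random labels the accuracy hovers near chance; the point of the sketch is the shape of the task (acoustic feature matrix in, per-frame AU state out), not the numbers.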

doi: 10.21437/Interspeech.2015-435

Cite as: Ringeval, F., Marchi, E., Mehu, M., Scherer, K., Schuller, B. (2015) Face reading from speech — predicting facial action units from audio cues. Proc. Interspeech 2015, 1977-1981, doi: 10.21437/Interspeech.2015-435

@inproceedings{ringeval15_interspeech,
  author={Fabien Ringeval and Erik Marchi and Marc Mehu and Klaus Scherer and Björn Schuller},
  title={{Face reading from speech — predicting facial action units from audio cues}},
  booktitle={Proc. Interspeech 2015},
  year={2015},
  pages={1977--1981},
  doi={10.21437/Interspeech.2015-435}
}