FAAVSP - The 1st Joint Conference on Facial Analysis, Animation, and
Auditory-Visual Speech Processing

Vienna, Austria
September 11-13, 2015

Face-Speech Sensor Fusion for Non-Invasive Stress Detection

Vasudev Bethamcherla (1), Will Paul (1), Cecilia Ovesdotter Alm (2), Reynold Bailey (1), Joe Geigel (1), Linwei Wang (1)

(1) Golisano College of Computing & Information Science; (2) College of Liberal Arts
Rochester Institute of Technology, Rochester, NY, USA

We describe a human-centered multimodal framework for automatically measuring cognitive changes. As a proof-of-concept, we test our approach on the use case of stress detection. We contribute a method that combines non-intrusive behavioral analysis of facial expressions with speech data, enabling detection without the use of wearable devices. We compare these modalities’ effectiveness against galvanic skin response (GSR) collected simultaneously from the subject group using a wristband sensor. Data were collected with a modified version of the Stroop test, in which subjects perform the test both with and without the inclusion of stressors. Our study attempts to distinguish stressed from unstressed behaviors under constant cognitive load. The best improvement in accuracy over the majority-class baseline was 38%, only 5% behind the best GSR result on the same data. This suggests that reliable markers of cognitive changes can be captured by behavioral data, which are more suitable for group settings than wearable devices, and that combining modalities is beneficial.
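The abstract reports improvement over a majority-class baseline, i.e., the accuracy of always predicting the most frequent label. As a minimal sketch of that evaluation metric (the labels and the fused-model accuracy below are hypothetical, not taken from the paper):

```python
from collections import Counter

def majority_baseline_accuracy(labels):
    """Accuracy of a classifier that always predicts the most frequent class."""
    counts = Counter(labels)
    return counts.most_common(1)[0][1] / len(labels)

# Hypothetical example: 6 stressed vs. 4 unstressed trials.
labels = ["stressed"] * 6 + ["unstressed"] * 4
baseline = majority_baseline_accuracy(labels)  # 0.6

# Hypothetical accuracy of a fused face + speech classifier.
fused_accuracy = 0.83
improvement = fused_accuracy - baseline  # absolute gain over the baseline
```

Reporting gains this way guards against inflated results on imbalanced data, since a trivial classifier already achieves the baseline accuracy.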


Bibliographic reference.  Bethamcherla, Vasudev / Paul, Will / Alm, Cecilia Ovesdotter / Bailey, Reynold / Geigel, Joe / Wang, Linwei (2015): "Face-speech sensor fusion for non-invasive stress detection", In FAAVSP-2015, 196-201.