FAAVSP - The 1st Joint Conference on
Facial Analysis, Animation, and Auditory-Visual Speech Processing
We describe a human-centered multimodal framework for automatically measuring cognitive changes. As a proof-of-concept, we test our approach on the use case of stress detection. We contribute a method that combines non-intrusive behavioral analysis of facial expressions with speech data, enabling detection without the use of wearable devices. We compare these modalities' effectiveness against galvanic skin response (GSR) collected simultaneously from the subject group using a wristband sensor. Data was collected with a modified version of the Stroop test, in which subjects perform the test both with and without the inclusion of stressors. Our study attempts to distinguish stressed from unstressed behaviors under constant cognitive load. The best improvement in accuracy over the majority-class baseline was 38%, only 5% behind the best GSR result on the same data. This suggests that reliable markers of cognitive changes can be captured by behavioral data, which are more suitable for group settings than wearable devices, and that combining modalities is beneficial.
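The reported gains are measured relative to a majority-class baseline. As a minimal illustration of that metric (using hypothetical labels and predictions, not the study's data), the improvement can be computed as:

```python
from collections import Counter

def majority_baseline_accuracy(labels):
    """Accuracy of always predicting the most frequent class."""
    most_common_count = Counter(labels).most_common(1)[0][1]
    return most_common_count / len(labels)

def improvement_over_baseline(labels, predictions):
    """Absolute accuracy gain of a classifier over the majority-class baseline."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    accuracy = correct / len(labels)
    return accuracy - majority_baseline_accuracy(labels)

# Hypothetical balanced stressed/unstressed labels (illustrative only)
labels = ["stressed", "calm", "stressed", "calm", "stressed", "calm"]
preds  = ["stressed", "calm", "stressed", "calm", "stressed", "stressed"]
print(round(improvement_over_baseline(labels, preds), 3))  # prints 0.333
```

Here the baseline accuracy is 0.5 (three of six samples in the majority class) and the classifier scores 5/6, so the improvement is about 33 percentage points, analogous in form to the 38% figure the abstract reports.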
Bibliographic reference. Bethamcherl, Vasudev / Paul, Will / Alm, Cecilia Ovesdotter / Bailey, Reynold / Geigel, Joe / Wang, Linwei (2015): "Face-speech sensor fusion for non-invasive stress detection", In FAAVSP-2015, 196-201.