In this project, we aimed to develop techniques for multimodal emotion detection using three modalities: brain signals measured via fNIRS, facial video, and scalp EEG signals. EEG and fNIRS provided us with an “internal” look at the emotion generation processes, while the video sequences gave us an “external” look at the same phenomenon.
We considered the fusion of fNIRS with video and of EEG with fNIRS. Fusion of all three modalities was not attempted because facial muscle movements, which are necessary for emotion detection from video sequences, introduce extensive noise into the EEG signals.
In addition to the modalities above, peripheral signals, namely respiration, cardiac rate, and galvanic skin resistance, were also measured from the subjects during the “fNIRS + EEG” recordings. These signals provided us with extra information about the subjects’ emotional state.
The critical point for the success of this project was the ability to build a “good” database. Good data acquisition means synchronized data across modalities and requires the definition of specific experimental protocols for emotion elicitation. We therefore devoted much of our time to data acquisition throughout the workshop, which yielded a database large enough for the first analyses. The results presented in this report should be considered preliminary; however, they are promising enough to justify extending the scope of the research.
Index Terms: Emotion detection, EEG, video, near-infrared spectroscopy
Cite as: Savran, A., Ciftci, K., Chanel, G., Cruz Mota, J., Viet, L.H., Sankur, B., Akarun, L., Caplier, A., Rombaut, M. (2006) Emotion detection in the loop from brain signals and facial images. Proc. Summer Workshop on Multimodal Interfaces (eINTERFACE 2006), 69-80
@inproceedings{savran06_einterface,
  author    = {Arman Savran and Koray Ciftci and Guillaume Chanel and Javier {Cruz Mota} and Luong Hong Viet and Bülent Sankur and Lale Akarun and Alice Caplier and Michèle Rombaut},
  title     = {{Emotion detection in the loop from brain signals and facial images}},
  year      = {2006},
  booktitle = {Proc. Summer Workshop on Multimodal Interfaces (eINTERFACE 2006)},
  pages     = {69--80}
}