Combining CNN and BLSTM to Extract Textual and Acoustic Features for Recognizing Stances in Mandarin Ideological Debate Competition

Linchuan Li, Zhiyong Wu, Mingxing Xu, Helen Meng, Lianhong Cai


Recognizing stances in ideological debates is a relatively new and challenging problem in opinion mining. While previous work mainly focused on the text modality, in this paper we try to recognize stances from both text and acoustic modalities, where deriving more representative textual and acoustic features remains an open research problem. Inspired by the promising performance of neural network models in natural language understanding and speech processing, we propose a unified framework named C-BLSTM that combines a convolutional neural network (CNN) and a bidirectional long short-term memory (BLSTM) recurrent neural network (RNN) for feature extraction. In C-BLSTM, the CNN is utilized to extract higher-level local features of text (n-grams) and speech (emphasis, intonation), while the BLSTM is used to extract bottleneck features for context-sensitive feature compression and target-related feature representation. A maximum entropy model is then used to recognize stances from the bimodal textual-acoustic bottleneck features. Experiments on four debate datasets show that C-BLSTM outperforms all competitive baseline methods; in particular, acoustic intonation and emphasis features further improve F1-measure by 6% compared to textual features alone.
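The abstract describes a pipeline in which a CNN extracts local features, a BLSTM compresses them into context-sensitive bottleneck features, and a maximum entropy classifier predicts the stance. The sketch below illustrates that general structure in PyTorch; all layer sizes, kernel widths, and the mean-pooling step are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class CBLSTM(nn.Module):
    """Sketch of a C-BLSTM-style feature extractor.

    Dimensions and pooling are illustrative assumptions, not the
    paper's actual configuration.
    """
    def __init__(self, emb_dim=100, conv_channels=64,
                 lstm_hidden=128, bottleneck_dim=32):
        super().__init__()
        # CNN captures higher-level local patterns
        # (e.g. n-grams in text; emphasis/intonation cues in speech)
        self.conv = nn.Conv1d(emb_dim, conv_channels,
                              kernel_size=3, padding=1)
        # BLSTM models context in both directions
        self.blstm = nn.LSTM(conv_channels, lstm_hidden,
                             batch_first=True, bidirectional=True)
        # Narrow layer yields compressed "bottleneck" features
        self.bottleneck = nn.Linear(2 * lstm_hidden, bottleneck_dim)

    def forward(self, x):
        # x: (batch, seq_len, emb_dim)
        h = torch.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)
        h, _ = self.blstm(h)          # (batch, seq_len, 2 * lstm_hidden)
        h = h.mean(dim=1)             # simple pooling over time (assumed)
        return self.bottleneck(h)     # (batch, bottleneck_dim)

feats = CBLSTM()(torch.randn(4, 20, 100))
print(feats.shape)  # torch.Size([4, 32])
```

In the paper's setup, such bottleneck features from the text and acoustic streams would then be concatenated and passed to a maximum entropy (multinomial logistic regression) classifier for the final stance decision.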


DOI: 10.21437/Interspeech.2016-324

Cite as

Li, L., Wu, Z., Xu, M., Meng, H., Cai, L. (2016) Combining CNN and BLSTM to Extract Textual and Acoustic Features for Recognizing Stances in Mandarin Ideological Debate Competition. Proc. Interspeech 2016, 1392-1396.

Bibtex
@inproceedings{Li+2016,
  author={Linchuan Li and Zhiyong Wu and Mingxing Xu and Helen Meng and Lianhong Cai},
  title={Combining CNN and BLSTM to Extract Textual and Acoustic Features for Recognizing Stances in Mandarin Ideological Debate Competition},
  year=2016,
  booktitle={Interspeech 2016},
  doi={10.21437/Interspeech.2016-324},
  url={http://dx.doi.org/10.21437/Interspeech.2016-324},
  pages={1392--1396}
}