INTERSPEECH 2015
16th Annual Conference of the International Speech Communication Association

Dresden, Germany
September 6-10, 2015

BLSTM Neural Networks for Speech Driven Head Motion Synthesis

Chuang Ding, Pengcheng Zhu, Lei Xie

Northwestern Polytechnical University, China

Head motion naturally occurs in synchrony with speech and carries important intention, attitude and emotion cues. This paper aims to synthesize head motion from natural speech for talking-avatar applications. Specifically, we study the feasibility of learning speech-to-head-motion regression models with two popular types of neural networks: feed-forward and bidirectional long short-term memory (BLSTM) networks. We find that BLSTM networks clearly outperform feed-forward ones on this task because of their capacity to learn long-range speech dynamics. More interestingly, we observe that stacking different network types, i.e., inserting a feed-forward layer between two BLSTM layers, achieves the best performance. Subjective evaluation shows that this hybrid network produces more plausible head motions from speech.
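The hybrid architecture described above, a feed-forward layer sandwiched between two BLSTM layers with a linear regression output, can be sketched as follows. This is a minimal illustrative sketch in PyTorch; the layer sizes, acoustic feature dimension, and head-motion parameterization are assumptions, not the authors' actual configuration.

```python
import torch
import torch.nn as nn

class HybridHeadMotionNet(nn.Module):
    """Sketch of a BLSTM -> feed-forward -> BLSTM regression network.

    Dimensions are illustrative assumptions: acoustic_dim acoustic
    features per frame in, motion_dim head-rotation parameters per
    frame out (e.g., three Euler angles).
    """
    def __init__(self, acoustic_dim=25, hidden_dim=128, motion_dim=3):
        super().__init__()
        # First BLSTM layer: captures long-range speech dynamics
        # in both temporal directions.
        self.blstm1 = nn.LSTM(acoustic_dim, hidden_dim,
                              bidirectional=True, batch_first=True)
        # Feed-forward layer inserted between the two BLSTM layers.
        self.ff = nn.Sequential(nn.Linear(2 * hidden_dim, hidden_dim),
                                nn.Tanh())
        # Second BLSTM layer.
        self.blstm2 = nn.LSTM(hidden_dim, hidden_dim,
                              bidirectional=True, batch_first=True)
        # Linear output: per-frame head-motion regression.
        self.out = nn.Linear(2 * hidden_dim, motion_dim)

    def forward(self, x):
        h, _ = self.blstm1(x)   # (batch, frames, 2 * hidden_dim)
        h = self.ff(h)          # (batch, frames, hidden_dim)
        h, _ = self.blstm2(h)   # (batch, frames, 2 * hidden_dim)
        return self.out(h)      # (batch, frames, motion_dim)

model = HybridHeadMotionNet()
speech = torch.randn(2, 100, 25)  # (batch, frames, acoustic features)
motion = model(speech)            # per-frame head-motion predictions
```

In a real system such a model would be trained frame-by-frame on parallel speech and motion-capture data; the sketch only shows the network topology the abstract names.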


Bibliographic reference. Ding, Chuang / Zhu, Pengcheng / Xie, Lei (2015): "BLSTM neural networks for speech driven head motion synthesis", in INTERSPEECH-2015, 3345-3349.