ISCA Archive Interspeech 2013

Template-warping based speech driven head motion synthesis

David Adam Braude, Hiroshi Shimodaira, Atef Ben-Youssef

We propose a method for synthesising head motion from speech using a combination of an Input-Output Markov model (IOMM) and Gaussian mixture models trained in a supervised manner. A key difference of this approach compared to others is that it models the head motion in each angle as a series of motion templates rather than trying to recover a frame-wise function. The templates were chosen to reflect natural patterns in the head motion, and states for the IOMM were chosen based on statistics of the templates. This reduces the search space for the trajectories and rules out impossible motions such as discontinuities. For synthesis, our system warps the templates to account for the acoustic features and the warping parameters of the other angles. We show that our system is capable of recovering the statistics of the motion that were chosen for the states. Our system was then compared to a baseline that uses a frame-wise mapping based on previously published work. A subjective preference test involving multiple speakers showed that participants prefer the segment-based approach. Both systems were trained on storytelling free speech.
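To make the template-warping idea concrete, the following is a minimal Python sketch of the synthesis step only: each segment of a head-angle trajectory is produced by time-stretching and amplitude-scaling a stored motion template, with each warped segment anchored at the previous segment's endpoint so that no discontinuities can arise. The template library, the warping parameters, and their selection are hypothetical placeholders here; in the paper they come from the trained IOMM and GMMs driven by acoustic features, which is not reproduced in this sketch.

import numpy as np

def warp_template(template, duration_scale, amplitude_scale, offset):
    """Time-stretch and amplitude-scale a 1-D head-angle template.

    template        -- array of angle values at a fixed frame rate
    duration_scale  -- >1 slows the motion down, <1 speeds it up
    amplitude_scale -- scales the extent of the motion
    offset          -- shifts the segment to start at the current angle
    """
    n_out = max(2, int(round(len(template) * duration_scale)))
    # Linear interpolation resamples the template to the warped duration.
    src = np.linspace(0.0, len(template) - 1, n_out)
    warped = np.interp(src, np.arange(len(template)), template)
    return amplitude_scale * warped + offset

def synthesise(templates, segments):
    """Concatenate warped templates into one continuous trajectory.

    segments -- list of (template_id, duration_scale, amplitude_scale),
                e.g. the per-segment output of a trained sequence model.
    """
    angle, out = 0.0, []
    for tid, dur, amp in segments:
        tpl = templates[tid]
        # Offset chosen so the segment starts exactly at the current angle.
        seg = warp_template(tpl, dur, amp, offset=angle - amp * tpl[0])
        out.append(seg)
        angle = seg[-1]  # next segment continues from this endpoint
    return np.concatenate(out)

if __name__ == "__main__":
    # Two toy pitch-angle templates: a nod-like dip and a slow drift.
    templates = {
        "nod":   -np.sin(np.linspace(0, np.pi, 25)) * 5.0,
        "drift": np.linspace(0.0, 2.0, 40),
    }
    trajectory = synthesise(templates, [("nod", 1.2, 0.8), ("drift", 0.9, 1.5)])
    print(trajectory.shape, trajectory[:5])

Anchoring each warped segment at the previous endpoint is what makes the segment-wise search space small while guaranteeing continuity by construction, which is the property the abstract contrasts with frame-wise mappings.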


doi: 10.21437/Interspeech.2013-633

Cite as: Braude, D.A., Shimodaira, H., Ben-Youssef, A. (2013) Template-warping based speech driven head motion synthesis. Proc. Interspeech 2013, 2763-2767, doi: 10.21437/Interspeech.2013-633

@inproceedings{braude13_interspeech,
  author={David Adam Braude and Hiroshi Shimodaira and Atef Ben-Youssef},
  title={{Template-warping based speech driven head motion synthesis}},
  year=2013,
  booktitle={Proc. Interspeech 2013},
  pages={2763--2767},
  doi={10.21437/Interspeech.2013-633}
}