ISCA Archive Interspeech 2013

A new language independent, photo-realistic talking head driven by voice only

Xinjian Zhang, Lijuan Wang, Gang Li, Frank Seide, Frank K. Soong

We propose a new photo-realistic talking head driven by voice only, i.e., no linguistic information about the voice input is needed. The core of the new talking head is a context-dependent, multi-layer Deep Neural Network (DNN), discriminatively trained on hundreds of hours of speaker-independent speech data. The trained DNN maps acoustic speech input probabilistically to 9,000 tied "senone" states. For each photo-realistic talking head, an HMM-based lip-motion synthesizer is trained on the speaker's audio/visual training data, where states are statistically mapped to the corresponding lip images. At test time, for a given speech input, the DNN predicts the posterior probabilities of the likely states, and photo-realistic lip animation is then rendered through the DNN-predicted state lattice. The DNN trained on speaker-independent English data has also been tested with input in other languages, e.g., Mandarin and Spanish, to mimic lip movements cross-lingually. Subjective experiments show that the lip motions thus rendered for 15 non-English languages are highly synchronized with the audio input and perceptually photo-realistic to human eyes.
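To make the pipeline concrete, the sketch below shows a minimal, simplified version of the rendering step: per-frame senone posteriors from an acoustic DNN are used to select lip images. This is an illustrative approximation only; the paper renders through a DNN-predicted state lattice with an HMM-based synthesizer, whereas this sketch does a greedy per-frame pick with simple smoothing. All names (`render_lip_frames`, `state_to_lip_image`) are hypothetical, not from the paper.

```python
import numpy as np

def render_lip_frames(senone_posteriors, state_to_lip_image, smooth_window=3):
    """Pick one lip image per audio frame from DNN senone posteriors.

    senone_posteriors  : (T, S) array, per-frame posteriors over S senones
                         (the paper uses S = 9,000 tied senone states)
    state_to_lip_image : mapping from senone index to a lip image; stands in
                         for the paper's HMM-based state-to-image synthesizer
    """
    T, _ = senone_posteriors.shape
    frames = []
    for t in range(T):
        # Average posteriors over a short window to reduce frame-to-frame
        # jitter -- a crude stand-in for lattice-based rendering.
        lo, hi = max(0, t - smooth_window), min(T, t + smooth_window + 1)
        avg = senone_posteriors[lo:hi].mean(axis=0)
        best_senone = int(np.argmax(avg))
        frames.append(state_to_lip_image[best_senone])
    return frames
```

Because the DNN is trained on speaker-independent acoustic data and outputs only senone posteriors, nothing in this step depends on the input language, which is what enables the cross-lingual behavior reported in the paper.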


doi: 10.21437/Interspeech.2013-629

Cite as: Zhang, X., Wang, L., Li, G., Seide, F., Soong, F.K. (2013) A new language independent, photo-realistic talking head driven by voice only. Proc. Interspeech 2013, 2743-2747, doi: 10.21437/Interspeech.2013-629

@inproceedings{zhang13d_interspeech,
  author={Xinjian Zhang and Lijuan Wang and Gang Li and Frank Seide and Frank K. Soong},
  title={{A new language independent, photo-realistic talking head driven by voice only}},
  year=2013,
  booktitle={Proc. Interspeech 2013},
  pages={2743--2747},
  doi={10.21437/Interspeech.2013-629}
}