Video-Driven Speech Reconstruction Using Generative Adversarial Networks

Konstantinos Vougioukas, Pingchuan Ma, Stavros Petridis, Maja Pantic


Speech is a means of communication that relies on both audio and visual information. The absence of one modality can often lead to confusion or misinterpretation. In this paper we present an end-to-end temporal model capable of directly synthesising audio from silent video, without transforming to and from intermediate features. Our proposed approach, based on generative adversarial networks (GANs), produces natural-sounding, intelligible speech that is synchronised with the video. The performance of our model is evaluated on the GRID dataset for both speaker-dependent and speaker-independent scenarios. To the best of our knowledge, this is the first method that maps video directly to raw audio and the first to produce intelligible speech when tested on previously unseen speakers. We evaluate the synthesised audio not only on sound quality but also on the accuracy of the spoken words.
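
The paper itself specifies the architecture; as a rough illustration of the idea only, below is a minimal PyTorch sketch (not the authors' code) of a video-to-waveform GAN: a per-frame CNN encoder, a GRU for temporal modelling, a decoder that emits a chunk of raw audio samples per video frame, and a 1-D convolutional discriminator on the waveform. All module names, layer sizes, and the 25 fps / 16 kHz figures are illustrative assumptions.

# Hypothetical sketch of a video-driven speech-reconstruction GAN.
# Sizes (64x64 mouth crops, 16 kHz audio, 25 fps video) are assumptions,
# not values taken from the paper.
import torch
import torch.nn as nn

class VideoEncoder(nn.Module):
    """Per-frame CNN: (B, T, 1, 64, 64) grayscale mouth crops -> (B, T, feat_dim)."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # 16 -> 8
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(128, feat_dim)

    def forward(self, frames):
        b, t = frames.shape[:2]
        h = self.cnn(frames.flatten(0, 1)).flatten(1)   # (B*T, 128)
        return self.fc(h).view(b, t, -1)                # (B, T, feat_dim)

class Generator(nn.Module):
    """Maps a frame sequence to raw audio end-to-end, with no intermediate
    spectrogram. At 16 kHz audio and 25 fps video, each video frame
    corresponds to 640 waveform samples."""
    def __init__(self, feat_dim=256, samples_per_frame=640):
        super().__init__()
        self.encoder = VideoEncoder(feat_dim)
        self.rnn = nn.GRU(feat_dim, 512, batch_first=True)  # temporal model
        self.decoder = nn.Sequential(
            nn.Linear(512, 1024), nn.ReLU(),
            nn.Linear(1024, samples_per_frame), nn.Tanh(),   # samples in [-1, 1]
        )

    def forward(self, frames):
        z, _ = self.rnn(self.encoder(frames))       # (B, T, 512)
        chunks = self.decoder(z)                    # (B, T, 640)
        return chunks.flatten(1)                    # (B, T*640) raw waveform

class Discriminator(nn.Module):
    """1-D conv critic on raw waveforms: real vs. generated speech."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 32, 41, stride=4, padding=20), nn.LeakyReLU(0.2),
            nn.Conv1d(32, 64, 41, stride=4, padding=20), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(64, 1),
        )

    def forward(self, wav):                # (B, samples)
        return self.net(wav.unsqueeze(1))  # (B, 1) real/fake logit

# Usage sketch: the generator's adversarial objective.
gen, disc = Generator(), Discriminator()
video = torch.randn(2, 75, 1, 64, 64)     # 2 clips, 75 frames (3 s at 25 fps)
fake = gen(video)                         # (2, 48000) raw audio, 3 s at 16 kHz
adv_loss = nn.functional.binary_cross_entropy_with_logits(
    disc(fake), torch.ones(2, 1))         # push generated audio towards "real"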


DOI: 10.21437/Interspeech.2019-1445

Cite as: Vougioukas, K., Ma, P., Petridis, S., Pantic, M. (2019) Video-Driven Speech Reconstruction Using Generative Adversarial Networks. Proc. Interspeech 2019, 4125-4129, DOI: 10.21437/Interspeech.2019-1445.


@inproceedings{Vougioukas2019,
  author={Konstantinos Vougioukas and Pingchuan Ma and Stavros Petridis and Maja Pantic},
  title={{Video-Driven Speech Reconstruction Using Generative Adversarial Networks}},
  year={2019},
  booktitle={Proc. Interspeech 2019},
  pages={4125--4129},
  doi={10.21437/Interspeech.2019-1445},
  url={http://dx.doi.org/10.21437/Interspeech.2019-1445}
}