ISCA Archive Interspeech 2021

Learning Speech Models from Multi-Modal Data

Karen Livescu

Speech is usually recorded as an acoustic signal, but it often appears in context with other signals. In addition to the acoustic signal, we may have available a corresponding visual scene, the video of the speaker, physiological signals such as the speaker’s movements or neural recordings, or other related signals. It is often possible to learn a better speech model or representation by considering the context provided by these additional signals, or to learn with less training data. Typical approaches to training from multi-modal data are based on the idea that models or representations of each modality should be in some sense predictive of the other modalities. Multi-modal approaches can also take advantage of the fact that the sources of noise or nuisance variables are different in different measurement modalities, so an additional (non-acoustic) modality can help learn a speech representation that suppresses such noise. This talk will survey several lines of work in this area, both older and newer. It will cover some basic techniques from machine learning and statistics, as well as specific models and applications for speech.
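As one concrete illustration of the "each modality predicts the other" idea (a generic sketch, not a method from the talk itself), a symmetric contrastive objective can align paired acoustic and visual embeddings so that true pairs score higher than mismatched pairs within a batch; all function and parameter names here are illustrative:

```python
import numpy as np

def log_softmax(x, axis):
    """Numerically stable log-softmax along the given axis."""
    x = x - x.max(axis=axis, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=axis, keepdims=True))

def multiview_contrastive_loss(acoustic, visual, temperature=0.1):
    """InfoNCE-style loss over a batch of paired embeddings.

    acoustic, visual: arrays of shape (batch, dim), where row i of each
    array comes from the same underlying utterance. Each acoustic
    embedding should be most similar to its own paired visual embedding
    among all candidates in the batch, and vice versa.
    """
    # L2-normalize so the dot product is cosine similarity
    a = acoustic / np.linalg.norm(acoustic, axis=1, keepdims=True)
    v = visual / np.linalg.norm(visual, axis=1, keepdims=True)
    sim = (a @ v.T) / temperature  # (batch, batch) similarity matrix

    # True pairs lie on the diagonal; score them in both directions
    diag = np.arange(sim.shape[0])
    loss_a2v = -log_softmax(sim, axis=1)[diag, diag].mean()
    loss_v2a = -log_softmax(sim, axis=0)[diag, diag].mean()
    return 0.5 * (loss_a2v + loss_v2a)
```

Because the nuisance variables differ across modalities (acoustic background noise is uncorrelated with the visual scene), an objective like this encourages the learned representation to keep the shared content and discard modality-specific noise.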

Cite as: Livescu, K. (2021) Learning Speech Models from Multi-Modal Data. Proc. Interspeech 2021

@inproceedings{livescu21_interspeech,
  author={Karen Livescu},
  title={{Learning Speech Models from Multi-Modal Data}},
  year=2021,
  booktitle={Proc. Interspeech 2021}
}