Latent Dirichlet Allocation Based Acoustic Data Selection for Automatic Speech Recognition

Mortaza Doulaty, Thomas Hain


Selecting in-domain data from a large pool of diverse and out-of-domain data is a non-trivial problem. In most cases, simply using all of the available data leads to sub-optimal, and in some cases even worse, performance compared to carefully selecting a matching set. This is true even for data-inefficient neural models. Acoustic Latent Dirichlet Allocation (aLDA) has been shown to be useful in a variety of speech technology tasks, including domain adaptation of acoustic models for automatic speech recognition and entity labeling for information retrieval. In this paper we propose to use aLDA as a data similarity criterion in a data selection framework. Given a large pool of out-of-domain and potentially mismatched data, the task is to select the training data that best matches a set of representative utterances sampled from a target domain. Our target data consists of around 32 hours of meeting data (both far-field and close-talk), and the pool contains 2k hours of meeting, talks, voice search, dictation, command-and-control, audio books, lectures, generic media and telephony speech data. The proposed technique for training data selection significantly outperforms random selection, posterior-based selection, as well as using all of the available data.
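The selection framework described above can be sketched as follows. This is not the authors' code; it is a minimal illustration under the common aLDA assumption that acoustic frames have already been quantised into discrete "acoustic words" (e.g. by k-means over MFCC vectors), so each utterance becomes a bag of word counts. An LDA model is fitted over target and pool utterances together, and pool utterances are ranked by the similarity of their topic posteriors to the mean posterior of the target sample. All data below is synthetic, and scikit-learn's `LatentDirichletAllocation` stands in for whatever LDA implementation the paper used.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)
VOCAB, TOPICS = 50, 5

# Two synthetic acoustic "domains": each is a distinct unigram
# distribution over the acoustic-word vocabulary.
dom_a = rng.dirichlet(np.full(VOCAB, 0.3))   # target-like domain
dom_b = rng.dirichlet(np.full(VOCAB, 0.3))   # mismatched domain

# Bags of acoustic words: 200 frames per utterance.
target = rng.multinomial(200, dom_a, size=20)            # in-domain sample
pool = np.vstack([rng.multinomial(200, dom_a, size=30),  # matched pool half
                  rng.multinomial(200, dom_b, size=30)]) # mismatched half

# Fit LDA jointly on target and pool utterances.
lda = LatentDirichletAllocation(n_components=TOPICS, random_state=0)
lda.fit(np.vstack([target, pool]))

# Per-utterance topic posteriors; similarity criterion here is cosine
# similarity to the mean topic posterior of the target set.
target_topics = lda.transform(target).mean(axis=0)
pool_topics = lda.transform(pool)
sims = pool_topics @ target_topics / (
    np.linalg.norm(pool_topics, axis=1) * np.linalg.norm(target_topics))

# Select the best-matching half of the pool for training.
selected = np.argsort(sims)[::-1][:30]
```

Ranking by distance between topic posteriors is only one plausible similarity criterion; the paper's exact scoring and selection thresholds should be taken from the full text.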


DOI: 10.21437/Interspeech.2019-1797

Cite as: Doulaty, M., Hain, T. (2019) Latent Dirichlet Allocation Based Acoustic Data Selection for Automatic Speech Recognition. Proc. Interspeech 2019, 3228-3232, DOI: 10.21437/Interspeech.2019-1797.


@inproceedings{Doulaty2019,
  author={Mortaza Doulaty and Thomas Hain},
  title={{Latent Dirichlet Allocation Based Acoustic Data Selection for Automatic Speech Recognition}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={3228--3232},
  doi={10.21437/Interspeech.2019-1797},
  url={http://dx.doi.org/10.21437/Interspeech.2019-1797}
}