Semantic Lattice Processing in Contextual Automatic Speech Recognition for Google Assistant

Leonid Velikovich, Ian Williams, Justin Scheiner, Petar Aleksic, Pedro Moreno, Michael Riley


Recent interest in intelligent assistants has increased demand for Automatic Speech Recognition (ASR) systems that can utilize contextual information to adapt to the user's preferences or the current device state. For example, a user might be more likely to refer to their favorite songs when giving a "music playing" command or request to watch a movie starring a particular favorite actor when giving a "movie playing" command. Similarly, when a device is in a "music playing" state, a user is more likely to give volume control commands. In this paper, we explore using semantic information inside the ASR word lattice by employing Named Entity Recognition (NER) to identify and boost contextually relevant paths in order to improve speech recognition accuracy. We use broad semantic classes comprising millions of entities, such as songs and musical artists, to tag relevant semantic entities in the lattice. We show that our method reduces Word Error Rate (WER) by 12.0% relative on a Google Assistant "media playing" commands test set, while not affecting WER on a test set containing commands unrelated to media.
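The core idea the abstract describes, boosting contextually relevant paths in the word lattice, can be illustrated with a toy sketch. This is not the authors' implementation: the lattice representation, the single-word entity matching (a stand-in for full NER span tagging), and the example words and costs are all illustrative assumptions. Arcs carry words and negative-log costs; arcs matching a contextual entity set get a score boost, which can change which path is cheapest.

```python
# Minimal sketch of contextual path boosting in a word lattice (a DAG).
# Assumptions: states are numbered in topological order (src < dst),
# costs are negative log probabilities, and entities are single words.
from collections import defaultdict

def best_path(arcs, start, final, entity_words=frozenset(), boost=0.0):
    """Return the word sequence of the cheapest path, with `boost`
    subtracted from the cost of arcs whose word is a tagged entity."""
    graph = defaultdict(list)
    for src, dst, word, cost in arcs:
        if word in entity_words:
            cost -= boost  # reward contextually relevant arcs
        graph[src].append((dst, word, cost))

    # Dynamic programming over states in topological order.
    best = {start: (0.0, [])}
    for state in sorted(graph):
        if state not in best:
            continue
        c, words = best[state]
        for dst, word, cost in graph[state]:
            cand = (c + cost, words + [word])
            if dst not in best or cand[0] < best[dst][0]:
                best[dst] = cand
    return best[final][1]

# Toy lattice: the correct hypothesis "play despacito" competes with an
# acoustically similar but contextually wrong "play this hero".
arcs = [
    (0, 1, "play", 0.1),
    (1, 3, "despacito", 1.2),  # correct, but costly under the base LM
    (1, 2, "this", 0.5),
    (2, 3, "hero", 0.4),
]

print(best_path(arcs, 0, 3))  # baseline decoding
print(best_path(arcs, 0, 3, entity_words={"despacito"}, boost=0.5))
```

Without context the cheaper path wins; with "despacito" tagged as a favorite song and boosted, the correct hypothesis becomes cheapest. The paper's method operates at this level but with semantic classes of millions of entities tagged by NER inside the lattice.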


DOI: 10.21437/Interspeech.2018-2453

Cite as: Velikovich, L., Williams, I., Scheiner, J., Aleksic, P., Moreno, P., Riley, M. (2018) Semantic Lattice Processing in Contextual Automatic Speech Recognition for Google Assistant. Proc. Interspeech 2018, 2222-2226, DOI: 10.21437/Interspeech.2018-2453.


@inproceedings{Velikovich2018,
  author={Leonid Velikovich and Ian Williams and Justin Scheiner and Petar Aleksic and Pedro Moreno and Michael Riley},
  title={Semantic Lattice Processing in Contextual Automatic Speech Recognition for Google Assistant},
  year=2018,
  booktitle={Proc. Interspeech 2018},
  pages={2222--2226},
  doi={10.21437/Interspeech.2018-2453},
  url={http://dx.doi.org/10.21437/Interspeech.2018-2453}
}