Learning Acoustic Word Embeddings with Temporal Context for Query-by-Example Speech Search

Yougen Yuan, Cheung-Chi Leung, Lei Xie, Hongjie Chen, Bin Ma, Haizhou Li


We propose to learn acoustic word embeddings with temporal context for query-by-example (QbE) speech search. The temporal context includes the leading and trailing word sequences of a word. We assume that spoken word pairs exist in the training database. We pad the word pairs with their original temporal context to form fixed-length speech segment pairs. We obtain acoustic word embeddings from a deep convolutional neural network (CNN) trained on the speech segment pairs with a triplet loss. By shifting a fixed-length analysis window through the search content, we obtain a running sequence of embeddings, so searching for a spoken query reduces to matching acoustic word embeddings. Experiments show that the proposed acoustic word embeddings learned with temporal context are effective in QbE speech search. They outperform state-of-the-art frame-level feature representations and reduce run-time computation, since no dynamic time warping is required. We also find that sufficient speech segment pairs are important for training the deep CNN to produce effective acoustic word embeddings.
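The search procedure the abstract describes (embed fixed-length segments, train with a triplet loss, then slide a fixed-length window over the search content and match embeddings) can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: `embed` is a mean-pooling stand-in for the deep CNN, and cosine similarity and the margin value are assumed choices rather than details taken from the paper.

```python
import numpy as np

def embed(segment):
    """Stand-in for the paper's deep CNN embedder: mean-pool the frames
    of a fixed-length segment (window_frames, feat_dim) -> (feat_dim,)."""
    return segment.mean(axis=0)

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def triplet_loss(anchor, positive, negative, margin=0.15):
    """Hinge triplet loss on cosine similarity: an anchor segment should be
    more similar to its paired word (positive) than to a different word
    (negative). The margin of 0.15 is an assumed hyperparameter."""
    return max(0.0, margin - cosine(anchor, positive) + cosine(anchor, negative))

def qbe_search(query_segment, search_frames, window, hop=1):
    """Slide a fixed-length analysis window over the search content, embed
    each window, and score it against the query embedding. Returns the
    (start_frame, score) of the best-matching window."""
    q = embed(query_segment)
    scores = []
    for start in range(0, len(search_frames) - window + 1, hop):
        e = embed(search_frames[start:start + window])
        scores.append((start, cosine(q, e)))
    return max(scores, key=lambda s: s[1])
```

Because each window is embedded once and compared by a single vector similarity, the per-window cost is constant, which is why no dynamic time warping is needed at search time.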


DOI: 10.21437/Interspeech.2018-1010

Cite as: Yuan, Y., Leung, C., Xie, L., Chen, H., Ma, B., Li, H. (2018) Learning Acoustic Word Embeddings with Temporal Context for Query-by-Example Speech Search. Proc. Interspeech 2018, 97-101, DOI: 10.21437/Interspeech.2018-1010.


@inproceedings{Yuan2018,
  author={Yougen Yuan and Cheung-Chi Leung and Lei Xie and Hongjie Chen and Bin Ma and Haizhou Li},
  title={Learning Acoustic Word Embeddings with Temporal Context for Query-by-Example Speech Search},
  year=2018,
  booktitle={Proc. Interspeech 2018},
  pages={97--101},
  doi={10.21437/Interspeech.2018-1010},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1010}
}