ISCA Archive Interspeech 2021

Fusion of Embeddings Networks for Robust Combination of Text Dependent and Independent Speaker Recognition

Ruirui Li, Chelsea J.-T. Ju, Zeya Chen, Hongda Mao, Oguz Elibol, Andreas Stolcke

By implicitly recognizing a user based on his/her speech input, speaker identification enables many downstream applications, such as personalized system behavior and expedited shopping checkouts. Based on whether the speech content is constrained or not, both text-dependent (TD) and text-independent (TI) speaker recognition models may be used. We wish to combine the advantages of both types of models through an ensemble system to make more reliable predictions. However, any such combined approach has to be robust to incomplete inputs, i.e., when either TD or TI input is missing. As a solution we propose a fusion of embeddings network (foenet) architecture, combining joint learning with neural attention. We compare foenet with four competitive baseline methods on a dataset of voice assistant inputs, and show that it achieves higher accuracy than the baseline and score fusion methods, especially in the presence of incomplete inputs.
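The abstract describes attention-based fusion of TD and TI embeddings that degrades gracefully when one input is missing. The sketch below is an illustrative toy version of that idea, not the paper's actual foenet implementation: the function name, the fixed attention parameter `w`, and the embedding dimensions are all assumptions for demonstration. Missing branches are simply masked out of the softmax, so the fused embedding falls back to the available branch.

```python
import numpy as np

def attention_fuse(emb_td, emb_ti, w):
    """Illustrative attention fusion of two speaker embeddings.

    emb_td, emb_ti: embedding vectors, or None when that input is missing.
    w: attention scoring vector (stands in for learned parameters).
    Returns a convex combination of the available embeddings.
    """
    embs, scores = [], []
    for e in (emb_td, emb_ti):
        if e is not None:          # mask out missing inputs
            embs.append(e)
            scores.append(float(w @ e))  # scalar attention score
    # Softmax over the scores of the available branches only
    weights = np.exp(np.array(scores) - np.max(scores))
    weights /= weights.sum()
    return sum(a * e for a, e in zip(weights, embs))
```

With only the TD embedding present, the fused output equals that embedding; with both present, the output is an attention-weighted mixture of the two.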


doi: 10.21437/Interspeech.2021-3

Cite as: Li, R., Ju, C.J.-T., Chen, Z., Mao, H., Elibol, O., Stolcke, A. (2021) Fusion of Embeddings Networks for Robust Combination of Text Dependent and Independent Speaker Recognition. Proc. Interspeech 2021, 4593-4597, doi: 10.21437/Interspeech.2021-3

@inproceedings{li21q_interspeech,
  author={Ruirui Li and Chelsea J.-T. Ju and Zeya Chen and Hongda Mao and Oguz Elibol and Andreas Stolcke},
  title={{Fusion of Embeddings Networks for Robust Combination of Text Dependent and Independent Speaker Recognition}},
  year={2021},
  booktitle={Proc. Interspeech 2021},
  pages={4593--4597},
  doi={10.21437/Interspeech.2021-3}
}