A Deep Neural Network for Short-Segment Speaker Recognition

Amirhossein Hajavi, Ali Etemad


Today’s interactive devices, such as smartphone assistants and smart speakers, often deal with short-duration speech segments. As a result, speaker recognition systems integrated into such devices are much better served by models capable of performing the recognition task on short-duration utterances. In this paper, a new deep neural network, UtterIdNet, capable of performing speaker recognition with short speech segments is proposed. Our proposed model utilizes a novel architecture that makes it suitable for short-segment speaker recognition by making more efficient use of the information available in short speech segments. UtterIdNet has been trained and tested on the VoxCeleb datasets, the latest benchmarks in speaker recognition. Evaluations for different segment durations show consistent and stable performance for short segments, with significant improvement over previous models for segments of 2 seconds, 1 second, and especially sub-second durations (250 ms and 500 ms).
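Evaluating at a fixed segment duration, as described above, amounts to chopping each test utterance into fixed-length pieces before scoring. A minimal sketch of such slicing (the helper name, the 16 kHz sample rate, and the non-overlapping slicing policy are illustrative assumptions, not details taken from the paper):

```python
import numpy as np

def slice_segments(waveform, sample_rate=16000, segment_ms=500):
    """Split a 1-D waveform into non-overlapping fixed-duration segments.

    Any trailing samples shorter than segment_ms are dropped. Function name,
    sample rate, and slicing policy are assumptions for illustration only.
    """
    seg_len = int(sample_rate * segment_ms / 1000)
    n_segments = len(waveform) // seg_len
    return [waveform[i * seg_len:(i + 1) * seg_len] for i in range(n_segments)]

# Example: a 2-second utterance at 16 kHz yields four 500 ms segments.
utterance = np.zeros(2 * 16000)
segments = slice_segments(utterance, segment_ms=500)
print(len(segments), len(segments[0]))  # 4 segments of 8000 samples each
```

Each resulting segment would then be fed to the recognition model independently, which is how duration-specific results such as those for 250 ms and 500 ms can be obtained.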


DOI: 10.21437/Interspeech.2019-2240

Cite as: Hajavi, A., Etemad, A. (2019) A Deep Neural Network for Short-Segment Speaker Recognition. Proc. Interspeech 2019, 2878-2882, DOI: 10.21437/Interspeech.2019-2240.


@inproceedings{Hajavi2019,
  author={Amirhossein Hajavi and Ali Etemad},
  title={{A Deep Neural Network for Short-Segment Speaker Recognition}},
  year={2019},
  booktitle={Proc. Interspeech 2019},
  pages={2878--2882},
  doi={10.21437/Interspeech.2019-2240},
  url={http://dx.doi.org/10.21437/Interspeech.2019-2240}
}