Recognizing Multi-Talker Speech with Permutation Invariant Training

Dong Yu, Xuankai Chang, Yanmin Qian


In this paper, we propose a novel technique for the direct recognition of multiple speech streams from a single channel of mixed speech, without separating them first. Our technique is based on permutation invariant training (PIT) for automatic speech recognition (ASR). In PIT-ASR, we compute the average cross entropy (CE) over all frames of the whole utterance for each possible output-target assignment, pick the assignment with the minimum CE, and optimize the model for that assignment. PIT-ASR forces all frames of the same speaker to be aligned with the same output layer, elegantly solving the label permutation problem and the speaker tracing problem in one shot. Our experiments on artificially mixed AMI data show that the proposed approach is very promising.
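The utterance-level PIT criterion described in the abstract can be sketched as follows. This is an illustrative reimplementation, not code from the paper: the function name, the NumPy representation of per-frame log posteriors, and the label arrays are all assumptions made for the example.

```python
from itertools import permutations
import numpy as np

def pit_cross_entropy(log_probs, targets):
    """Utterance-level permutation-invariant cross entropy (illustrative).

    log_probs: list of S arrays, each (T, C) -- per-frame log posteriors
               from the S output layers of the network.
    targets:   list of S arrays, each (T,)   -- per-frame class labels
               for the S speakers.
    Returns (min_ce, best_assignment): the lowest average CE over all
    output-to-speaker assignments, and the assignment that achieves it.
    """
    S = len(log_probs)
    T = log_probs[0].shape[0]
    best_ce, best_perm = float("inf"), None
    for perm in permutations(range(S)):  # all S! candidate assignments
        # Score output stream s against the labels of speaker perm[s],
        # averaging the CE over every frame of the whole utterance.
        ce = -sum(log_probs[s][np.arange(T), targets[perm[s]]].sum()
                  for s in range(S)) / (S * T)
        if ce < best_ce:
            best_ce, best_perm = ce, perm
    return best_ce, best_perm
```

Because the minimum is taken once per utterance rather than per frame, every frame of a given speaker is scored against the same output layer, which is how the criterion handles label permutation and speaker tracing jointly.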


DOI: 10.21437/Interspeech.2017-305

Cite as: Yu, D., Chang, X., Qian, Y. (2017) Recognizing Multi-Talker Speech with Permutation Invariant Training. Proc. Interspeech 2017, 2456-2460, DOI: 10.21437/Interspeech.2017-305.


@inproceedings{Yu2017,
  author={Dong Yu and Xuankai Chang and Yanmin Qian},
  title={Recognizing Multi-Talker Speech with Permutation Invariant Training},
  year={2017},
  booktitle={Proc. Interspeech 2017},
  pages={2456--2460},
  doi={10.21437/Interspeech.2017-305},
  url={http://dx.doi.org/10.21437/Interspeech.2017-305}
}