ISCA Archive Interspeech 2017

Null-Hypothesis LLR: A Proposal for Forensic Automatic Speaker Recognition

Yosef A. Solewicz, Michael Jessen, David van der Vloed

A new method named Null-Hypothesis LLR (H0LLR) is proposed for forensic automatic speaker recognition. The method takes into account the fact that forensically realistic data are difficult to collect and that inter-individual variation is generally better represented than intra-individual variation. According to the proposal, intra-individual variation is modeled as a projection from case-customized inter-individual variation. Calibrated log likelihood ratios (LLRs) computed with the H0LLR method were evaluated on two forensically founded telephone-interception test sets: the German-based GFS 2.0 corpus and the Dutch-based NFI-FRITS corpus. Five automatic speaker recognition systems were tested, with the scores or LLRs produced by these systems forming the input to H0LLR. The speaker-discrimination and calibration performance of H0LLR is comparable to that of the system-internal LLR calculation methods. This shows that external data and strategies that rely on data from outside the forensic domain, without case customization, are not necessary. It is also shown that H0LLR reduces the diversity of LLR output patterns across different automatic systems. This is important for the credibility of the likelihood-ratio framework in forensics, and for its application in forensic automatic speaker recognition in particular.
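The abstract does not give the H0LLR formulas, so the following is not the paper's method. It is only a minimal sketch of the generic step the abstract presupposes: mapping a recognizer's raw comparison scores to calibrated LLRs via affine logistic-regression calibration, a standard technique in forensic voice comparison. All function names and the training data are illustrative assumptions.

```python
import numpy as np

def fit_calibration(scores_same, scores_diff, lr=0.1, steps=2000):
    """Fit an affine score-to-LLR mapping (logistic regression) by
    gradient descent on cross-entropy.

    scores_same: scores from same-speaker trials
    scores_diff: scores from different-speaker trials
    Returns slope a and offset b such that llr ≈ a * score + b.
    """
    x = np.concatenate([scores_same, scores_diff])
    # Labels: 1 for same-speaker trials, 0 for different-speaker trials.
    y = np.concatenate([np.ones(len(scores_same)), np.zeros(len(scores_diff))])
    a, b = 1.0, 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(a * x + b)))  # sigmoid of affine score
        a -= lr * np.mean((p - y) * x)          # gradient step on slope
        b -= lr * np.mean(p - y)                # gradient step on offset
    return a, b

def score_to_llr(score, a, b):
    """Apply the fitted affine calibration to a raw score."""
    return a * score + b
```

With balanced (equal-prior) training trials, the logistic-regression log-odds approximate calibrated LLRs; in practice forensic systems account for the training prior explicitly, which this sketch omits for brevity.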


doi: 10.21437/Interspeech.2017-1023

Cite as: Solewicz, Y.A., Jessen, M., Vloed, D.v.d. (2017) Null-Hypothesis LLR: A Proposal for Forensic Automatic Speaker Recognition. Proc. Interspeech 2017, 2849-2853, doi: 10.21437/Interspeech.2017-1023

@inproceedings{solewicz17_interspeech,
  author={Yosef A. Solewicz and Michael Jessen and David van der Vloed},
  title={{Null-Hypothesis LLR: A Proposal for Forensic Automatic Speaker Recognition}},
  year=2017,
  booktitle={Proc. Interspeech 2017},
  pages={2849--2853},
  doi={10.21437/Interspeech.2017-1023}
}