ISCA Archive Interspeech 2016

Glimpse-Based Metrics for Predicting Speech Intelligibility in Additive Noise Conditions

Yan Tang, Martin Cooke

The glimpsing model of speech perception in noise operates by recognising those speech-dominant spectro-temporal regions, or glimpses, that survive energetic masking; hence, a speech recognition component is an integral part of the model. The current study evaluates whether a simpler family of metrics based solely on quantifying the amount of supra-threshold target speech available after energetic masking can account for subjective intelligibility. The predictive power of glimpse-based metrics is compared for natural, processed and synthetic speech in the presence of stationary and fluctuating maskers. These metrics are raw glimpse proportion, extended glimpse proportion, and two further refinements: one, FMGP, incorporates a component simulating the effect of forward masking; the other, HEGP, selects speech-dominant spectro-temporal regions with above-average energy in the noisy speech. The metrics are compared alongside a state-of-the-art non-glimpsing metric, using three large datasets of listener scores. Both FMGP and HEGP equal or improve upon the predictive power of the raw and extended metrics, with across-masker correlations ranging from 0.81–0.92; both metrics equal or exceed the state-of-the-art metric in all conditions. These outcomes suggest that easily computed measures of unmasked, supra-threshold speech can serve as robust proxies for intelligibility across a range of speech styles and additive masking conditions.
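The raw glimpse proportion underlying these metrics can be illustrated with a minimal sketch: count the spectro-temporal cells in which the speech level exceeds the masker level by a local SNR criterion (3 dB is a common choice in the glimpsing literature). This is an assumption-laden toy, not the paper's implementation — the actual metrics use an auditory front-end, and the threshold, `glimpse_proportion` function name, and array shapes here are illustrative only.

```python
# Hedged sketch of a raw glimpse proportion (GP) computation: the fraction
# of spectro-temporal cells where speech exceeds the masker by a local SNR
# criterion. The 3 dB threshold and dB-domain inputs are assumptions; the
# paper's metrics operate on auditory-model excitation patterns.
import numpy as np

def glimpse_proportion(speech_db, noise_db, lc_db=3.0):
    """speech_db, noise_db: 2-D arrays (frequency x time) of levels in dB.
    Returns the proportion of cells where speech - noise > lc_db."""
    glimpses = (speech_db - noise_db) > lc_db
    return glimpses.mean()

# Toy example with made-up spectro-temporal levels.
rng = np.random.default_rng(0)
speech = rng.normal(60.0, 10.0, size=(32, 100))  # hypothetical dB levels
noise = np.full((32, 100), 60.0)                 # stationary masker
print(f"glimpse proportion: {glimpse_proportion(speech, noise):.3f}")
```

The extended, FMGP and HEGP variants refine which cells count as glimpses (e.g. simulating forward masking, or keeping only above-average-energy regions), but all reduce to summary statistics of this kind over the unmasked speech.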

doi: 10.21437/Interspeech.2016-14

Cite as: Tang, Y., Cooke, M. (2016) Glimpse-Based Metrics for Predicting Speech Intelligibility in Additive Noise Conditions. Proc. Interspeech 2016, 2488-2492, doi: 10.21437/Interspeech.2016-14

@inproceedings{tang16_interspeech,
  author={Yan Tang and Martin Cooke},
  title={{Glimpse-Based Metrics for Predicting Speech Intelligibility in Additive Noise Conditions}},
  year=2016,
  booktitle={Proc. Interspeech 2016},
  pages={2488--2492},
  doi={10.21437/Interspeech.2016-14}
}