Identifying Hearing Loss from Learned Speech Kernels

Shamima Najnin, Bonny Banerjee, Lisa Lucks Mendel, Masoumeh Heidari Kapourchali, Jayanta Kumar Dutta, Sungmin Lee, Chhayakanta Patro, Monique Pousson

Does a hearing-impaired individual’s speech reflect his hearing loss? To investigate this question, we recorded at least four hours of speech data from each of 29 adult individuals, both male and female, belonging to four classes: 3 normal-hearing, and 26 severely-to-profoundly hearing-impaired with high, medium or low speech intelligibility. Acoustic kernels were learned for each individual by capturing the distribution of his speech data points represented as 20 ms duration windows. These kernels were evaluated using a set of neurophysiological metrics, namely, distribution of characteristic frequencies, equal loudness contour, and bandwidth and Q10 value of the tuning curve. It turns out that, for our cohort, a feature vector can be constructed from four properties of these metrics that accurately separates hearing-impaired individuals with low-intelligibility speech from normal-hearing ones using a linear classifier. However, the overlap in the feature space between normal-hearing and hearing-impaired individuals increases as the speech becomes more intelligible. We conclude that a hearing-impaired individual’s speech does reflect his hearing loss, provided his loss of hearing has considerably affected the intelligibility of his speech.
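The abstract's pipeline (20 ms analysis windows, a 4-dimensional feature vector per speaker, a linear classifier) can be illustrated with a minimal sketch. This is not the authors' code: the framing parameters, the synthetic feature clusters, and the perceptron trainer are all illustrative stand-ins for the kernel-derived metric properties the paper actually uses.

```python
# Hypothetical sketch, not the paper's implementation: frame a signal into
# 20 ms windows and separate two classes of 4-dimensional feature vectors
# with a simple linear classifier (a perceptron).
import random

def frame_signal(signal, sample_rate=16000, win_ms=20):
    """Split a 1-D signal into non-overlapping 20 ms windows."""
    win = int(sample_rate * win_ms / 1000)  # 320 samples at 16 kHz
    return [signal[i:i + win] for i in range(0, len(signal) - win + 1, win)]

def train_perceptron(X, y, epochs=50, lr=0.1):
    """Fit a linear boundary w.x + b = 0 on labels in {-1, +1}."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b) <= 0:
                w = [wj + lr * yi * xj for wj, xj in zip(w, xi)]
                b += lr * yi
    return w, b

random.seed(0)
# Toy 4-d feature vectors standing in for the four metric properties;
# the two clusters are deliberately made linearly separable here.
normal = [[random.gauss(0, 0.3) for _ in range(4)] for _ in range(20)]
impaired = [[random.gauss(2, 0.3) for _ in range(4)] for _ in range(20)]
X, y = normal + impaired, [-1] * 20 + [1] * 20
w, b = train_perceptron(X, y)
errors = sum(1 for xi, yi in zip(X, y)
             if yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b) <= 0)
frames = frame_signal([0.0] * 16000)  # one second of audio -> 50 windows
print(len(frames), errors)
```

On well-separated clusters like these the perceptron reaches zero training errors; the paper's finding is that real normal-hearing and high-intelligibility hearing-impaired speakers overlap in feature space, so such clean separation holds only for the low-intelligibility group.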

DOI: 10.21437/Interspeech.2016-1488

Cite as

Najnin, S., Banerjee, B., Mendel, L.L., Kapourchali, M.H., Dutta, J.K., Lee, S., Patro, C., Pousson, M. (2016) Identifying Hearing Loss from Learned Speech Kernels. Proc. Interspeech 2016, 243-247.

@inproceedings{najnin16_interspeech,
  author={Shamima Najnin and Bonny Banerjee and Lisa Lucks Mendel and Masoumeh Heidari Kapourchali and Jayanta Kumar Dutta and Sungmin Lee and Chhayakanta Patro and Monique Pousson},
  title={Identifying Hearing Loss from Learned Speech Kernels},
  booktitle={Interspeech 2016},
  year={2016},
  pages={243--247},
  doi={10.21437/Interspeech.2016-1488}
}