Humans possess a remarkable ability to attend to a single speaker's voice in a multi-talker background. How the auditory system manages to extract intelligible speech under such acoustically complex and adverse listening conditions is not known, and indeed, it is not clear how attended speech is internally represented. Here, using multi-electrode surface recordings from the cortex of subjects engaged in a listening task with two simultaneous speakers, we demonstrate that population responses in non-primary human auditory cortex faithfully encode critical features of attended speech: speech spectrograms reconstructed based on cortical responses to the mixture of speakers reveal salient spectral and temporal features of the attended speaker, as if listening to that speaker alone. A simple classifier trained solely on examples of single speakers can decode both attended words and speaker identity. We find that task performance is well predicted by a rapid increase in attention-modulated neural selectivity across both local single-electrode and population-level cortical responses. These findings demonstrate that the cortical representation of speech does not merely reflect the external acoustic environment, but instead gives rise to the perceptual aspects relevant for the listener's intended goal.
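The spectrogram-reconstruction idea in the abstract can be sketched as a time-lagged linear regression from multi-electrode responses back to the stimulus spectrogram. This is a minimal illustration with synthetic data standing in for the cortical recordings: the paper's actual lag window, regularization, and data shapes are not specified here, so every shape and parameter below (`T`, `F`, `E`, `lags`, the ridge weight `a`) is an illustrative assumption, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins (assumptions, not the paper's data):
#   spec: attended-speech spectrogram, shape (time, frequency bands)
#   resp: multi-electrode cortical responses, shape (time, electrodes)
T, F, E, lags = 2000, 16, 32, 10
spec = rng.standard_normal((T, F))
mix = rng.standard_normal((E, F))          # hypothetical electrode tuning
resp = spec @ mix.T + 0.5 * rng.standard_normal((T, E))  # noisy linear drive

def lagged(X, lags):
    """Stack time-lagged copies of X: (T, E) -> (T, E * lags)."""
    cols = [np.roll(X, k, axis=0) for k in range(lags)]
    Z = np.concatenate(cols, axis=1)
    Z[:lags] = 0.0  # zero the rows contaminated by wrap-around
    return Z

Z = lagged(resp, lags)

# Ridge-regularized least squares: W = (Z'Z + a I)^-1 Z' spec
a = 1e2
W = np.linalg.solve(Z.T @ Z + a * np.eye(Z.shape[1]), Z.T @ spec)
recon = Z @ W

# Quality metric: correlation between reconstructed and actual
# spectrogram in each frequency band, averaged across bands.
r = [np.corrcoef(recon[lags:, f], spec[lags:, f])[0, 1] for f in range(F)]
mean_r = float(np.mean(r))
print(f"mean reconstruction correlation: {mean_r:.2f}")
```

In an attention experiment of the kind described, the same decoder would be fit on single-speaker trials and then applied to mixture trials; the claim in the abstract is that the reconstruction then resembles the attended speaker's spectrogram rather than the acoustic mixture.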
Bibliographic reference. Mesgarani, Nima / Chang, Edward (2012): "Speech and speaker separation in human auditory cortex", In INTERSPEECH-2012, 1480-1483.