An essential step toward understanding the mechanisms underlying perceptual categorization is to identify which portions of a physical stimulus modulate the responses of our perceptual system. More specifically, in the context of speech comprehension, it remains unclear what acoustic information listeners use to categorize a speech stimulus as one phoneme or another. Here we propose to adapt a Generalized Linear Model (GLM) with smooth priors, already used in the visual domain to estimate so-called classification images, to auditory experiments. We show how this promising approach can be applied to the identification of fine functional acoustic cues in speech perception.
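The core idea of GLM-based classification images can be illustrated with a minimal, self-contained sketch. The following is not the authors' implementation: it simulates a hypothetical observer who categorizes noisy one-dimensional stimuli by weighting them with an internal template, then recovers that template by fitting a logistic GLM to the trial-by-trial noise fields and binary responses. A plain L2 (ridge) penalty stands in here for the smooth priors described in the paper; all stimulus dimensions, trial counts, and the template shape are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: the observer's decisions depend on an internal
# template (the "true" classification image) applied to white-noise stimuli.
n_trials, n_dims = 5000, 32
template = np.sin(np.linspace(0, np.pi, n_dims))   # assumed internal cue weighting
X = rng.normal(size=(n_trials, n_dims))            # white-noise stimuli, one row per trial
p = 1.0 / (1.0 + np.exp(-(X @ template)))          # observer's logistic decision rule
y = (rng.random(n_trials) < p).astype(float)       # simulated binary responses

# Fit a logistic GLM by gradient descent; the L2 penalty (lam) is a crude
# stand-in for the smooth priors used in the paper.
w = np.zeros(n_dims)
lam, lr = 1.0, 0.1
for _ in range(500):
    pred = 1.0 / (1.0 + np.exp(-(X @ w)))
    grad = X.T @ (pred - y) / n_trials + lam * w / n_trials
    w -= lr * grad

# The estimated weights (the classification image) should correlate
# strongly with the true template used by the simulated observer.
r = np.corrcoef(w, template)[0, 1]
print(f"correlation with true template: {r:.2f}")
```

In a real auditory experiment, each row of `X` would hold the noise added to the speech stimulus on one trial (e.g., a time-frequency representation flattened into a vector), and `y` would hold the listener's phoneme responses; the fitted weights then show which time-frequency regions drove categorization.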
Bibliographic reference. Varnet, Léo / Knoblauch, Kenneth / Meunier, Fanny / Hoen, Michel (2013): "Show me what you listen to! Auditory classification images can reveal the processing of fine acoustic cues during speech categorization", in Proc. INTERSPEECH 2013, pp. 3167-3171.