We tackle the task of localizing speech signals on the horizontal plane using monaural cues. We show that the monaural cues embedded in speech are efficiently captured by patterns in the amplitude modulation spectrum. We demonstrate that, using these patterns, a linear Support Vector Machine can exploit direction-related information to learn to discriminate and classify sound location at high resolution. We further propose a straightforward and robust way of integrating information from the two ears: each ear is treated as an independent processor, and information is integrated at the decision level, thereby resolving, to a large extent, ambiguities in location.
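The decision-level integration described above can be illustrated with a minimal NumPy sketch. All names and data here are hypothetical: synthetic per-ear feature vectors stand in for amplitude-modulation-spectrum features, and a one-vs-rest least-squares linear classifier stands in for the paper's linear SVM. Each ear's classifier is trained independently, and their per-class decision scores are summed before taking the argmax.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 3 azimuth classes, synthetic per-ear feature
# vectors standing in for amplitude-modulation-spectrum features.
n_classes, n_feat, n_per_class = 3, 20, 30
means_left = rng.normal(0.0, 1.0, (n_classes, n_feat))
means_right = rng.normal(0.0, 1.0, (n_classes, n_feat))

def make_data(means):
    # Noisy samples around each class mean, one block per class.
    X = np.vstack([m + 0.3 * rng.normal(size=(n_per_class, n_feat))
                   for m in means])
    y = np.repeat(np.arange(n_classes), n_per_class)
    return X, y

def fit_linear(X, y):
    # One-vs-rest least-squares linear classifier
    # (a simple stand-in for a linear SVM).
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append bias term
    Y = np.eye(n_classes)[y]                   # one-hot targets
    W, *_ = np.linalg.lstsq(Xb, Y, rcond=None)
    return W

def decision_scores(W, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return Xb @ W  # one score per class

# Each ear is treated as an independent processor ...
XL, y = make_data(means_left)
XR, _ = make_data(means_right)
WL, WR = fit_linear(XL, y), fit_linear(XR, y)

# ... and information is integrated at the decision level:
# per-class scores from the two ears are summed, then argmax'd.
fused = decision_scores(WL, XL) + decision_scores(WR, XR)
pred = fused.argmax(axis=1)
acc = (pred == y).mean()
print("fused accuracy:", acc)
```

Summing decision scores (rather than features) keeps the two ears' processing pipelines independent, which is the robustness property the abstract highlights.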
Bibliographic reference. Kliper, Roi / Kayser, Hendrik / Weinshall, Daphna / Nelken, Israel / Anemüller, Jörn (2011): "Monaural azimuth localization using spectral dynamics of speech", in Proc. INTERSPEECH 2011, pp. 33-36.