Mutual information is commonly used in speech processing in the context of statistical mapping. Examples include the optimization of speech and speaker recognition algorithms, the computation of performance bounds on such algorithms, and the bandwidth extension of narrow-band speech signals. It is generally ignored that speech-derived data usually have an intrinsic dimensionality lower than that of the observation vectors (the dimensionality of the embedding space). In this paper, we show that such reduced dimensionality can significantly affect the accuracy of the mutual information estimate. We introduce a new method that removes the effects of singular probability density functions and does not require prior knowledge of the intrinsic dimensionality of the data. We show that the method is appropriate for speech-derived data.
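The core problem can be illustrated with a minimal NumPy sketch (this is not the paper's method, which corrects for singular densities without knowing the intrinsic dimension). When two variables concentrate on a lower-dimensional manifold, here the line y = x in the two-dimensional embedding space, the joint density becomes singular and a plug-in mutual information estimate diverges as the off-manifold spread shrinks:

```python
import numpy as np

def gaussian_mi_estimate(x, y):
    """Plug-in MI estimate in nats, assuming joint Gaussianity:
    I(X;Y) = -0.5 * ln(1 - rho^2), with rho the sample correlation."""
    rho = np.corrcoef(x, y)[0, 1]
    return -0.5 * np.log(1.0 - rho ** 2)

rng = np.random.default_rng(0)
n = 10_000
x = rng.standard_normal(n)

# As the noise scale s shrinks, the samples (x, y) concentrate on the
# line y = x: the intrinsic dimensionality drops from 2 toward 1, the
# joint pdf becomes singular, and the MI estimate grows without bound.
estimates = [gaussian_mi_estimate(x, x + s * rng.standard_normal(n))
             for s in (1.0, 0.1, 0.01)]
```

Each halving of the off-manifold noise adds a roughly constant amount to the estimate, so the value reported by the estimator is driven by the (arbitrary) residual spread rather than by any finite underlying quantity, which is the failure mode the paper addresses.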
Bibliographic reference. Nilsson, Mattias / Kleijn, W. Bastiaan (2007): "Mutual information and the speech signal", In INTERSPEECH-2007, 502-505.