13th Annual Conference of the International Speech Communication Association

Portland, OR, USA
September 9-13, 2012

Exploring Joint Equalization of Spatial-Temporal Contextual Statistics of Speech Features for Robust Speech Recognition

Hsin-Ju Hsieh (1,2), Jeih-weih Hung (2), Berlin Chen (1)

(1) National Taiwan Normal University, Taipei, Taiwan; (2) National Chi Nan University, Taiwan

Histogram equalization (HEQ) of speech features has recently become an active focus of research in robust speech recognition, owing to its simple formulation and remarkable performance. This paper extends that line of research in two significant respects. First, we propose a novel framework for joint equalization of the spatial-temporal contextual statistics of speech features: simple differencing and averaging operations are leveraged to capture the contextual relationships among feature vector components, both across dimensions and across consecutive speech frames, for feature normalization. Second, we exploit a polynomial-fitting scheme that efficiently approximates the inverse of the cumulative distribution function of the training speech, working in conjunction with the presented normalization framework; compared with conventional HEQ methods, this offers lower storage and computation costs. All experiments were carried out on the Aurora-2 database and task. The methods derived from the proposed framework were thoroughly evaluated against other popular robustness methods, and the comparisons confirm their utility.

Index Terms: noise robustness, histogram equalization, feature contextual statistics
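The abstract names two ideas without implementation detail: building contextual streams by differencing and averaging, and replacing the usual table-based HEQ lookup with a fitted polynomial for the reference inverse CDF. The following is a minimal, hypothetical sketch of both under assumed conventions (per-dimension equalization via empirical CDF ranks, temporal context from adjacent frame pairs); function names and the polynomial order are illustrative, not taken from the paper.

```python
import numpy as np

def fit_inverse_cdf_poly(ref, order=7):
    """Fit a polynomial u -> x approximating the reference inverse CDF
    for one feature dimension. Storing only the coefficients (rather than
    a full histogram/quantile table) is what yields the lower storage cost
    the abstract mentions. Sketch; the order is an assumption."""
    x = np.sort(np.asarray(ref, dtype=float))
    # Empirical CDF positions in (0, 1) for the sorted reference values.
    u = (np.arange(1, x.size + 1) - 0.5) / x.size
    return np.polyfit(u, x, order)

def heq(test, coeffs):
    """Map each test value to the reference value at the same empirical
    CDF rank, evaluated through the fitted inverse-CDF polynomial."""
    ranks = np.argsort(np.argsort(test))   # rank of each test value
    u = (ranks + 0.5) / test.size          # test empirical CDF in (0, 1)
    return np.polyval(coeffs, u)

def contextual_streams(feats):
    """From a (frames x dims) feature matrix, build temporal-context
    streams: the average and the difference of consecutive frames.
    Each stream can be equalized dimension-by-dimension, and the frames
    recovered as avg -/+ diff/2. The spatial counterpart applies the
    same operations across feature dimensions instead of frames."""
    avg = 0.5 * (feats[1:] + feats[:-1])
    diff = feats[1:] - feats[:-1]
    return avg, diff
```

Note the averaging/differencing pair is invertible (frame t = avg - diff/2, frame t+1 = avg + diff/2), so equalized streams can be mapped back to normalized frame-level features.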


Bibliographic reference. Hsieh, Hsin-Ju / Hung, Jeih-weih / Chen, Berlin (2012): "Exploring joint equalization of spatial-temporal contextual statistics of speech features for robust speech recognition", in INTERSPEECH-2012, pp. 2622-2625.