11th Annual Conference of the International Speech Communication Association

Makuhari, Chiba, Japan
September 26-30, 2010

Low-Dimensional Space Transforms of Posteriors in Speech Recognition

Jan Zelinka, Jan Trmal, Luděk Müller

University of West Bohemia, Czech Republic

In this paper we present three novel posterior transforms whose primary goal is a strong reduction of the feature vector size. The presented methods transform the posteriors into a 1-D or 2-D space. At such a high reduction ratio, the usually applied methods fail to preserve the discriminative information. In contrast, the presented methods were specifically designed to retain most of it. In our experiments, we used several combinations of commonly used feature extraction methods: PLP features (augmented with delta and acceleration coefficients) and two kinds of MLP-ANN features, the bottleneck (BN) features and the posterior estimates (PE). The experiments were designed with special attention to assessing possible performance improvements when the PLP features are combined either with the BN features or with the PE features whose dimensionality was reduced using the proposed feature transforms. The performance of the designed transforms was tested on two different speech corpora: the telephone-speech SpeechDat-East corpus and a multi-modal Czech audio-visual corpus.
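The paper's specific transforms are not reproduced in this abstract, but the general idea of mapping an MLP posterior vector into a 2-D space can be sketched as follows. This is a minimal illustrative example, not the authors' method: it assumes each class is assigned a fixed 2-D anchor point (a hypothetical choice) and represents a posterior as the posterior-weighted average of those anchors, so frames dominated by one class land near that class's anchor.

```python
import math

def softmax(logits):
    """Convert raw MLP outputs (logits) to a posterior distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def posterior_to_2d(posterior, class_coords):
    """Map an N-dim posterior to 2-D as the posterior-weighted
    average of per-class anchor coordinates (illustrative only)."""
    x = sum(p * c[0] for p, c in zip(posterior, class_coords))
    y = sum(p * c[1] for p, c in zip(posterior, class_coords))
    return (x, y)

# Illustrative setup: 4 classes anchored at the unit-square corners.
coords = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
post = softmax([2.0, 0.1, 0.1, 0.1])   # class 0 dominates
point = posterior_to_2d(post, coords)   # lands near (0, 0)
```

Any actual transform would need the 2-D coordinates (or a more general mapping) to be chosen so that class discriminability is preserved, which is exactly the design goal the abstract states.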

Full Paper

Bibliographic reference: Zelinka, Jan / Trmal, Jan / Müller, Luděk (2010): "Low-dimensional space transforms of posteriors in speech recognition", in INTERSPEECH-2010, 1193-1196.