Sixth International Conference on Spoken Language Processing
(ICSLP 2000)

Beijing, China
October 16-20, 2000

An Equivalent-Class Based MMI Learning Method for MGCPM

Chunhua Luo, Fang Zheng, Mingxing Xu

Center of Speech Technology, State Key Laboratory of Intelligent Technology and Systems, Department of Computer Science & Technology, Tsinghua University, Beijing, China

In this paper, we present an Equivalent-Class Based Maximum Mutual Information (ECB-MMI) learning method for our previously proposed Mixed Gaussian Continuous Probability Model (MGCPM). As in the case of HMMs, the objective function defined for MGCPM training takes into account the mutual information among different models, so as to maximally separate the Speech Recognition Units (SRUs) in model space. Experimental results show that, for MGCPM, MMI training improves the recognition rate by 5% compared with the traditional Maximum Likelihood Estimation (MLE) training method. Because the computational cost of the MMI algorithm is very high, we propose an N-Best strategy to find the corresponding equivalent class (EC) for each utterance in order to reduce the complexity. Our experimental results show that this criterion works very well.
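
The exact ECB-MMI formulation appears in the full paper; as a rough illustration only, the Python sketch below shows an MMI-style objective whose denominator is restricted to an N-best equivalent class of competing models, assuming per-model log-likelihoods log P(O | model_j) are already available. The function names (nbest_equivalent_class, mmi_objective), the uniform-prior assumption, and the random example data are illustrative and not taken from the paper.

    import numpy as np

    # Illustrative sketch (not the authors' exact formulation): an MMI-style
    # objective for one training utterance, with the denominator limited to
    # an N-best "equivalent class" (EC) of competing models.

    def nbest_equivalent_class(log_likelihoods, correct_idx, n):
        # Keep the N most competitive models and make sure the correct
        # model itself is included in the EC.
        order = np.argsort(log_likelihoods)[::-1]
        ec = list(order[:n])
        if correct_idx not in ec:
            ec.append(correct_idx)
        return ec

    def mmi_objective(log_likelihoods, correct_idx, ec, log_priors=None):
        # log P(O | correct) + log P(correct)
        #   - log sum_{j in EC} P(O | model_j) P(model_j)
        ll = np.asarray(log_likelihoods, dtype=float)
        if log_priors is None:
            log_priors = np.full(len(ll), -np.log(len(ll)))  # uniform priors
        terms = ll[ec] + log_priors[ec]
        m = terms.max()
        log_den = m + np.log(np.exp(terms - m).sum())  # log-sum-exp
        return ll[correct_idx] + log_priors[correct_idx] - log_den

    # Example: 10 candidate models, model 3 is the correct SRU, EC of size 5.
    ll = np.random.randn(10) * 5.0
    ec = nbest_equivalent_class(ll, correct_idx=3, n=5)
    print(mmi_objective(ll, correct_idx=3, ec=ec))

Restricting the denominator sum to the N-best equivalent class, rather than to all competing models, is what reduces the computation relative to a full MMI evaluation.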

Keywords: Equivalent-Class Based MMI, Mixed Gaussian Continuous Probability Model, Speech Recognition Unit, MLE, Equivalent Class


Bibliographic reference. Luo, Chunhua / Zheng, Fang / Xu, Mingxing (2000): "An equivalent-class based MMI learning method for MGCPM", in Proc. ICSLP 2000, Beijing, China, vol. 4, pp. 141-144.