INTERSPEECH 2004 - ICSLP
The introduction of the Aurora 4 tasks provides a standard database and methodology for comparing the effectiveness of different robust algorithms on LVCSR. One important issue in the Aurora 4 tasks is the computation time involved in evaluating the different test conditions. In this paper we show that by employing HTK as both the recognition front end and back end on the Aurora 4 tasks, together with cepstral mean subtraction, a 14% relative improvement is achieved on the baseline clean-training tasks, with an 82.5% reduction in training time and a 40% reduction in decoding time. Furthermore, we found that optimizing the model complexity can improve both computation time and recognition accuracy. Accuracy can be further improved by unsupervised MLLR adaptation on one or multiple sentences. The adaptation results show that most of the gain from adaptation comes from adapting to the environment rather than to the speaker. With the use of adaptation, the error rate is reduced from the baseline result of 69.6% to 40%.
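The cepstral mean subtraction mentioned above can be illustrated with a minimal sketch (not the paper's HTK implementation): the per-utterance mean of each cepstral coefficient is subtracted, which removes stationary convolutional channel effects. The array shapes and function name here are illustrative assumptions.

```python
import numpy as np

def cepstral_mean_subtraction(cepstra):
    """Per-utterance cepstral mean subtraction (CMS).

    cepstra: (num_frames, num_coeffs) array of cepstral features.
    Subtracting the utterance-level mean of each coefficient
    removes stationary convolutional (channel) distortion.
    """
    return cepstra - cepstra.mean(axis=0, keepdims=True)

# Example: after CMS, every coefficient track has (near-)zero mean.
features = np.random.default_rng(0).normal(5.0, 2.0, size=(300, 13))
normalized = cepstral_mean_subtraction(features)
print(np.allclose(normalized.mean(axis=0), 0.0))  # True
```

In HTK this corresponds to enabling the `_Z` qualifier on the feature kind (e.g. `MFCC_Z`) in the configuration, which applies the same per-utterance mean normalization.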
Bibliographic reference. Yeung, Siu-Kei Au / Siu, Man-Hung (2004): "Improved performance of Aurora 4 using HTK and unsupervised MLLR adaptation", In INTERSPEECH-2004, 161-164.