Based on the concept of entropy, we propose a new approach for analyzing the quality of the features used in speech recognition. We regard the relation between hidden Markov model (HMM) states and the corresponding frame-based feature vectors as a coding problem, in which the states are sent through a noisy recognition channel and received as feature vectors. Using the relation between Shannon's conditional entropy and the state-level error rate, we estimate how much information the feature vectors carry for recognizing the states. The conditional entropy thus serves as a measure of feature quality. Finally, we show how noise reduces the information contained in the features.
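The core idea, scoring features by the conditional entropy H(S|X) of the states S given the observed features X, can be illustrated on toy discrete data. A lower H(S|X) means the features leave less uncertainty about the state, which (via Fano's inequality) bounds the achievable state error rate from below. The sketch below is a minimal plug-in estimator over paired state/feature observations; the quantized features, the toy data, and the helper name `conditional_entropy` are illustrative assumptions, not the paper's actual estimation procedure.

```python
import math
from collections import Counter

def conditional_entropy(states, features):
    """Plug-in estimate of H(S|X) in bits from paired discrete samples.

    H(S|X) = -sum_{s,x} p(s,x) * log2 p(s|x),
    with probabilities estimated by relative frequencies.
    """
    n = len(states)
    joint = Counter(zip(states, features))   # counts of (s, x) pairs
    marg_x = Counter(features)               # counts of x alone
    h = 0.0
    for (s, x), c in joint.items():
        p_sx = c / n                         # empirical p(s, x)
        p_s_given_x = c / marg_x[x]          # empirical p(s | x)
        h -= p_sx * math.log2(p_s_given_x)
    return h

# Clean "channel": features identify the state perfectly -> H(S|X) = 0 bits.
print(conditional_entropy([0, 1, 0, 1], [0, 1, 0, 1]))          # → 0.0

# Noisy "channel": some features are flipped, so residual
# uncertainty about the state remains -> H(S|X) > 0 bits.
print(conditional_entropy([0, 0, 0, 0, 1, 1, 1, 1],
                          [0, 0, 0, 1, 1, 1, 1, 0]) > 0.0)      # → True
```

In the paper's setting, X would be the continuous frame-based feature vectors rather than discrete symbols, so the estimate would require quantization or a parametric density model; the toy example only shows how added noise increases H(S|X) and hence degrades the features.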
Bibliographic reference: Setiawan, Panji / Höge, Harald / Fingscheidt, Tim (2009): "Entropy-based feature analysis for speech recognition", in Proc. INTERSPEECH 2009, pp. 2959-2962.