p so
as to achieve minimum Bayes error (or probability of
misclassification). Two avenues will be explored: the
first is to maximize the Θ-average divergence between
the class densities and the second is to minimize the
union Bhattacharyya bound in the range of Θ. While
both approaches yield similar performance in practice,
they outperform standard LDA features and show a
10% relative improvement in the word error rate over
state-of-the-art cepstral features on a large vocabulary
telephony speech recognition task.
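The second criterion mentioned above, the union Bhattacharyya bound, can be made concrete with a small sketch. For Gaussian class densities, the pairwise Bhattacharyya distance has a closed form, and the Bayes error is bounded by a sum of pairwise terms weighted by the class priors. The sketch below (function names and the Gaussian assumption are illustrative, not taken from the paper) computes that bound in the projected feature space; minimizing it with respect to the projection Θ is the idea the abstract describes:

```python
import numpy as np

def bhattacharyya_distance(mu1, cov1, mu2, cov2):
    """Bhattacharyya distance between two Gaussian densities
    N(mu1, cov1) and N(mu2, cov2)."""
    cov = 0.5 * (cov1 + cov2)
    diff = mu1 - mu2
    # Mahalanobis-type term for the mean separation
    term1 = 0.125 * diff @ np.linalg.solve(cov, diff)
    # Log-determinant term for the covariance mismatch
    _, logdet = np.linalg.slogdet(cov)
    _, logdet1 = np.linalg.slogdet(cov1)
    _, logdet2 = np.linalg.slogdet(cov2)
    term2 = 0.5 * (logdet - 0.5 * (logdet1 + logdet2))
    return term1 + term2

def union_bhattacharyya_bound(priors, means, covs):
    """Union bound on the Bayes error:
    sum over class pairs (i, j) of sqrt(p_i * p_j) * exp(-B_ij)."""
    k = len(priors)
    bound = 0.0
    for i in range(k):
        for j in range(i + 1, k):
            b = bhattacharyya_distance(means[i], covs[i],
                                       means[j], covs[j])
            bound += np.sqrt(priors[i] * priors[j]) * np.exp(-b)
    return bound
```

In a feature-selection setting one would first project the class statistics through a candidate Θ and then evaluate this bound; a smaller value indicates better class separability under that projection. Note that `numpy` is the only dependency, and the Gaussian form of the densities is an assumption of this sketch.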
Bibliographic reference.
Saon, George / Padmanabhan, Mukund (2000):
"Minimum Bayes error feature selection".
In: ICSLP-2000, vol. 3, pp. 75-78.