Listeners outperform ASR systems in every speech recognition task; however, it is not clear where this human advantage originates. This paper investigates the role of the acoustic feature representation. We test four acoustic representations (MFCCs, PLPs, Mel filterbanks, and rate maps), with and without "pitch" information, using the same SVM back-end, and compare the results with listener results at the level of articulatory feature classification. While no acoustic feature representation reached the level of human performance, both MFCCs and rate maps achieved good scores, with rate maps coming close to human performance on the classification of voicing. Comparing results on the articulatory features that were hardest to classify showed similarities between the humans and the SVMs: for example, "dental" was by far the least well identified by both. Overall, adding pitch information seemed to hamper classification performance.
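To make the pipeline concrete: each acoustic front-end produces frame-level features that a shared SVM back-end maps to articulatory-feature labels such as voicing. The sketch below is a rough, hypothetical illustration of that kind of setup, not the authors' implementation; it assumes librosa for MFCC extraction and scikit-learn for the SVM, and the file names, labels, and per-token feature summarisation are invented for the example.

# Minimal sketch (assumed tools, not the paper's code): MFCC front-end plus an
# SVM back-end classifying one binary articulatory feature (voiced vs. voiceless).
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def mfcc_segment_features(wav_path, n_mfcc=13):
    """Summarise one VCV token as a fixed-length vector: per-coefficient
    mean and standard deviation of its MFCCs over frames."""
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # shape: (n_mfcc, n_frames)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Hypothetical training tokens and 0/1 voicing labels.
train_files, train_voicing = ["aba.wav", "apa.wav"], [1, 0]
X = np.vstack([mfcc_segment_features(f) for f in train_files])

# SVM back-end; the same classifier could be trained on PLPs, Mel filterbanks,
# or rate maps by swapping the feature-extraction step.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, train_voicing)

# Predict the voicing value of a new token.
print(clf.predict(mfcc_segment_features("ada.wav").reshape(1, -1)))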
Cite as: Scharenborg, O., Cooke, M. (2008) Comparing human and machine recognition performance on a VCV corpus. Proc. ISCA ITRW on Speech Analysis and Processing for Knowledge Discovery, paper 003
@inproceedings{scharenborg08_spkd,
  author={Odette Scharenborg and Martin Cooke},
  title={{Comparing human and machine recognition performance on a VCV corpus}},
  year={2008},
  booktitle={Proc. ISCA ITRW on Speech Analysis and Processing for Knowledge Discovery},
  pages={paper 003}
}