Standard speaker adaptation algorithms perform poorly on dysarthric speech because of the limited phonemic repertoire of dysarthric speakers. In a previous paper, we proposed the use of "metamodels" to correct recognition errors in dysarthric speech. Here, we report an improved technique that uses a cascade of Weighted Finite-State Transducers (WFSTs) at the confusion-matrix, word, and language levels. This approach outperforms both standard MLLR adaptation and metamodels.
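The cascade idea can be illustrated with a toy sketch (not the authors' implementation): each level is a weighted transducer, and composing two levels chains their mappings while adding weights, as in the tropical semiring where weights are negative log probabilities. The symbols and weights below are hypothetical.

```python
def compose(t1, t2):
    """Compose single-state WFSTs given as {(input, output): weight} dicts.
    An (a, b) arc in t1 chains with a (b, c) arc in t2 to give an (a, c)
    arc, keeping the minimum (tropical semiring) total weight per pair."""
    result = {}
    for (a, b), w1 in t1.items():
        for (b2, c), w2 in t2.items():
            if b == b2:
                w = w1 + w2
                if (a, c) not in result or w < result[(a, c)]:
                    result[(a, c)] = w
    return result

# Hypothetical confusion-matrix level: recognized phone -> intended phone,
# weighted by how often the dysarthric speaker's /b/ is heard as /p/.
confusion = {("b", "p"): 0.25, ("b", "b"): 1.5}
# Hypothetical word level: intended phone -> candidate word.
words = {("p", "pat"): 0.5, ("b", "bat"): 0.5}

cascade = compose(confusion, words)
print(cascade)  # → {('b', 'pat'): 0.75, ('b', 'bat'): 2.0}
```

A real system would use a full WFST toolkit with multi-state machines and a language-model transducer composed on top; this sketch only shows the composition step that links the levels.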
Bibliographic reference. Morales, Omar Caballero / Cox, Stephen (2008): "Application of weighted finite-state transducers to improve recognition accuracy for dysarthric speech", in INTERSPEECH 2008, pp. 1761-1764.