There is now considerable evidence that fine-grained acoustic-phonetic detail in the speech signal helps listeners to segment that signal into syllables and words. In this paper, we compare two computational models of word recognition on their ability to capture and use this fine-phonetic detail during speech recognition. One model, SpeM, is phoneme-based, whereas the other, the newly developed Fine-Tracker, is based on articulatory features. The simulations modelled the ability of listeners to distinguish short words (e.g., 'ham') from the longer words in which they are embedded (e.g., 'hamster'). The simulations with Fine-Tracker showed that it was, like human listeners, able to distinguish short words from the longer words in which they are embedded. This suggests that it is possible to extract this fine-phonetic detail from the speech signal and use it during word recognition.
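To make the embedded-word problem concrete, the following is a minimal illustrative sketch (not the paper's method, and the toy lexicon and function name are assumptions) of how short words such as 'ham' can be onset-embedded in longer words such as 'hamster', which is the ambiguity the models must resolve:

```python
# Illustrative sketch only: enumerate short words that are
# onset-embedded in longer words of a toy lexicon.
# The lexicon and function name are hypothetical examples.
lexicon = ["ham", "hamster", "cap", "captain", "star"]

def onset_embeddings(lexicon):
    """Return (short, long) pairs where `short` is a proper prefix of `long`."""
    pairs = []
    for long_word in lexicon:
        for short_word in lexicon:
            if short_word != long_word and long_word.startswith(short_word):
                pairs.append((short_word, long_word))
    return pairs

# For the toy lexicon above this yields
# [('ham', 'hamster'), ('cap', 'captain')]
print(onset_embeddings(lexicon))
```

A purely phonemic transcription cannot disambiguate such pairs before the longer word's offset, which is why fine-phonetic (e.g., durational) cues matter.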
Bibliographic reference. Scharenborg, Odette (2008): "Modelling fine-phonetic detail in a computational model of word recognition", In INTERSPEECH-2008, 1473-1476.