Using state-of-the-art multivariate machine learning approaches, researchers are now able to classify brain states from brain data. One application of this technique is decoding the phonemes a speaker is producing in order to reconstruct produced words. However, this approach has been only moderately successful. Decoding articulatory features from brain data may be more feasible. As a first step towards this approach, we propose a word decoding method based on the detection of articulatory features: words are identified from a sequence of articulatory class labels. In essence, we investigated how lexical ambiguity is reduced as a function of the confusion between articulatory features, and as a function of the confusion between phonemes after feature decoding. We created a number of models based on different combinations of articulatory features and tested word identification on an English corpus of approximately 70,000 words. The most promising model used only 11 classes and identified 71% of words correctly. The results confirm that it is possible to decode words based on articulatory features, which opens the way for multivariate fMRI speech decoding.
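The identification scheme described above can be sketched as follows. This is a minimal illustration, not the paper's method: the phoneme-to-class mapping, the toy lexicon, and the function names are all hypothetical, and the paper's actual 11-class model and 70,000-word corpus are not reproduced here. The sketch collapses each word's phoneme sequence to articulatory class labels and measures what fraction of words remain unambiguous.

```python
from collections import defaultdict

# Hypothetical mapping from phonemes to articulatory feature classes
# (illustrative only; not the paper's class inventory).
PHONEME_TO_CLASS = {
    "p": "bilabial-stop", "b": "bilabial-stop",
    "t": "alveolar-stop", "d": "alveolar-stop",
    "s": "alveolar-fricative", "z": "alveolar-fricative",
    "i": "high-vowel", "u": "high-vowel",
    "a": "low-vowel",
    "m": "nasal", "n": "nasal",
}

# Toy lexicon: word -> phoneme sequence (illustrative only).
LEXICON = {
    "bat": ["b", "a", "t"],
    "pat": ["p", "a", "t"],
    "sat": ["s", "a", "t"],
    "man": ["m", "a", "n"],
    "nap": ["n", "a", "p"],
}

def class_sequence(phonemes):
    """Collapse a phoneme sequence to its articulatory class labels."""
    return tuple(PHONEME_TO_CLASS[p] for p in phonemes)

def identification_rate(lexicon):
    """Fraction of words whose class sequence is unique in the lexicon."""
    groups = defaultdict(list)
    for word, phonemes in lexicon.items():
        groups[class_sequence(phonemes)].append(word)
    unique = sum(len(ws) for ws in groups.values() if len(ws) == 1)
    return unique / len(lexicon)
```

In this toy lexicon, "bat" and "pat" collapse to the same class sequence (both start with a bilabial stop) and so remain ambiguous, while the other three words are uniquely identifiable; a coarser class mapping would merge more words and lower the identification rate, which is the trade-off the abstract investigates.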
Bibliographic reference. Grootswagers, Tijl / Dijkstra, Karen / Bosch, Louis ten / Brandmeyer, Alex / Sadakata, Makiko (2013): "Word identification using phonetic features: towards a method to support multivariate fMRI speech decoding", In INTERSPEECH-2013, 3201-3205.