ISCA Workshop on
Statistical And Perceptual Audition

Makuhari, Japan
September 25, 2010

Machine Learning for Learning how the Brain Recognizes Speech and Language

Janet M. Baker (1), Alex M. Chan (2,3), Ksenija Marinkovic (4), Eric Halgren (4), Sydney Cash (2)

(1) Saras Institute, Newton, MA, USA
(2) Department of Neurology, Massachusetts General Hospital, Boston, MA, USA
(3) Harvard-MIT Division of Health, Science, and Technology, Medical Engineering and Medical Physics, Cambridge, MA, USA
(4) Department of Radiology, University of California, San Diego, La Jolla, CA, USA

Over the past several decades, automatic speech recognition has made great progress through the application of statistics and machine learning, combined with perceptual and structural knowledge about speech and language and their variability. This paper reviews recent work that applies some of these approaches to cortical processing of speech and language in the human brain, in order to better understand how it functions. Specific experiments demonstrate the feasibility of discriminating small sets of words (83% accuracy on 10 spoken words) and semantic categories (76% accuracy on 2 categories). This speech and language information is broadly distributed both spatially and temporally across the brain.
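The abstract does not include code, but the index terms name support vector machines as the classification method. As a rough, self-contained illustration of that kind of decoding, the sketch below trains a minimal linear SVM (Pegasos-style sub-gradient descent on the hinge loss) to separate two classes of synthetic features standing in for per-trial neural measurements. All data, dimensions, and parameters here are hypothetical and not drawn from the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for per-trial neural features (e.g. sensor amplitudes
# at selected time points); two "semantic categories", entirely made up.
n_per_class, n_features = 100, 40
X0 = rng.normal(0.0, 1.0, (n_per_class, n_features))   # category A trials
X1 = rng.normal(0.6, 1.0, (n_per_class, n_features))   # category B trials
X = np.vstack([X0, X1])
y = np.hstack([-np.ones(n_per_class), np.ones(n_per_class)])

def train_linear_svm(X, y, lam=0.01, epochs=50):
    """Pegasos-style stochastic sub-gradient descent for a linear SVM."""
    w = np.zeros(X.shape[1])
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            t += 1
            eta = 1.0 / (lam * t)          # decaying learning rate
            if y[i] * (w @ X[i]) < 1:      # hinge-loss margin violated
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
            else:                          # only the regularizer acts
                w = (1 - eta * lam) * w
    return w

# Random train/test split and held-out accuracy.
idx = rng.permutation(len(y))
tr, te = idx[:150], idx[150:]
w = train_linear_svm(X[tr], y[tr])
acc = np.mean(np.sign(X[te] @ w) == y[te])
print(f"held-out accuracy: {acc:.2f}")
```

In practice, decoding studies of this kind would replace the synthetic arrays with per-trial feature vectors extracted from MEG or intracranial recordings, and would cross-validate the split rather than use a single random partition.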

Index Terms: speech recognition, semantics, machine learning, brain, magnetoencephalography, electroencephalography, support vector machines


Bibliographic reference. Baker, Janet M. / Chan, Alex M. / Marinkovic, Ksenija / Halgren, Eric / Cash, Sydney (2010): "Machine learning for learning how the brain recognizes speech and language", in SAPA-2010, 49-54.