ISCA Archive Interspeech 2021

Modeling Sensorimotor Adaptation in Speech Through Alterations to Forward and Inverse Models

Taijing Chen, Adam Lammert, Benjamin Parrell

When speakers are exposed to auditory feedback perturbations of a particular vowel, they not only adapt their productions of that vowel but also transfer this change to other, untrained, vowels. However, current models of speech sensorimotor adaptation, which rely on changes in the feedforward control of specific speech units, are unable to account for this type of generalization. Here, we developed a neural-network-based model to simulate speech sensorimotor adaptation and assessed whether updates to internal control models can account for observed patterns of generalization. Based on a dataset generated from the Maeda plant, we trained two independent neural networks: 1) an inverse model, which generates motor commands for desired acoustic outcomes, and 2) a forward model, which maps motor commands to acoustic outcomes (prediction). When vowel formant perturbations were applied, both forward and inverse models were updated when there was a mismatch between predicted and perceived output. Our results replicate behavioral experiments: the model altered its production to counteract the perturbation, and showed gradient transfer of this learning dependent on the acoustic distance between training and test vowels. These results suggest that updating paired forward and inverse models provides a plausible account for sensorimotor adaptation in speech.
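The following sketch illustrates the kind of paired forward/inverse update loop described in the abstract; it is not the authors' code. Network sizes, learning rates, the two-dimensional (F1/F2) acoustic space, and the linear stand-in "plant" are all assumptions, since the study itself uses acoustics generated from the Maeda articulatory model.

```python
# Minimal sketch (assumed details, not the published implementation):
# paired forward and inverse models updated from a mismatch between
# predicted and perceived acoustics under a feedback perturbation.
import torch
import torch.nn as nn

MOTOR_DIM, ACOUSTIC_DIM = 7, 2               # assumed: articulatory params, F1/F2

# Stand-in plant: a fixed random linear map from motor commands to formants.
# The real study uses the Maeda articulatory synthesizer here.
plant_W = torch.randn(ACOUSTIC_DIM, MOTOR_DIM)
def plant(motor):
    return motor @ plant_W.T

def mlp(inp, out, hidden=32):
    return nn.Sequential(nn.Linear(inp, hidden), nn.Tanh(), nn.Linear(hidden, out))

forward_model = mlp(MOTOR_DIM, ACOUSTIC_DIM)  # motor command -> predicted acoustics
inverse_model = mlp(ACOUSTIC_DIM, MOTOR_DIM)  # target acoustics -> motor command

opt_f = torch.optim.SGD(forward_model.parameters(), lr=1e-2)
opt_i = torch.optim.SGD(inverse_model.parameters(), lr=1e-2)

target = torch.randn(1, ACOUSTIC_DIM)         # desired vowel formants (arbitrary)
perturbation = torch.tensor([[0.3, 0.0]])     # e.g., shift F1 in the feedback only

for trial in range(200):
    motor = inverse_model(target)             # plan a production for the target
    predicted = forward_model(motor)          # internal prediction of its acoustics
    # Auditory feedback is the plant output plus the perturbation; no gradient
    # flows through the external plant, hence the detach.
    perceived = plant(motor).detach() + perturbation

    # Forward-model update: bring predictions in line with perceived feedback.
    loss_f = ((predicted - perceived) ** 2).mean()
    opt_f.zero_grad()
    loss_f.backward()
    opt_f.step()

    # Inverse-model update: make the (re-)predicted output match the target,
    # back-propagating through the forward model. Gradients that reach the
    # forward model's weights are discarded by opt_f.zero_grad() next trial.
    predicted_after = forward_model(inverse_model(target))
    loss_i = ((predicted_after - target) ** 2).mean()
    opt_i.zero_grad()
    loss_i.backward()
    opt_i.step()
```

Under this scheme, the inverse model gradually shifts its motor commands to counteract the perturbation, which is the adaptation pattern the paper reports; generalization to untrained vowels would then follow from the shared network weights rather than from unit-specific feedforward adjustments.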


doi: 10.21437/Interspeech.2021-1746

Cite as: Chen, T., Lammert, A., Parrell, B. (2021) Modeling Sensorimotor Adaptation in Speech Through Alterations to Forward and Inverse Models. Proc. Interspeech 2021, 3201-3205, doi: 10.21437/Interspeech.2021-1746

@inproceedings{chen21m_interspeech,
  author={Taijing Chen and Adam Lammert and Benjamin Parrell},
  title={{Modeling Sensorimotor Adaptation in Speech Through Alterations to Forward and Inverse Models}},
  year=2021,
  booktitle={Proc. Interspeech 2021},
  pages={3201--3205},
  doi={10.21437/Interspeech.2021-1746}
}