Speaking involves producing sequences of linguistic units under different control strategies. For instance, a given phoneme can be achieved with different acoustic properties, and a sequence of phonemes can be performed at different speech rates and with different prosodies. How does the Central Nervous System select a specific control strategy among all those available? In a previously published article, we proposed a Bayesian model that addressed this question with respect to the multiplicity of acoustic realizations of a phoneme sequence. One of the strengths of Bayesian modeling is that it is well suited to combining multiple constraints. In the present paper we illustrate this feature by defining an extension of our previous model that includes force constraints related to the level of effort in the production of phoneme sequences, as might be the case in clear versus casual speech. The integration of this additional constraint is used to model the control of articulation clarity. The pertinence of the results is illustrated by controlling a biomechanical model of the vocal tract for speech production.
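As a minimal illustration of how Bayesian modeling combines multiple constraints (not the paper's actual model, whose variables and distributions are defined in the article itself), consider fusing two Gaussian constraints on a scalar control variable, e.g. an acoustic-target constraint and an effort constraint. In a Bayesian formulation the constraints multiply, and for Gaussians the product is again Gaussian, with precisions adding; all names and values below are hypothetical:

```python
def fuse_gaussians(m1, s1, m2, s2):
    """Mean and std of the normalized product of two 1-D Gaussians.

    Each Gaussian encodes one constraint; their product encodes the
    joint constraint, with precision-weighted compromise as the mean.
    """
    p1, p2 = 1.0 / s1**2, 1.0 / s2**2   # precisions add under fusion
    p = p1 + p2
    mean = (p1 * m1 + p2 * m2) / p      # precision-weighted mean
    return mean, p ** -0.5

# Hypothetical example: an acoustic constraint preferring a large
# articulatory displacement (mean 1.0) fused with an effort constraint
# pulling toward zero. Tightening the effort constraint (smaller s2)
# shifts the compromise toward less effort, as in casual speech.
m, s = fuse_gaussians(1.0, 0.2, 0.0, 0.5)
```

Adding a further constraint simply means multiplying in another term, which is the structural convenience the abstract points to.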

DOI: `10.21437/Interspeech.2016-441`

Cite as

Patri, J., Perrier, P., Diard, J. (2016) Bayesian Modeling in Speech Motor Control: A Principled Structure for the Integration of Various Constraints. Proc. Interspeech 2016, 3588-3592.

Bibtex

@inproceedings{Patri+2016,
  author    = {Jean-François Patri and Pascal Perrier and Julien Diard},
  title     = {Bayesian Modeling in Speech Motor Control: A Principled Structure for the Integration of Various Constraints},
  year      = {2016},
  booktitle = {Interspeech 2016},
  doi       = {10.21437/Interspeech.2016-441},
  url       = {http://dx.doi.org/10.21437/Interspeech.2016-441},
  pages     = {3588--3592}
}