Nasalization confounds the acoustic vowel space both by crowding acoustic cues and by increasing the bandwidth of F1, which makes vowel identification, and hence classification by machine learning algorithms, more complex. We present results from a set of machine learning (ML) algorithms trained on 15 acoustic features from the spectral and temporal domains to classify contextually nasalized vowels [1, 2]. The Degree of Articulatory Constraint (DAC) model predicts that phonetic segments produced with less tongue-dorsum involvement are coarticulatorily more sensitive [3, 4]. We test this prediction by using ML algorithms to classify vowels in the context of labials and dentals. Labials /m/ show greater coarticulatory sensitivity than dentals /n/; based on the DAC predictions, labials should therefore exert less coarticulatory influence on neighboring vowels, making vowels adjacent to labials easier to classify than vowels adjacent to dentals. The ML algorithms align with this prediction, showing 12% better accuracy for identification of vowels in the context of labials than in the context of dentals. The Nasal-Vowel (NV) and Vowel-Nasal (VN) contexts also allow us to test the effects of anticipatory and carryover nasal coarticulation on vowel classification.
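The classification setup described above can be sketched in miniature. The following is an illustrative, self-contained toy, not the paper's actual pipeline: the 15-dimensional feature vectors are synthetic Gaussian clouds (the real spectral/temporal measurements are not reproduced here), the classifier is a simple nearest-centroid model rather than whichever ML algorithms the authors used, and the `spread` parameter is a stand-in assumption for how strongly coarticulatory influence smears vowel cues in a given consonantal context.

```python
import random

random.seed(0)

N_FEATURES = 15  # the paper trains on 15 spectral and temporal features


def make_samples(centroid, n, spread):
    """Generate synthetic 15-dim feature vectors scattered around a class centroid."""
    return [[c + random.gauss(0, spread) for c in centroid] for _ in range(n)]


def fit_centroids(X, y):
    """Nearest-centroid training: one mean feature vector per vowel label."""
    sums, counts = {}, {}
    for feats, label in zip(X, y):
        acc = sums.setdefault(label, [0.0] * len(feats))
        for i, v in enumerate(feats):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: [v / counts[lab] for v in s] for lab, s in sums.items()}


def predict(centroids, feats):
    """Assign the vowel label whose centroid is nearest in Euclidean distance."""
    return min(
        centroids,
        key=lambda lab: sum((a - b) ** 2 for a, b in zip(feats, centroids[lab])),
    )


def context_accuracy(spread):
    """Train and test on two hypothetical vowel classes at a given noise level."""
    c_i, c_a = [1.0] * N_FEATURES, [-1.0] * N_FEATURES  # toy vowel centroids
    train = make_samples(c_i, 50, spread) + make_samples(c_a, 50, spread)
    labels = ["i"] * 50 + ["a"] * 50
    centroids = fit_centroids(train, labels)
    test = make_samples(c_i, 50, spread) + make_samples(c_a, 50, spread)
    hits = sum(predict(centroids, f) == g for f, g in zip(test, labels))
    return hits / len(labels)


# Per the DAC-based prediction: labial contexts perturb vowel cues less
# (modeled here as smaller spread), so classification should be easier.
print("labial-context accuracy:", context_accuracy(spread=1.0))
print("dental-context accuracy:", context_accuracy(spread=4.0))
```

In this sketch, raising `spread` plays the role of stronger coarticulatory smearing of the vowel's acoustic cues, and the toy classifier's accuracy drops accordingly, mirroring the direction of the reported labial-versus-dental asymmetry.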
Bibliographic reference. Dutta, Indranil / Pandey, Ayushi (2015): "Acoustics of articulatory constraints: vowel classification and nasalization", In INTERSPEECH-2015, 1700-1704.