Interspeech'2005 - Eurospeech

Lisbon, Portugal
September 4-8, 2005

Speaker Verification via Articulatory Feature-Based Conditional Pronunciation Modeling with Vowel and Consonant Mixture Models

Ka-Yee Leung (1), Man-Wai Mak (1), Manhung Siu (2), Sun-Yuan Kung (3)

(1) Hong Kong Polytechnic University, China; (2) Hong Kong University of Science & Technology, China; (3) Princeton University, USA

Articulatory feature-based conditional pronunciation modeling (AFCPM) aims to capture the pronunciation characteristics of speakers by modeling the linkage between the states of articulation during speech production and the actual phones produced by a speaker. Previous AFCPM systems use one discrete density function per phoneme to model the pronunciation characteristics of speakers. This paper proposes using a mixture of discrete density functions for AFCPM. In particular, the pronunciation characteristics of each phoneme are modeled by two density functions: one describing the articulatory features more relevant to vowels and the other those more relevant to consonants. Verification scores are the weighted sum of the outputs of the two models. To enhance the resolution of the pronunciation models, four articulatory properties (front-back, lip-rounding, place of articulation, and manner of articulation) are used for pronunciation modeling. The proposed AFCPM is applied to a speaker verification task. Results show that using four articulatory features achieves a lower error rate than using only two features (manner and place of articulation). It was also found that dividing the articulatory properties into two groups is an effective means of alleviating the data-sparseness problem encountered in the training phase of AFCPM systems.
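The weighted combination of the two discrete density functions described above might be sketched as follows. This is a minimal illustration under assumed data structures: the dictionary-based probability tables, the feature-group labels, and all numeric values are hypothetical, not the authors' implementation.

```python
def mixture_score(vowel_model, consonant_model, features, weights=(0.5, 0.5)):
    """Score one phoneme's articulatory-feature observation.

    vowel_model / consonant_model: discrete densities mapping a tuple of
    articulatory-feature labels to a probability (hypothetical layout).
    features: the observed labels for the two feature groups, e.g.
        {"vowel": ("front", "rounded"),          # front-back, lip-rounding
         "consonant": ("bilabial", "stop")}      # place, manner
    weights: mixture weights for the vowel- and consonant-oriented models.
    """
    w_v, w_c = weights
    # Floor unseen label combinations to avoid zero probabilities,
    # a simple stand-in for proper smoothing.
    p_v = vowel_model.get(features["vowel"], 1e-6)
    p_c = consonant_model.get(features["consonant"], 1e-6)
    # Verification score: weighted sum of the two density outputs.
    return w_v * p_v + w_c * p_c
```

In a full system this per-phoneme score would be accumulated over an utterance and compared against a background model; the sketch only shows the two-density combination itself.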


Bibliographic reference. Leung, Ka-Yee / Mak, Man-Wai / Siu, Manhung / Kung, Sun-Yuan (2005): "Speaker verification via articulatory feature-based conditional pronunciation modeling with vowel and consonant mixture models", in INTERSPEECH-2005, 3089-3092.