Automatic detection of different oral-nasal configurations during speech is useful for understanding normal nasalization and for assessing certain speech disorders. We propose an algorithm that extracts nasalization features from dual-channel acoustic signals acquired by a simple two-microphone setup. The feature derives from a dual-channel acoustic model and an associated analysis method. We test this feature in both speaker-dependent and speaker-independent tasks, comparing it with the conventional single-channel MFCC feature; the proposed feature consistently outperforms the MFCC baseline in both tasks.
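The abstract does not give the feature's derivation, which appears in the full paper. Purely as an illustrative sketch of the dual-channel idea (two simultaneous microphone signals, e.g. one near the mouth and one near the nose), the following hypothetical code frames both channels and takes the per-frame difference of their log power spectra as a crude inter-channel cue; this is an assumption for illustration, not the paper's actual feature. All function names here (`frame_signal`, `dual_channel_feature`, etc.) are invented for this sketch.

```python
import numpy as np

def frame_signal(x, frame_len=512, hop=256):
    """Slice a 1-D signal into overlapping frames."""
    n_frames = 1 + max(0, (len(x) - frame_len) // hop)
    return np.stack([x[i * hop : i * hop + frame_len] for i in range(n_frames)])

def log_power_spectrum(frames):
    """Hann-windowed log power spectrum for each frame."""
    window = np.hanning(frames.shape[1])
    power = np.abs(np.fft.rfft(frames * window, axis=1)) ** 2
    return np.log(power + 1e-10)  # floor avoids log(0)

def dual_channel_feature(oral, nasal, frame_len=512, hop=256):
    """Hypothetical dual-channel cue: per-frame difference between the
    nasal-channel and oral-channel log power spectra. This stands in for
    the paper's model-based feature, which is not specified in the abstract."""
    o = log_power_spectrum(frame_signal(oral, frame_len, hop))
    n = log_power_spectrum(frame_signal(nasal, frame_len, hop))
    m = min(len(o), len(n))  # align frame counts of the two channels
    return n[:m] - o[:m]
```

A feature matrix like this (frames x frequency bins) could then feed the same classifier used with single-channel MFCCs, which is how the comparison described in the abstract would be set up.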
Bibliographic reference. Niu, Xiaochuan / Santen, Jan P. H. van (2007): "Dual-channel acoustic detection of nasalization states", In INTERSPEECH-2007, 1921-1924.