INTERSPEECH 2010
11th Annual Conference of the International Speech Communication Association

Makuhari, Chiba, Japan
September 26-30, 2010

Exploiting Context-Dependency and Acoustic Resolution of Universal Speech Attribute Models in Spoken Language Recognition

Sabato Marco Siniscalchi (1), Jeremy Reed (2), Torbjørn Svendsen (3), Chin-Hui Lee (2)

(1) Università Kore di Enna, Italy
(2) Georgia Institute of Technology, USA
(3) NTNU, Norway

This paper expands a previously proposed universal acoustic characterization approach to spoken language identification (LID) by studying different ways of modeling attributes to improve language recognition. The motivation is to describe any spoken language with a common set of fundamental units. Thus, a spoken utterance is first tokenized into a sequence of universal attributes, and a vector space modeling approach then delivers the final LID decision. Context-dependent attribute models are now used to better capture spectral and temporal characteristics, and an approach to expanding the attribute set to increase the acoustic resolution is also studied. Our experiments show that improved tokenization accuracy benefits LID, producing a 2.8% absolute improvement over our previous performance on the 30-second NIST 2003 task. This result also compares favorably with the best results on the same task known to the authors when the tokenizers are trained on language-dependent OGI-TS data.
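To make the two-stage pipeline sketched in the abstract concrete, the following is a minimal illustration in Python, assuming bigram counting over hypothetical attribute labels (e.g., manner-of-articulation classes) and cosine-similarity scoring against per-language reference vectors. The function names, attribute labels, and scoring rule are illustrative assumptions, not the authors' actual context-dependent tokenizers or vector space classifier, whose details are given in the full paper.

# Minimal sketch of the attribute-tokenization -> vector-space-modeling pipeline.
# Assumptions: bigram counts over a universal attribute sequence and cosine
# scoring against per-language reference vectors; real systems train the
# tokenizers and the vector space classifier on large labeled corpora.

from collections import Counter
import math

def attribute_ngrams(tokens, n=2):
    """Count n-grams over a universal-attribute token sequence."""
    return Counter(zip(*(tokens[i:] for i in range(n))))

def to_unit_vector(counts):
    """L2-normalize a sparse n-gram count vector (dict of n-gram -> count)."""
    norm = math.sqrt(sum(v * v for v in counts.values())) or 1.0
    return {k: v / norm for k, v in counts.items()}

def cosine(u, v):
    """Cosine similarity between two sparse unit vectors."""
    return sum(u[k] * v.get(k, 0.0) for k in u)

def identify_language(utterance_tokens, language_models, n=2):
    """Pick the language whose reference vector best matches the utterance."""
    query = to_unit_vector(attribute_ngrams(utterance_tokens, n))
    return max(language_models, key=lambda lang: cosine(query, language_models[lang]))

if __name__ == "__main__":
    # Hypothetical attribute sequences standing in for tokenizer output.
    train = {
        "english": ["stop", "vowel", "fricative", "vowel", "nasal", "vowel"],
        "mandarin": ["vowel", "nasal", "vowel", "stop", "vowel", "vowel"],
    }
    models = {lang: to_unit_vector(attribute_ngrams(seq)) for lang, seq in train.items()}
    print(identify_language(["stop", "vowel", "fricative", "vowel"], models))  # -> english

The key design point this sketch reflects is that the language decision is made entirely in the space of attribute n-gram statistics, so any gain in tokenization accuracy (e.g., from context-dependent attribute models or a larger attribute inventory) propagates directly to the LID back end.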


Bibliographic reference.  Siniscalchi, Sabato Marco / Reed, Jeremy / Svendsen, Torbjørn / Lee, Chin-Hui (2010): "Exploiting context-dependency and acoustic resolution of universal speech attribute models in spoken language recognition", In INTERSPEECH-2010, 2718-2721.