Unsupervised Learning of Acoustic Units Using Autoencoders and Kohonen Nets

Vikramjit Mitra, Dimitra Vergyri, Horacio Franco

Prior knowledge of subword units is often unavailable for low-resource languages; instead, a global subword unit description, such as a universal phone set, is typically used in such scenarios. One major bottleneck for existing speech-processing systems is their reliance on transcriptions. Unfortunately, the preponderance of data becoming available every day only worsens the problem, as properly transcribing all of it, and hence making it useful for training speech-processing models, is impossible. This work investigates learning acoustic units in an unsupervised manner from real-world speech data by using a cascade of an autoencoder and a Kohonen net. For this purpose, a deep autoencoder with a bottleneck layer at its center was trained on multiple languages. Once trained, the bottleneck-layer outputs were used to train a Kohonen net, such that state-level ids can be assigned to the bottleneck outputs. To ascertain how consistent such state-level ids are with respect to the acoustic units, phone-alignment information was used for part of the data to assess whether a functional relationship exists between the phone ids and the Kohonen state ids and, if so, whether that relationship generalizes to untranscribed data.
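To make the second stage of the cascade concrete, the sketch below shows a minimal 1-D Kohonen net (self-organizing map) in NumPy that maps feature vectors to discrete unit ids. This is an illustrative stand-in, not the authors' implementation: the network size, learning-rate schedule, and the use of random vectors in place of real autoencoder bottleneck features are all assumptions.

```python
import numpy as np

def train_som(features, n_units=16, n_iters=500, lr0=0.5, sigma0=3.0, seed=0):
    """Train a minimal 1-D Kohonen net on feature vectors.

    In the paper's setup, `features` would be the autoencoder's
    bottleneck-layer outputs; here any (n_frames, dim) array works.
    """
    rng = np.random.default_rng(seed)
    dim = features.shape[1]
    weights = rng.normal(size=(n_units, dim))  # one prototype per unit
    positions = np.arange(n_units)             # 1-D map topology
    for t in range(n_iters):
        x = features[rng.integers(len(features))]
        # Best-matching unit: prototype closest to the input frame.
        bmu = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
        frac = t / n_iters
        lr = lr0 * (1.0 - frac)                # decaying learning rate
        sigma = sigma0 * (1.0 - frac) + 1e-3   # shrinking neighborhood
        # Gaussian neighborhood pulls units near the BMU toward x.
        h = np.exp(-((positions - bmu) ** 2) / (2.0 * sigma ** 2))
        weights += lr * h[:, None] * (x - weights)
    return weights

def assign_ids(features, weights):
    """Assign each frame the id of its best-matching Kohonen unit."""
    d = np.linalg.norm(features[:, None, :] - weights[None, :, :], axis=2)
    return d.argmin(axis=1)
```

After training, `assign_ids` plays the role described in the abstract: every bottleneck output is quantized to a state-level id, which can then be compared against phone alignments where transcriptions exist.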

DOI: 10.21437/Interspeech.2016-1374

Cite as

Mitra, V., Vergyri, D., Franco, H. (2016) Unsupervised Learning of Acoustic Units Using Autoencoders and Kohonen Nets. Proc. Interspeech 2016, 1300-1304.

@inproceedings{mitra_interspeech2016,
  author={Vikramjit Mitra and Dimitra Vergyri and Horacio Franco},
  title={Unsupervised Learning of Acoustic Units Using Autoencoders and Kohonen Nets},
  booktitle={Interspeech 2016},
  year={2016},
  pages={1300--1304},
  doi={10.21437/Interspeech.2016-1374}
}