Completely Unsupervised Phoneme Recognition by Adversarially Learning Mapping Relationships from Audio Embeddings

Da-Rong Liu, Kuan-Yu Chen, Hung-yi Lee, Lin-shan Lee


Unsupervised discovery of acoustic tokens from audio corpora without annotation, and learning vector representations for these tokens, have been widely studied. Although these techniques have been shown to be successful in some applications such as query-by-example Spoken Term Detection (STD), the lack of mapping relationships between these discovered tokens and real phonemes has limited the downstream applications. This paper represents probably the first attempt towards the goal of completely unsupervised phoneme recognition, i.e., mapping audio signals to phoneme sequences without any phoneme-labeled audio data. The basic idea is to cluster the embedded acoustic tokens and learn the mapping between the cluster sequences and the unknown phoneme sequences with a Generative Adversarial Network (GAN). An unsupervised phoneme recognition accuracy of 36% was achieved in the preliminary experiments.
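The first step the abstract describes, grouping embedded audio frames into acoustic-token clusters whose index sequence is then matched to phonemes by the GAN, can be illustrated with a minimal k-means sketch. Everything below (the 2-D synthetic "embeddings", the three clusters, the farthest-first initialization) is an illustrative assumption, not the paper's actual pipeline:

```python
import numpy as np

def init_centers(X, k):
    """Farthest-first initialization: pick points that are spread apart."""
    centers = [X[0]]
    for _ in range(1, k):
        # distance of every point to its nearest chosen center
        d = np.min(((X[:, None, :] - np.array(centers)[None, :, :]) ** 2).sum(-1), axis=1)
        centers.append(X[d.argmax()])
    return np.array(centers)

def kmeans(X, k, iters=50):
    """Plain Lloyd's algorithm; returns a cluster label per frame."""
    centers = init_centers(X, k)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(0)
    return labels, centers

# Synthetic "audio embeddings": 40 frames around each of 3 phoneme-like centers.
rng = np.random.default_rng(1)
true_centers = np.array([[0.0, 0.0], [5.0, 5.0], [0.0, 5.0]])
frames = np.concatenate([c + 0.3 * rng.standard_normal((40, 2)) for c in true_centers])

labels, centers = kmeans(frames, k=3)
# This index sequence is what would be fed to the GAN mapping step.
cluster_sequence = labels.tolist()
```

In the paper's actual setting the embeddings come from unsupervised acoustic-token discovery rather than synthetic Gaussians, and the number of clusters is a hyperparameter rather than known in advance.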


DOI: 10.21437/Interspeech.2018-1800

Cite as: Liu, D., Chen, K., Lee, H., Lee, L. (2018) Completely Unsupervised Phoneme Recognition by Adversarially Learning Mapping Relationships from Audio Embeddings. Proc. Interspeech 2018, 3748-3752, DOI: 10.21437/Interspeech.2018-1800.


@inproceedings{Liu2018,
  author={Da-Rong Liu and Kuan-Yu Chen and Hung-yi Lee and Lin-shan Lee},
  title={Completely Unsupervised Phoneme Recognition by Adversarially Learning Mapping Relationships from Audio Embeddings},
  year=2018,
  booktitle={Proc. Interspeech 2018},
  pages={3748--3752},
  doi={10.21437/Interspeech.2018-1800},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1800}
}