Dialect classification is used in a variety of applications, such as machine translation and speech recognition, to improve the overall performance of the system. In a real-world scenario, a deployed dialect classification model can encounter anomalous inputs that differ from the training data distribution, also called out-of-distribution (OOD) samples. These OOD samples can lead to unexpected outputs, as their dialects are unseen during model training. Out-of-distribution detection is a new research area that has received little attention in the context of dialect classification. To this end, we propose a simple yet effective unsupervised method for detecting out-of-distribution samples based on Mahalanobis distance features. We utilize the latent embeddings from all intermediate layers of a wav2vec 2.0 transformer-based dialect classifier trained with multi-task learning. Our proposed approach significantly outperforms other state-of-the-art OOD detection methods.
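The core of such a detector is compact in code. Below is a minimal NumPy sketch of a per-layer, tied-covariance Mahalanobis OOD scorer in the spirit of the approach described above (following the standard formulation of Lee et al., 2018). It assumes embeddings have already been extracted and pooled over time from each transformer layer; the function names and the simple sum aggregation across layers are illustrative assumptions, not details confirmed by the abstract.

import numpy as np

def fit_layer_gaussians(feats, labels):
    # feats: (N, D) in-distribution embeddings from one transformer layer
    # labels: (N,) integer dialect labels
    classes = np.unique(labels)
    means = np.stack([feats[labels == c].mean(axis=0) for c in classes])
    centered = np.concatenate(
        [feats[labels == c] - means[i] for i, c in enumerate(classes)]
    )
    # Tied (shared) covariance across classes, as in the standard
    # Mahalanobis OOD detector; pinv guards against singular covariance.
    cov = np.cov(centered, rowvar=False)
    precision = np.linalg.pinv(cov)
    return means, precision

def layer_ood_score(x, means, precision):
    # Minimum Mahalanobis distance from embedding x to any class mean;
    # a larger distance suggests the input is more likely OOD.
    diffs = means - x  # (C, D)
    return float(np.min(np.einsum("cd,de,ce->c", diffs, precision, diffs)))

def ood_score(layer_feats_per_utt, fitted):
    # Aggregate per-layer scores across all intermediate layers;
    # summing is one simple choice of aggregation (an assumption here).
    return sum(
        layer_ood_score(x, (m, p)[0], (m, p)[1])
        for x, (m, p) in zip(layer_feats_per_utt, fitted)
    )

In use, fit_layer_gaussians would be run once per layer on in-distribution training embeddings, and a test utterance whose ood_score exceeds a threshold chosen on validation data would be flagged as out-of-distribution.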
Cite as: Das, S.D., Vadi, Y., Unnam, A., Yadav, K. (2023) Unsupervised Out-of-Distribution Dialect Detection with Mahalanobis Distance. Proc. INTERSPEECH 2023, 1978-1982, doi: 10.21437/Interspeech.2023-1974
@inproceedings{das23_interspeech,
  author={Sourya Dipta Das and Yash Vadi and Abhishek Unnam and Kuldeep Yadav},
  title={{Unsupervised Out-of-Distribution Dialect Detection with Mahalanobis Distance}},
  year={2023},
  booktitle={Proc. INTERSPEECH 2023},
  pages={1978--1982},
  doi={10.21437/Interspeech.2023-1974}
}