Towards Smart-Cars That Can Listen: Abnormal Acoustic Event Detection on the Road

Mahesh Kumar Nandwana, Taufiq Hasan


Even with the recent technological advancements in smart-cars, safety is still a major challenge in autonomous driving. State-of-the-art self-driving vehicles mostly rely on visual, ultrasonic and radar sensors to assess the surroundings and make decisions. However, in certain driving scenarios, the best modality for context awareness is environmental sound. In this study, we propose an acoustic event recognition framework for detecting abnormal audio events on the road. We consider five classes of audio events, namely, ambulance siren, railroad crossing bell, tire screech, car honk, and glass break. We explore various generative and discriminative back-end classifiers, utilizing Gaussian Mixture Models (GMM), GMM mean supervectors and the i-vector framework. Evaluation results demonstrate the effectiveness of the proposed system.
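The simplest generative back-end mentioned in the abstract trains one GMM per event class and labels a test clip with the class whose model yields the highest average frame log-likelihood. The sketch below illustrates this with a deliberately simplified one-component, diagonal-covariance "GMM" per class on synthetic feature vectors; the class names follow the paper, but the feature dimensionality (13, MFCC-like) and all data are hypothetical stand-ins, not the authors' actual setup.

```python
import numpy as np

# Five abnormal road-sound classes from the paper (data below is synthetic).
EVENTS = ["ambulance_siren", "railroad_bell", "tire_screech", "car_honk", "glass_break"]

def fit_diag_gaussian(frames):
    """Fit a single diagonal-covariance Gaussian (a 1-component GMM)."""
    mu = frames.mean(axis=0)
    var = frames.var(axis=0) + 1e-6  # variance floor for numerical stability
    return mu, var

def avg_log_likelihood(frames, mu, var):
    """Mean per-frame log-likelihood of the frames under the model."""
    ll = -0.5 * (np.log(2.0 * np.pi * var) + (frames - mu) ** 2 / var)
    return ll.sum(axis=1).mean()

def classify(frames, models):
    """Return the class whose model best explains the test frames."""
    scores = {c: avg_log_likelihood(frames, mu, var)
              for c, (mu, var) in models.items()}
    return max(scores, key=scores.get)

rng = np.random.default_rng(0)
# Synthetic 13-dim "MFCC-like" training features, one cluster per class.
train = {c: rng.normal(loc=i, scale=1.0, size=(200, 13))
         for i, c in enumerate(EVENTS)}
models = {c: fit_diag_gaussian(X) for c, X in train.items()}

# A 50-frame test clip drawn near the "car_honk" cluster (index 3).
test_clip = rng.normal(loc=3.0, scale=1.0, size=(50, 13))
print(classify(test_clip, models))  # prints "car_honk"
```

In a full system, each class model would be a multi-component GMM trained with EM, and the same per-class models also supply the adapted means that are stacked into GMM mean supervectors for the discriminative back-ends.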


DOI: 10.21437/Interspeech.2016-1366

Cite as

Nandwana, M.K., Hasan, T. (2016) Towards Smart-Cars That Can Listen: Abnormal Acoustic Event Detection on the Road. Proc. Interspeech 2016, 2968-2971.

Bibtex
@inproceedings{Nandwana+2016,
  author={Mahesh Kumar Nandwana and Taufiq Hasan},
  title={Towards Smart-Cars That Can Listen: Abnormal Acoustic Event Detection on the Road},
  year={2016},
  booktitle={Interspeech 2016},
  doi={10.21437/Interspeech.2016-1366},
  url={http://dx.doi.org/10.21437/Interspeech.2016-1366},
  pages={2968--2971}
}