14th Annual Conference of the International Speech Communication Association

Lyon, France
August 25-29, 2013

Robust Audio-Codebooks for Large-Scale Event Detection in Consumer Videos

Shourabh Rawat, Peter F. Schulam, Susanne Burger, Duo Ding, Yipei Wang, Florian Metze

Carnegie Mellon University, USA

In this paper we present our audio-based system for detecting "events" within consumer videos (e.g. YouTube) and report our experiments on the TRECVID Multimedia Event Detection (MED) task and development data. Codebook or bag-of-words models have been widely used in the text, visual, and audio domains and form the state of the art for MED tasks. The overall effectiveness of these models on such datasets depends critically on the choice of low-level features, clustering approach, sampling method, codebook size, weighting scheme, and classifier. In this work we empirically evaluate several approaches to modeling expressive and robust audio codebooks for the MED task while ensuring compactness. First, we introduce Large Scale Pooling Features (LSPF) and Stacked Cepstral Features for encoding local temporal information in audio codebooks. Second, we discuss several design decisions for generating and representing expressive audio codebooks and show how they scale to large datasets. Third, we apply text-based techniques such as Latent Dirichlet Allocation (LDA) to learn acoustic topics as a means of providing a compact representation while maintaining performance. By aggregating these decisions into our model, we obtained an 11% relative improvement over our baseline audio systems.
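The codebook pipeline the abstract describes can be illustrated with a minimal sketch: cluster low-level frame features into a codebook, encode each clip as a bag-of-audio-words histogram, then compress the histograms into acoustic topics with LDA. This is a hedged illustration, not the authors' implementation; the synthetic features, codebook size, and topic count below are placeholders.

```python
# Sketch of a bag-of-audio-words pipeline with LDA-based compression.
# The random frame features stand in for real low-level audio features
# (e.g. cepstral features); all sizes here are illustrative.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)

# One (frames x dims) feature matrix per video clip (hypothetical data).
clips = [rng.normal(size=(rng.integers(50, 100), 13)) for _ in range(20)]

# 1) Learn an audio codebook of K codewords by clustering all frames.
K = 32
codebook = KMeans(n_clusters=K, n_init=5, random_state=0)
codebook.fit(np.vstack(clips))

# 2) Encode each clip as a histogram of codeword counts (bag-of-audio-words).
histograms = np.array(
    [np.bincount(codebook.predict(c), minlength=K) for c in clips]
)

# 3) Compress the K-dimensional histograms into a few "acoustic topics"
#    with LDA, yielding a compact per-clip representation.
lda = LatentDirichletAllocation(n_components=8, random_state=0)
topics = lda.fit_transform(histograms)

print(histograms.shape, topics.shape)  # (20, 32) (20, 8)
```

The topic vectors would then feed a downstream event classifier in place of the raw, larger histograms.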

Full Paper

Bibliographic reference.  Rawat, Shourabh / Schulam, Peter F. / Burger, Susanne / Ding, Duo / Wang, Yipei / Metze, Florian (2013): "Robust audio-codebooks for large-scale event detection in consumer videos", in Proc. INTERSPEECH 2013, pp. 2929-2933.