Audio Content Based Geotagging in Multimedia

Anurag Kumar, Benjamin Elizalde, Bhiksha Raj


In this paper we propose methods to extract geographically relevant information from a multimedia recording using its audio content. Our method is based primarily on the observation that urban acoustic environments consist of a variety of sounds; hence, location information can be inferred from the composition of sound events/classes present in the audio. More specifically, we adopt matrix factorization techniques to obtain the semantic content of a recording in terms of different sound classes. We use semi-NMF to perform this semantic content analysis of audio using MFCC features. The semantic information is then combined to identify the location of the recording. We show that this semantic-content-based geotagging can perform significantly better than state-of-the-art methods.
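As a rough illustration of the factorization step the abstract refers to, the sketch below implements semi-NMF (in the style of Ding et al.), which suits MFCC features because the data matrix may contain negative values: X is approximated as F G^T with the basis F unconstrained and the activations G kept nonnegative via multiplicative updates. This is a minimal, generic sketch and not the paper's exact pipeline; all function names, dimensions, and hyperparameters here are illustrative assumptions.

```python
import numpy as np

def semi_nmf(X, k, n_iter=200, seed=0):
    """Semi-NMF: X ~ F @ G.T with G >= 0 and F unconstrained.
    X may be mixed-sign (e.g. a d x n matrix of MFCC frames).
    Illustrative sketch, not the paper's implementation."""
    rng = np.random.default_rng(seed)
    d, n = X.shape
    G = rng.random((n, k)) + 0.1            # nonnegative activations
    pos = lambda A: (np.abs(A) + A) / 2.0   # elementwise positive part
    neg = lambda A: (np.abs(A) - A) / 2.0   # elementwise negative part
    for _ in range(n_iter):
        # F update: least-squares solution given fixed G
        F = X @ G @ np.linalg.pinv(G.T @ G)
        XtF, FtF = X.T @ F, F.T @ F
        # multiplicative update preserves nonnegativity of G
        num = pos(XtF) + G @ neg(FtF)
        den = neg(XtF) + G @ pos(FtF) + 1e-12
        G *= np.sqrt(num / den)
    return F, G

# toy demo on a small mixed-sign matrix standing in for MFCC frames
X = np.random.default_rng(1).standard_normal((20, 50))
F, G = semi_nmf(X, k=5)
err = np.linalg.norm(X - F @ G.T) / np.linalg.norm(X)
```

In a geotagging pipeline of the kind the abstract describes, the columns of G would summarize how strongly each latent sound class is active in the recording, and those activations (not the raw MFCCs) would feed the location classifier.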


DOI: 10.21437/Interspeech.2017-40

Cite as: Kumar, A., Elizalde, B., Raj, B. (2017) Audio Content Based Geotagging in Multimedia. Proc. Interspeech 2017, 1874-1878, DOI: 10.21437/Interspeech.2017-40.


@inproceedings{Kumar2017,
  author={Anurag Kumar and Benjamin Elizalde and Bhiksha Raj},
  title={Audio Content Based Geotagging in Multimedia},
  year={2017},
  booktitle={Proc. Interspeech 2017},
  pages={1874--1878},
  doi={10.21437/Interspeech.2017-40},
  url={http://dx.doi.org/10.21437/Interspeech.2017-40}
}