Minimizing Annotation Effort for Adaptation of Speech-Activity Detection Systems

Luciana Ferrer, Martin Graciarena


Annotating audio data for the presence and location of speech is a time-consuming and therefore costly task. This is largely because annotation precision strongly affects the performance of the speech-activity detection (SAD) systems trained on the data, so the annotation process must be careful and detailed. Although significant amounts of data are already annotated for speech presence and are available to train SAD systems, these systems are known to perform poorly on channels that are not well represented in the training data. However, obtaining representative audio samples from a new channel is relatively easy, and this data can be used to train a new SAD system or to adapt one trained on larger amounts of mismatched data. This paper focuses on the problem of selecting the best possible subset of the available audio data given a budgeted annotation time. We propose simple selection approaches that yield significant gains over naïve methods that merely select N full files at random. An approach that uses the frame-level scores from a baseline system to select regions such that the score distribution is uniformly sampled gives the best trade-off across a variety of channel groups.
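The best-performing selection strategy in the abstract, sampling regions so that the baseline system's frame-level score distribution is covered uniformly, can be illustrated with a minimal sketch. The paper's exact region-selection procedure is not detailed here, so the binning scheme, bin count, and function name below are assumptions for illustration: scores are partitioned into equal-width bins and an equal share of the annotation budget is drawn from each bin.

```python
import numpy as np

def select_uniform_by_score(scores, budget, num_bins=10, seed=0):
    """Illustrative sketch (not the paper's exact method): pick up to
    `budget` frame indices so the baseline SAD scores of the selected
    frames are spread roughly uniformly across the score range.

    scores : 1-D array of frame-level scores from a baseline SAD system.
    budget : total number of frames that fit in the annotation budget.
    """
    rng = np.random.default_rng(seed)
    scores = np.asarray(scores)
    # Partition the observed score range into equal-width bins.
    edges = np.linspace(scores.min(), scores.max(), num_bins + 1)
    bin_ids = np.clip(np.digitize(scores, edges) - 1, 0, num_bins - 1)
    per_bin = budget // num_bins
    selected = []
    for b in range(num_bins):
        idx = np.flatnonzero(bin_ids == b)
        if idx.size == 0:
            continue  # empty bin: nothing to draw from
        take = min(per_bin, idx.size)
        selected.extend(rng.choice(idx, size=take, replace=False))
    return np.sort(np.array(selected, dtype=int))
```

Sampling uniformly over scores (rather than uniformly over time) concentrates annotation effort on frames the baseline is uncertain about, instead of spending most of the budget on the easy, confidently scored regions that dominate typical audio.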


DOI: 10.21437/Interspeech.2016-247

Cite as

Ferrer, L., Graciarena, M. (2016) Minimizing Annotation Effort for Adaptation of Speech-Activity Detection Systems. Proc. Interspeech 2016, 3002-3006.

Bibtex
@inproceedings{Ferrer+2016,
author={Luciana Ferrer and Martin Graciarena},
title={Minimizing Annotation Effort for Adaptation of Speech-Activity Detection Systems},
year={2016},
booktitle={Interspeech 2016},
doi={10.21437/Interspeech.2016-247},
url={http://dx.doi.org/10.21437/Interspeech.2016-247},
pages={3002--3006}
}