SAPA-SCALE Conference 2012
Portland, OR, USA
We address the problem of microphone location calibration where the sensor positions admit a sparse spatial approximation on a discretized grid. We characterize the microphone signals as a sparse vector represented over a codebook of multi-channel signals, where the support of the representation encodes the microphone locations. The codebook is constructed from multi-channel signals obtained by inverse filtering the acoustic channel and projecting the signals onto an array manifold matrix of the hypothesized geometries. This framework requires that the position of a speaker, or the track of its movement, be known, without any further assumption about the source signal. The sparse position-encoding vector is approximated by a model-based sparse recovery algorithm exploiting the block-dependency structure underlying the broadband speech spectrum. Experiments conducted on real data recordings demonstrate the effectiveness of the proposed approach and the importance of joint sparsity models in multi-channel speech processing tasks.
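The abstract does not specify which model-based recovery algorithm is used. As an illustrative sketch only, the following shows one standard block-sparse recovery method, block iterative hard thresholding, on a toy problem where each block of coefficients corresponds to one hypothesized grid position; the dictionary `A`, block size, and grid size are all invented for the demo and are not taken from the paper.

```python
import numpy as np

def block_iht(A, y, block_size, k_blocks, n_iter=100):
    """Block iterative hard thresholding (a generic block-sparse
    recovery method, not necessarily the one used in the paper):
    estimate a block-sparse x from y = A x by gradient steps
    followed by keeping the k_blocks blocks of largest energy."""
    m, n = A.shape
    assert n % block_size == 0
    x = np.zeros(n)
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # conservative step size
    for _ in range(n_iter):
        x = x + step * A.T @ (y - A @ x)
        # block energies = l2 norm of each coefficient block
        energies = np.linalg.norm(x.reshape(-1, block_size), axis=1)
        keep = np.argsort(energies)[-k_blocks:]
        mask = np.zeros(n // block_size, dtype=bool)
        mask[keep] = True
        x = (x.reshape(-1, block_size) * mask[:, None]).reshape(n)
    return x

# Toy demo: 20 candidate grid positions, 4 coefficients per block.
# One active block plays the role of the true microphone position.
rng = np.random.default_rng(0)
n_blocks, b = 20, 4
A = rng.standard_normal((40, n_blocks * b)) / np.sqrt(40)
x_true = np.zeros(n_blocks * b)
x_true[8 * b:9 * b] = rng.standard_normal(b)  # active block = position 8
y = A @ x_true

x_hat = block_iht(A, y, block_size=b, k_blocks=1)
est_block = int(np.argmax(np.linalg.norm(x_hat.reshape(-1, b), axis=1)))
print("recovered grid index:", est_block)
```

The block grouping is what distinguishes this from plain sparse coding: coefficients of a broadband source at one position are activated jointly, so thresholding whole blocks rather than individual entries exploits that dependency.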
Index Terms: Microphone array calibration, Structured sparse coding, Model-based sparse recovery, Multi-party speech signals
Bibliographic reference. Asaei, Afsaneh / Raj, Bhiksha / Bourlard, Hervé / Cevher, Volkan (2012): "Structured sparse coding for microphone array location calibration", In SAPA-SCALE-2012, 74-79.