Using Multi-Resolution Feature Maps with Convolutional Neural Networks for Anti-Spoofing in ASV

Qiongqiong Wang, Kong Aik Lee, Takafumi Koshinaka


This paper presents a simple but effective method that uses multi-resolution feature maps with convolutional neural networks (CNNs) for anti-spoofing in automatic speaker verification (ASV). The central idea is to alleviate the problem that the feature maps commonly used in anti-spoofing networks are insufficient for building discriminative representations of audio segments, as they are often extracted with a single-length sliding window. The resulting trade-off between time and frequency resolution restricts the information available in a single spectrogram. The proposed method improves both frequency and time resolution by stacking multiple spectrograms extracted with different window lengths. These are fed into a convolutional neural network as multiple channels, making it possible to extract more information from input signals while only marginally increasing computational cost. The effectiveness of the proposed method has been confirmed on the ASVspoof 2019 database. We show that the proposed multi-resolution inputs consistently outperform score fusion across different CNN architectures, while computational cost remains small.
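The stacking idea can be sketched as follows: compute magnitude spectrograms of the same signal with several window lengths, align them on shared time and frequency axes, and stack them along a channel dimension for a CNN. This is a minimal NumPy illustration on a synthetic signal; the window lengths, hop size, and FFT size here are hypothetical placeholders and need not match the paper's actual configuration.

```python
import numpy as np

def stft_mag(signal, win_len, hop, n_fft):
    """Magnitude spectrogram via a Hann-windowed sliding FFT (simple STFT)."""
    window = np.hanning(win_len)
    frames = []
    for start in range(0, len(signal) - win_len + 1, hop):
        frame = signal[start:start + win_len] * window
        # Zero-padding to a shared n_fft keeps the frequency axis identical
        # across different window lengths.
        frames.append(np.abs(np.fft.rfft(frame, n=n_fft)))
    return np.stack(frames, axis=1)  # shape: (freq_bins, time_frames)

# Synthetic 1-second signal at 16 kHz standing in for a real utterance.
sr = 16000
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 440 * t)

# Hypothetical window lengths: short windows favor time resolution,
# long windows favor frequency resolution.
win_lengths = [256, 512, 1024]
hop = 160      # shared hop so all maps use the same frame rate
n_fft = 1024   # shared FFT size so all maps share the frequency axis

channels = [stft_mag(x, w, hop, n_fft) for w in win_lengths]
# Longer windows produce slightly fewer frames; crop to the common length.
n_frames = min(c.shape[1] for c in channels)
multi_res = np.stack([c[:, :n_frames] for c in channels], axis=0)
print(multi_res.shape)  # (channels, freq_bins, time_frames) -> (3, 513, 94)
```

The resulting tensor can be passed to a CNN exactly like a multi-channel image, which is why the extra resolutions add little computational overhead: only the first convolutional layer sees more input channels.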


 DOI: 10.21437/Odyssey.2020-20

Cite as: Wang, Q., Lee, K.A., Koshinaka, T. (2020) Using Multi-Resolution Feature Maps with Convolutional Neural Networks for Anti-Spoofing in ASV. Proc. Odyssey 2020 The Speaker and Language Recognition Workshop, 138-142, DOI: 10.21437/Odyssey.2020-20.


@inproceedings{Wang2020,
  author={Qiongqiong Wang and Kong Aik Lee and Takafumi Koshinaka},
  title={{Using Multi-Resolution Feature Maps with Convolutional Neural Networks for Anti-Spoofing in ASV}},
  year=2020,
  booktitle={Proc. Odyssey 2020 The Speaker and Language Recognition Workshop},
  pages={138--142},
  doi={10.21437/Odyssey.2020-20},
  url={http://dx.doi.org/10.21437/Odyssey.2020-20}
}