Efficient Implementation of the Room Simulator for Training Deep Neural Network Acoustic Models

Chanwoo Kim, Ehsan Variani, Arun Narayanan, Michiel Bacchiani


In this paper, we describe how to efficiently implement an acoustic room simulator to generate large-scale simulated data for training deep neural networks. Although the Google room simulator in [1] was shown to be quite effective in reducing Word Error Rates (WERs) for far-field applications by generating simulated far-field training sets, it requires a very large number of FFTs: the room simulator consumed approximately 80% of the CPU time in our CPU/GPU training architecture [2]. In this work, we implement efficient OverLap-Add (OLA) filtering using the open-source FFTW3 library. Further, we investigate the effect of the Room Impulse Response (RIR) length. Experimentally, we conclude that the tail portion of an RIR whose power is more than 20 dB below the peak power can be discarded without sacrificing speech recognition accuracy; truncating the RIR beyond this threshold, however, degrades recognition accuracy on rerecorded test sets. Using these approaches, we reduced the room simulator's share of CPU usage to 9.69% in the CPU/GPU training architecture. Profiling shows a 22.4-times speed-up on a single machine and a 37.3-times speed-up on Google's distributed training infrastructure.
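The two optimizations named in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the paper's FFTW3/C++ implementation; the function names, the block length, and the use of `numpy.fft` instead of FFTW3 are all illustrative assumptions.

```python
import numpy as np

def truncate_rir(rir, threshold_db=20.0):
    """Drop the RIR tail whose sample power is more than
    threshold_db below the peak sample power (illustrative
    reading of the paper's 20 dB truncation rule)."""
    power = rir ** 2
    cutoff = power.max() * 10.0 ** (-threshold_db / 10.0)
    last = np.nonzero(power >= cutoff)[0][-1]
    return rir[: last + 1]

def ola_filter(x, h, block_len=1024):
    """Convolve signal x with impulse response h using
    block-wise FFTs and overlap-add (OLA)."""
    n_fft = 1
    # Next power of two >= block_len + len(h) - 1, so each
    # circular convolution is alias-free (equals linear convolution).
    while n_fft < block_len + len(h) - 1:
        n_fft *= 2
    H = np.fft.rfft(h, n_fft)  # filter spectrum, computed once
    y = np.zeros(len(x) + len(h) - 1)
    for start in range(0, len(x), block_len):
        block = x[start:start + block_len]
        seg = np.fft.irfft(np.fft.rfft(block, n_fft) * H, n_fft)
        end = min(start + n_fft, len(y))
        y[start:end] += seg[: end - start]  # overlap-add the block output
    return y
```

A shorter RIR directly shrinks the FFT sizes in `ola_filter`, which is where the abstract's CPU savings come from.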


DOI: 10.21437/Interspeech.2018-2566

Cite as: Kim, C., Variani, E., Narayanan, A., Bacchiani, M. (2018) Efficient Implementation of the Room Simulator for Training Deep Neural Network Acoustic Models. Proc. Interspeech 2018, 3028-3032, DOI: 10.21437/Interspeech.2018-2566.


@inproceedings{Kim2018,
  author={Chanwoo Kim and Ehsan Variani and Arun Narayanan and Michiel Bacchiani},
  title={Efficient Implementation of the Room Simulator for Training Deep Neural Network Acoustic Models},
  year=2018,
  booktitle={Proc. Interspeech 2018},
  pages={3028--3032},
  doi={10.21437/Interspeech.2018-2566},
  url={http://dx.doi.org/10.21437/Interspeech.2018-2566}
}