Acoustic Model Optimization Based on Evolutionary Stochastic Gradient Descent with Anchors for Automatic Speech Recognition

Xiaodong Cui, Michael Picheny


Evolutionary stochastic gradient descent (ESGD) was proposed as a population-based approach that combines the merits of gradient-aware and gradient-free optimization algorithms for superior overall optimization performance. In this paper we investigate a variant of ESGD for the optimization of acoustic models for automatic speech recognition (ASR). In this variant, we assume the existence of a well-trained acoustic model and use it as an anchor in the parent population, whose good “genes” will propagate through the evolution to the offspring. We propose an ESGD algorithm leveraging the anchor models such that it guarantees that the best fitness of the population never degrades below that of the anchor model. Experiments on 50-hour Broadcast News (BN50) and 300-hour Switchboard (SWB300) show that ESGD with anchors can further improve the loss and ASR performance over the existing well-trained acoustic models.
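The anchoring idea described above can be sketched as a form of protected elitism: the anchor model always re-enters the candidate pool at each generation, so selection can never produce a population whose best fitness is worse than the anchor's. The following toy sketch illustrates that mechanism only; it is not the authors' algorithm, and the parameter-vector "models", the quadratic stand-in loss, and all function names are illustrative assumptions.

```python
import random

def loss(theta):
    # Toy objective standing in for the acoustic-model training loss:
    # squared distance to an arbitrary optimum at 3.0 per coordinate.
    return sum((t - 3.0) ** 2 for t in theta)

def sgd_step(theta, lr):
    # One gradient step on the toy loss; d/dt (t-3)^2 = 2(t-3).
    return [t - lr * 2.0 * (t - 3.0) for t in theta]

def evolve_with_anchor(anchor, pop_size=8, generations=5, seed=0):
    rng = random.Random(seed)
    # Parent population: the anchor plus randomly perturbed copies of it.
    population = [list(anchor)] + [
        [t + rng.gauss(0.0, 0.5) for t in anchor] for _ in range(pop_size - 1)
    ]
    for _ in range(generations):
        # SGD phase (the gradient-aware half): each member trains with its
        # own randomly drawn learning rate.
        offspring = [sgd_step(p, rng.uniform(0.01, 0.3)) for p in population]
        # Evolution phase with the anchor as a protected elite: because the
        # anchor is always a selection candidate, the best fitness of the
        # surviving population can never fall below the anchor's fitness.
        candidates = offspring + population + [list(anchor)]
        candidates.sort(key=loss)
        population = candidates[:pop_size]
    return min(population, key=loss)

best = evolve_with_anchor(anchor=[0.0, 0.0])
```

Under this sketch, `loss(best)` is guaranteed to be no worse than the anchor's loss by construction, which mirrors the non-degradation guarantee stated in the abstract.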


DOI: 10.21437/Interspeech.2019-2620

Cite as: Cui, X., Picheny, M. (2019) Acoustic Model Optimization Based on Evolutionary Stochastic Gradient Descent with Anchors for Automatic Speech Recognition. Proc. Interspeech 2019, 1581-1585, DOI: 10.21437/Interspeech.2019-2620.


@inproceedings{Cui2019,
  author={Xiaodong Cui and Michael Picheny},
  title={{Acoustic Model Optimization Based on Evolutionary Stochastic Gradient Descent with Anchors for Automatic Speech Recognition}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={1581--1585},
  doi={10.21437/Interspeech.2019-2620},
  url={http://dx.doi.org/10.21437/Interspeech.2019-2620}
}