One-vs-All Models for Asynchronous Training: An Empirical Analysis

Rahul Gupta, Aman Alok, Shankar Ananthakrishnan


Any given classification problem can be modeled using a multiclass or a One-vs-All (OVA) architecture. An OVA system consists of as many binary OVA models as there are classes, which offers the advantage of asynchrony: each OVA model can be retrained independently of the others. This is particularly advantageous in settings where scalable model training is a consideration (for instance, in an industrial environment where multiple and frequent updates must be made to the classification system). In this paper, we conduct an empirical analysis of independent updates to OVA models and their impact on the accuracy of the overall OVA system. Given that asynchronous updates lead to differences in the training datasets of the OVA models, we first define a metric to quantify these differences. Thereafter, using Natural Language Understanding as the task of interest, we estimate the impact of three factors on OVA system accuracy: (i) the number of classes, (ii) the number of data points, and (iii) divergences in training datasets across OVA models. Finally, we observe the accuracy impact of increased asynchrony in a Spoken Language Understanding system. We analyze the results and establish that the proposed metric correlates strongly with model performance in both experimental settings.
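To make the asynchrony property concrete, the following is a minimal sketch of an OVA system in which each per-class binary model can be retrained independently, possibly against a different dataset snapshot. The nearest-centroid binary scorer used here is a hypothetical stand-in chosen only to keep the example dependency-free; the paper's experiments use Natural Language Understanding models, and the class and method names (`OvaSystem`, `update_model`) are illustrative, not from the paper.

```python
# Sketch of a One-vs-All (OVA) system with asynchronous per-class updates.
# The binary "model" is a toy nearest-centroid scorer (an assumption for
# illustration only), not the models evaluated in the paper.

class BinaryCentroidModel:
    """Scores a point by how much closer it is to the positive-class centroid
    than to the negative-class centroid."""

    def fit(self, X, y):
        pos = [x for x, label in zip(X, y) if label == 1]
        neg = [x for x, label in zip(X, y) if label == 0]
        self.pos_c = [sum(col) / len(pos) for col in zip(*pos)]
        self.neg_c = [sum(col) / len(neg) for col in zip(*neg)]
        return self

    def score(self, x):
        def sq_dist(c):
            return sum((a - b) ** 2 for a, b in zip(x, c))
        return sq_dist(self.neg_c) - sq_dist(self.pos_c)  # higher = more positive


class OvaSystem:
    """Holds one binary model per class; each can be retrained independently."""

    def __init__(self, classes):
        self.models = {c: None for c in classes}

    def update_model(self, c, X, y):
        # Asynchronous update: only class c's model is retrained, and the
        # (X, y) snapshot it sees may differ from the snapshots the other
        # models were trained on -- the source of the dataset divergences
        # the paper quantifies.
        binary_labels = [1 if label == c else 0 for label in y]
        self.models[c] = BinaryCentroidModel().fit(X, binary_labels)

    def predict(self, x):
        # The OVA system's prediction is the class whose model scores highest.
        return max(self.models, key=lambda c: self.models[c].score(x))
```

A usage example: after training both models on the same snapshot, calling `update_model('a', ...)` later with new data would retrain only class `a`'s model, leaving class `b`'s model (and its training set) untouched.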


DOI: 10.21437/Interspeech.2019-2760

Cite as: Gupta, R., Alok, A., Ananthakrishnan, S. (2019) One-vs-All Models for Asynchronous Training: An Empirical Analysis. Proc. Interspeech 2019, 794-798, DOI: 10.21437/Interspeech.2019-2760.


@inproceedings{Gupta2019,
  author={Rahul Gupta and Aman Alok and Shankar Ananthakrishnan},
  title={{One-vs-All Models for Asynchronous Training: An Empirical Analysis}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={794--798},
  doi={10.21437/Interspeech.2019-2760},
  url={http://dx.doi.org/10.21437/Interspeech.2019-2760}
}