An Expectation Maximization Approach to Joint Modeling of Multidimensional Ratings Derived from Multiple Annotators

Anil Ramakrishna, Rahul Gupta, Ruth B. Grossman, Shrikanth S. Narayanan


Ratings from multiple human annotators are often pooled in applications where the ground truth is hidden, such as annotating perceived emotions or assessing quality metrics for speech and images. These ratings are not restricted to a single dimension and can be multidimensional. In this paper, we propose an Expectation-Maximization based algorithm to model such ratings. Our model assumes that there exists a latent multidimensional ground truth that can be determined from the observation features, and that the ratings provided by the annotators are noisy versions of this ground truth. We test our model on data from a study of children with autism, predicting a four-dimensional rating of expressivity, naturalness, pronunciation goodness, and engagement. Our goal in this application is to reliably predict the individual annotator ratings, which can help reduce both the cognitive load on the annotators and the rating cost. We first train a baseline that directly predicts annotator ratings from the features, and compare it to our model under three settings, assuming: (i) each entry in the multidimensional rating is independent of the others, (ii) a joint distribution exists among the rating dimensions, (iii) a partial set of ratings is available from which to predict the remaining entries.
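To make the latent-ground-truth idea concrete, the sketch below implements EM for one rating dimension (i.e., under independence assumption (i)) in a simple linear-Gaussian form: the hidden ground truth is a linear function of the features plus noise, and each annotator's rating is the ground truth corrupted by annotator-specific Gaussian noise. This is an illustrative assumption on our part; the paper's actual model and parameterization may differ.

```python
import numpy as np

def em_annotator_model(X, R, n_iters=50):
    """EM sketch for a linear-Gaussian annotator-noise model.

    Assumed generative model (hypothetical, for illustration):
        ground truth:   y_i   ~ N(x_i @ w, sigma2)
        annotator a:    r_ia  ~ N(y_i, tau2[a])

    X: (n, d) observation features
    R: (n, A) ratings for one rating dimension, one column per annotator
    Returns the regression weights w, ground-truth noise sigma2,
    per-annotator noise variances tau2, and posterior means of y.
    """
    n, d = X.shape
    A = R.shape[1]
    # Initialize by regressing the mean rating on the features.
    w = np.linalg.lstsq(X, R.mean(axis=1), rcond=None)[0]
    sigma2 = 1.0
    tau2 = np.ones(A)
    for _ in range(n_iters):
        # E-step: Gaussian posterior over each latent y_i.
        # Precision is prior precision plus the annotators' precisions.
        prec = 1.0 / sigma2 + (1.0 / tau2).sum()
        var_y = 1.0 / prec
        mu_y = var_y * (X @ w / sigma2 + (R / tau2).sum(axis=1))
        # M-step: refit the regression to the posterior means,
        # then update the noise variances (posterior variance included).
        w = np.linalg.lstsq(X, mu_y, rcond=None)[0]
        resid = mu_y - X @ w
        sigma2 = resid @ resid / n + var_y
        tau2 = ((R - mu_y[:, None]) ** 2).mean(axis=0) + var_y
    return w, sigma2, tau2, mu_y
```

Extending this to settings (ii) and (iii) would replace the scalar variances with covariance matrices over the rating dimensions and condition the E-step on whichever ratings are observed.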


DOI: 10.21437/Interspeech.2016-270

Cite as

Ramakrishna, A., Gupta, R., Grossman, R.B., Narayanan, S.S. (2016) An Expectation Maximization Approach to Joint Modeling of Multidimensional Ratings Derived from Multiple Annotators. Proc. Interspeech 2016, 1555-1559.

Bibtex
@inproceedings{Ramakrishna+2016,
  author={Anil Ramakrishna and Rahul Gupta and Ruth B. Grossman and Shrikanth S. Narayanan},
  title={An Expectation Maximization Approach to Joint Modeling of Multidimensional Ratings Derived from Multiple Annotators},
  year={2016},
  booktitle={Interspeech 2016},
  doi={10.21437/Interspeech.2016-270},
  url={http://dx.doi.org/10.21437/Interspeech.2016-270},
  pages={1555--1559}
}