Minimization of Regression and Ranking Losses with Shallow Neural Networks on Automatic Sincerity Evaluation

Hung-Shin Lee, Yu Tsao, Chi-Chun Lee, Hsin-Min Wang, Wei-Cheng Lin, Wei-Chen Chen, Shan-Wen Hsiao, Shyh-Kang Jeng


To estimate the degree of sincerity conveyed by a speech utterance and perceived by listeners, we propose an instance-based learning framework with shallow neural networks. The framework serves not only as a regressor, which fits the predicted value to the actual value, but also as a ranker, which preserves the relative target magnitude between each pair of utterances, in an attempt to achieve a higher Spearman's rank correlation coefficient. In addition to describing how the regression and ranking losses are minimized simultaneously, we address how utterance pairs are formed in the training and evaluation phases through two realizations. The intuitive one relies on random sampling, while the other seeks representative utterances, named anchors, to form non-stochastic pairs. Our system outperforms the baseline by more than 25% relative improvement on the development set.
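The core idea of jointly minimizing a regression and a pairwise ranking loss can be illustrated with a minimal sketch. The snippet below combines a mean-squared-error term with a margin-based hinge over utterance pairs whose predicted ordering disagrees with the true ordering; the weighting factor `alpha` and the `margin` value are hypothetical hyperparameters for illustration, not taken from the paper, and the network producing the predictions is omitted.

```python
import numpy as np

def combined_loss(y_pred, y_true, alpha=0.5, margin=0.1):
    """Illustrative joint regression + pairwise ranking objective.

    Regression term: mean squared error between predicted and actual
    sincerity scores. Ranking term: a margin hinge over all ordered
    utterance pairs, penalizing pairs whose predicted difference fails
    to respect the true ordering by at least `margin`.
    """
    y_pred = np.asarray(y_pred, dtype=float)
    y_true = np.asarray(y_true, dtype=float)

    # Regression loss: fit predicted values to actual values.
    mse = np.mean((y_pred - y_true) ** 2)

    # Ranking loss: preserve relative target magnitude between pairs.
    rank_terms = []
    n = len(y_true)
    for i in range(n):
        for j in range(n):
            if y_true[i] > y_true[j]:  # utterance i should rank above j
                rank_terms.append(max(0.0, margin - (y_pred[i] - y_pred[j])))
    rank = np.mean(rank_terms) if rank_terms else 0.0

    return alpha * mse + (1 - alpha) * rank
```

In the paper's anchor-based realization, the pairs would be formed against a fixed set of representative utterances rather than over all (or randomly sampled) pairs as in this sketch.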


DOI: 10.21437/Interspeech.2016-756

Cite as

Lee, H., Tsao, Y., Lee, C., Wang, H., Lin, W., Chen, W., Hsiao, S., Jeng, S. (2016) Minimization of Regression and Ranking Losses with Shallow Neural Networks on Automatic Sincerity Evaluation. Proc. Interspeech 2016, 2031-2035.

BibTeX
@inproceedings{Lee+2016,
author={Hung-Shin Lee and Yu Tsao and Chi-Chun Lee and Hsin-Min Wang and Wei-Cheng Lin and Wei-Chen Chen and Shan-Wen Hsiao and Shyh-Kang Jeng},
title={Minimization of Regression and Ranking Losses with Shallow Neural Networks on Automatic Sincerity Evaluation},
year=2016,
booktitle={Interspeech 2016},
doi={10.21437/Interspeech.2016-756},
url={http://dx.doi.org/10.21437/Interspeech.2016-756},
pages={2031--2035}
}