We present a Monte Carlo model to simulate human judgments in machine translation evaluation campaigns, such as WMT or IWSLT. We use the model to compare different ranking methods and to give guidance on the number of judgments that need to be collected to obtain statistically significant distinctions between systems.
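As a rough illustration of the kind of Monte Carlo experiment the abstract describes (not the paper's actual model), the sketch below simulates pairwise human judgments between two systems and estimates how often a sign test separates them at a given number of judgments. The function names, the assumed 55% win rate, and the normal-approximation sign test are illustrative assumptions, not taken from the paper.

```python
import random
from statistics import NormalDist


def significant(wins: int, n: int, alpha: float = 0.05) -> bool:
    """Two-sided sign test via the normal approximation to the binomial."""
    if n == 0:
        return False
    z = (wins - n / 2) / (0.25 * n) ** 0.5
    return abs(z) > NormalDist().inv_cdf(1 - alpha / 2)


def separation_rate(p_win: float, n_judgments: int,
                    n_trials: int = 10_000, seed: int = 0) -> float:
    """Fraction of simulated campaigns in which the two systems are distinguished."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        # Each judgment: system A beats system B with probability p_win (ties ignored).
        wins = sum(rng.random() < p_win for _ in range(n_judgments))
        if significant(wins, n_judgments):
            hits += 1
    return hits / n_trials


if __name__ == "__main__":
    # How many pairwise judgments until two systems of similar quality
    # (A wins 55% of direct comparisons) are reliably separated?
    for n in (100, 300, 1000, 3000):
        print(f"{n:5d} judgments -> separated in "
              f"{separation_rate(0.55, n):.0%} of simulated campaigns")
```

Varying `p_win` and `n_judgments` in such a simulation gives the sort of guidance on judgment counts that the abstract refers to.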
Cite as: Koehn, P. (2012) Simulating human judgment in machine translation evaluation campaigns. Proc. International Workshop on Spoken Language Translation (IWSLT 2012), 179-184
@inproceedings{koehn12_iwslt,
  author    = {Philipp Koehn},
  title     = {{Simulating human judgment in machine translation evaluation campaigns}},
  year      = {2012},
  booktitle = {Proc. International Workshop on Spoken Language Translation (IWSLT 2012)},
  pages     = {179--184}
}