INTERSPEECH 2006 - ICSLP
Previous studies of human performance in deception detection have found that humans are generally quite poor at this task, comparing unfavorably even to automated procedures. However, different scenarios and speakers may be harder or easier to judge. In this paper we compare human to machine performance in detecting deception on a single corpus, the Columbia-SRI-Colorado Corpus of deceptive speech. On average, our human judges scored worse than chance, and worse than the current best machine learning performance on this corpus. However, not all judges scored poorly. Based on personality tests administered before the task, we find that several personality factors appear to correlate with a judge's ability to detect deception in speech.
Bibliographic reference: Enos, Frank / Benus, Stefan / Cautin, Robin L. / Graciarena, Martin / Hirschberg, Julia / Shriberg, Elizabeth (2006): "Personality factors in human deception detection: comparing human to machine performance", in INTERSPEECH-2006, paper 1664-Tue1A3O.6.