CrowdEval: A Cost-Efficient Strategy to Evaluate Crowdsourced Worker's Reliability
Carminati, B.;
2018-01-01
Abstract
Crowdsourcing platforms depend on the quality of work provided by a distributed workforce. Yet it is challenging to dependably measure the reliability of these workers, particularly in the face of strategic or malicious behavior. In this paper, we present a dynamic and efficient solution for continuously tracking workers' reliability. In particular, we use both gold standard evaluation and peer consistency evaluation to measure each worker's performance, and we adjust the proportion of the two types of evaluation according to the estimated distribution of workers' behavior (e.g., reliable or malicious). Through experiments on real Amazon Mechanical Turk traces, we find that our approach yields significant gains in accuracy and cost compared to state-of-the-art algorithms.
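The abstract only sketches the mechanism, not the algorithm itself. Purely as an illustration of the general idea it describes (scoring an answer either against a gold label or against peer consistency, and spending more of the gold-question budget when more workers are estimated to be unreliable), the following minimal Python sketch uses hypothetical helpers and parameter names (evaluate_worker, update_gold_ratio, gold_ratio, estimated_malicious_fraction) that are not taken from the paper.

```python
import random

def evaluate_worker(answer, task, gold_answers, peer_answers, gold_ratio):
    """Score one answer either against a gold label (if this task has one and
    the gold-evaluation draw selects it) or against the peers' majority vote."""
    if task in gold_answers and random.random() < gold_ratio:
        # Gold standard evaluation: compare with the known correct label.
        return 1.0 if answer == gold_answers[task] else 0.0
    # Peer consistency evaluation: compare with the majority of peer answers.
    votes = peer_answers.get(task, [])
    if not votes:
        return 0.5  # no peer signal available; return a neutral score
    majority = max(set(votes), key=votes.count)
    return 1.0 if answer == majority else 0.0

def update_gold_ratio(estimated_malicious_fraction, low=0.1, high=0.9):
    """Spend more of the (costly) gold-question budget when the estimated
    share of unreliable or malicious workers is high, and less when it is low."""
    return low + (high - low) * estimated_malicious_fraction
```

The intent of the sketch is the cost trade-off stated in the abstract: gold questions are expensive but trustworthy, while peer consistency is cheap but can be fooled when many workers collude or answer randomly, so the mix between the two is adapted to the estimated worker population.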