Title of article
Using crowdsourcing for TREC relevance assessment
Author/Authors
Omar Alonso, Stefano Mizzaro
Issue Information
Bimonthly, serial issue, 2012
Pages
14
From page
1053
To page
1066
Abstract
Crowdsourcing has recently gained a lot of attention as a tool for conducting different kinds of relevance evaluations. At a very high level, crowdsourcing describes the outsourcing of tasks to a large group of people instead of assigning such tasks to an in-house employee. This crowdsourcing approach makes it possible to conduct information retrieval experiments extremely fast, with good results and at a low cost.
This paper reports on the first attempts to combine crowdsourcing and TREC: our aim is to validate the use of crowdsourcing for relevance assessment. To this end, we use the Amazon Mechanical Turk crowdsourcing platform to run experiments on TREC data, evaluate the outcomes, and discuss the results. We emphasize experiment design, execution, and quality control to gather useful results, with particular attention to the issue of agreement among assessors. Our position, supported by the experimental results, is that crowdsourcing is a cheap, quick, and reliable alternative for relevance assessment.
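A common way to quantify the agreement among assessors mentioned above is Cohen's kappa, which corrects raw agreement for agreement expected by chance. The following is a minimal sketch, not the authors' code: the judgment lists and variable names are hypothetical, assuming binary relevance labels (1 = relevant, 0 = not relevant) over the same set of documents.

```python
from collections import Counter

def cohens_kappa(judge_a, judge_b):
    """Cohen's kappa between two assessors' judgments of the same documents."""
    assert len(judge_a) == len(judge_b)
    n = len(judge_a)
    # Observed agreement: fraction of documents the assessors label identically.
    p_o = sum(a == b for a, b in zip(judge_a, judge_b)) / n
    # Chance agreement: derived from each assessor's own label distribution.
    freq_a, freq_b = Counter(judge_a), Counter(judge_b)
    labels = set(judge_a) | set(judge_b)
    p_e = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical judgments: a TREC assessor versus a Mechanical Turk worker.
trec_assessor = [1, 1, 0, 1, 0, 0, 1, 0, 1, 1]
turk_worker   = [1, 0, 0, 1, 0, 1, 1, 0, 1, 1]
print(f"kappa = {cohens_kappa(trec_assessor, turk_worker):.2f}")  # kappa = 0.58
```

Kappa values near 1 indicate strong agreement; values near 0 indicate agreement no better than chance, which is why raw percent agreement alone can be misleading for skewed relevance distributions.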
Keywords
IR evaluation , Test collections , Crowdsourcing , TREC , Amazon Mechanical Turk , Experimental design , Relevance assessment
Journal title
Information Processing and Management
Serial Year
2012
Record number
1229300