• DocumentCode
    2249678
  • Title
    Mini-crowdsourcing end-user assessment of intelligent assistants: A cost-benefit study
  • Author
    Shinsel, Amber; Kulesza, Todd; Burnett, Margaret; Curran, William; Groce, Alex; Stumpf, Simone; Wong, Weng-Keen
  • Author_Institution
    Oregon State Univ., Corvallis, OR, USA
  • fYear
    2011
  • fDate
    18-22 Sept. 2011
  • Firstpage
    47
  • Lastpage
    54
  • Abstract
    Intelligent assistants sometimes handle tasks too important to be trusted implicitly. End users can establish trust via systematic assessment, but such assessment is costly. This paper investigates whether, when, and how bringing a small crowd of end users to bear on the assessment of an intelligent assistant is useful from a cost/benefit perspective. Our results show that a mini-crowd of testers supplied many benefits beyond the obvious decrease in workload, but these benefits did not scale linearly as mini-crowd size increased; there was a point of diminishing returns at which the cost-benefit ratio became less attractive.
  • Keywords
    cost-benefit analysis; intelligent design assistants; program testing; cost-benefit ratio; cost-benefit study; diminishing returns; intelligent assistants; mini-crowd size; mini-crowdsourcing end-user assessment; systematic assessment; Analysis of variance; Educational institutions; Reliability; Software; Software testing; Systematics; crowdsourcing; end-user programming; machine learning; testing
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Titel
    2011 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC)
  • Conference_Location
    Pittsburgh, PA, USA
  • ISSN
    1943-6092
  • Print_ISBN
    978-1-4577-1246-3
  • Type
    conf
  • DOI
    10.1109/VLHCC.2011.6070377
  • Filename
    6070377