Abstract:
In this article, we introduce a new information system
evaluation method and report on its application to a collaborative
information seeking system, AntWorld. The
key innovation of the new method is to use precisely the
same group of users who work with the system as
judges, an approach we call Cross-Evaluation. In the new
method, we also propose to assess the system at the
level of task completion. An obvious potential limitation
of this method is that individuals may be inclined to
rate the materials they themselves have found, and their
own work products, more highly than the products
built by others. The keys to neutralizing this problem are
careful design and a corresponding analytical model
based on analysis of variance. We model the several
measures of task completion with a linear model of five
effects, describing the users who interact with the
system, the system used to finish the task, the task
itself, the behavior of individuals as judges, and the selfjudgment
bias. Our analytical method successfully
isolates the effect of each variable. This approach provides
a concrete realization of the “three-realities”
paradigm, which calls for “real tasks,” “real
users,” and “real systems.”
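The five-effect linear model described above might be sketched as follows; the notation here is ours, an illustrative assumption rather than the paper's actual formulation:

```latex
% y_{ustj}: judgment score given by judge j to the work product
% of user u, who performed task t on system s.
% U_u, S_s, T_t, J_j: effects for user, system, task, and judge;
% B captures the self-judgment bias via an indicator term.
y_{ustj} = \mu + U_u + S_s + T_t + J_j + B \cdot \mathbf{1}[j = u] + \varepsilon_{ustj}
```

Under such a model, an analysis of variance can attribute variation in the task-completion measures to each effect separately, so the self-judgment bias term $B$ no longer contaminates the estimated system effect $S_s$.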