Abstract:
While a number of studies have been conducted on the impact of online assessment
and teaching methods on student learning, the field does not appear to have settled on the promised benefits of such approaches. It is argued that this is because few studies have been able to control for a range
of confounding factors in student performance. We report on the introduction of a regular (every 3 weeks) low-mark online assessment tool in a large, first-year
business mathematics course at the University of New South Wales, a
major Australian university. Using a retrospective regression methodology
together with a very large and rich data set, we test the proposition that exposure
to the online assessment instrument enhances student learning. Significantly,
we are able to control for prior student aptitude, in-course mastery,
gender, and even effort via a voluntary class attendance proxy. Furthermore, the study incorporates two large and statistically diverse cohorts, as well as manipulations of the tested model, to examine the outcomes robustly. Our
central result is that greater exposure to the online instrument robustly leads to improved student learning, all else being equal. Various implications for online
assessment design, implementation and targeting are also discussed.