Title of article :
Readability as a Source of Measurement Error in Medical Education Assessment
Author/Authors :
Harver, Dylan, North Carolina State University - Office of Academic Affairs, USA; Royal, Kenneth D., North Carolina State University - Department of Clinical Sciences, USA
Abstract :
Readability is a measure of the accessibility of a text to a reader. Readability scores should not exceed the readability levels of the intended audience. To date, the topic of readability has rarely been explored in the context of medical education assessment. Thus, the purpose of this pilot study was to investigate the potential relationship between readability measures and item difficulty estimates. We used two readability formulas, FOG and FORCAST, based on each formula's intended purposes and requirements for shorter texts. A sample of 853 multiple-choice questions (MCQs) was obtained, and the difficulty values for each item were plotted relative to each item's readability score. Results indicate an association between more difficult items (items answered correctly by 70% or fewer examinees) and items with a readability measure greater than 12.0. We conclude that discernible empirical evidence supports the long-standing theoretical claim that readability issues may introduce measurement error and consequently threaten score validity.
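The two readability formulas named in the abstract are standard published indices: the Gunning FOG index, 0.4 × (average sentence length + percentage of words with three or more syllables), and FORCAST, 20 − N/10, where N is the number of single-syllable words in a 150-word sample. A minimal sketch of both, assuming the word, sentence, and syllable counts are supplied by an external text-analysis step (not shown here):

```python
def fog_index(total_words: int, total_sentences: int, complex_words: int) -> float:
    """Gunning FOG index: 0.4 * (average sentence length + percent complex words).
    'Complex' words are those with three or more syllables."""
    return 0.4 * (total_words / total_sentences + 100 * complex_words / total_words)


def forcast_grade(monosyllabic_words: int) -> float:
    """FORCAST reading grade level for a 150-word sample:
    20 minus one tenth of the count of single-syllable words."""
    return 20 - monosyllabic_words / 10
```

For example, a passage of 100 words in 5 sentences with 10 complex words yields a FOG score of 12.0, and a 150-word sample containing 100 single-syllable words yields a FORCAST grade of 10.0; both values are on the grade-level scale against which the study's 12.0 threshold is interpreted.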
Keywords :
Assessment , Evaluation , Measurement , Bias , Validity , Psychometrics
Journal title :
Education in Medicine Journal(EIMJ)