Title of article :
Evaluating prediction systems in software project estimation
Author/Authors :
Shepperd, Martin and MacDonell, Steve
Issue Information :
Monthly; serial year 2012
Pages :
8
From page :
820
To page :
827
Abstract :
Context: Software engineering has a problem in that when we empirically evaluate competing prediction systems we obtain conflicting results. Objective: To reduce the inconsistency amongst validation study results and to provide a more formal foundation for interpreting results, with a particular focus on continuous prediction systems. Method: A framework is proposed for evaluating competing prediction systems based upon (1) an unbiased statistic, Standardised Accuracy, (2) testing the result likelihood relative to the baseline technique of random ‘predictions’, that is guessing, and (3) calculation of effect sizes. Results: Previously published empirical evaluations of prediction systems are re-examined and the original conclusions shown to be unsafe. Additionally, even the strongest results are shown to have no more than a medium effect size relative to random guessing. Conclusions: Biased accuracy statistics such as MMRE are deprecated. By contrast, this new empirical validation framework leads to meaningful results. Such steps will assist in performing future meta-analyses and in providing more robust and usable recommendations to practitioners.
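The abstract's proposed statistic can be illustrated with a short sketch. This is a hypothetical reconstruction, not the authors' code: it assumes Standardised Accuracy is defined as SA = 1 − MAR/MAR_P0, where MAR is the mean absolute residual of the predictions and MAR_P0 is the mean absolute residual of the random-guessing baseline (predicting each project's value by the actual value of another randomly chosen project). The function names and the number of randomisation runs are illustrative choices.

```python
import random
import statistics


def mar(actual, predicted):
    """Mean Absolute Residual: mean of |actual - predicted|."""
    return statistics.mean(abs(a - p) for a, p in zip(actual, predicted))


def mar_random_guessing(actual, runs=1000, seed=0):
    """Baseline MAR_P0: for each project, 'predict' the actual value of
    another, randomly chosen project, and average MAR over many runs."""
    rng = random.Random(seed)
    run_mars = []
    for _ in range(runs):
        guesses = [
            rng.choice([a for j, a in enumerate(actual) if j != i])
            for i in range(len(actual))
        ]
        run_mars.append(mar(actual, guesses))
    return statistics.mean(run_mars)


def standardised_accuracy(actual, predicted, runs=1000):
    """SA = 1 - MAR / MAR_P0 (assumed form); 1.0 means perfect
    prediction, values near 0 mean no better than guessing."""
    return 1 - mar(actual, predicted) / mar_random_guessing(actual, runs)
```

For perfectly accurate predictions MAR is zero, so SA is exactly 1; any nonzero residual pulls SA below 1, and predictions comparable to guessing yield SA near 0.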
Keywords :
Software Engineering , Prediction system , Empirical validation , Randomisation techniques
Journal title :
Information and Software Technology
Serial Year :
2012
Record number :
2374833