  • DocumentCode
    3023917
  • Title
    Using simulation to evaluate prediction techniques [for software]
  • Author
    Shepperd, Martin; Kadoda, Gada
  • Author_Institution
    Empirical Software Eng. Res. Group, Bournemouth Univ., Poole, UK
  • fYear
    2001
  • fDate
    2001
  • Firstpage
    349
  • Lastpage
    359
  • Abstract
    The need for accurate software prediction systems increases as software becomes larger and more complex. A variety of techniques have been proposed, but none has proved consistently accurate. The underlying characteristics of the data set influence the choice of the prediction system to be used. It has proved difficult to obtain significant results over small data sets; consequently, we required large validation data sets. Moreover, we wished to control the characteristics of such data sets in order to systematically explore the relationship between accuracy, choice of prediction system and data set characteristics. Our solution has been to simulate data, allowing both control and the possibility of large validation cases. We compared regression, rule induction and nearest neighbours (a form of case-based reasoning). The results suggest that there are significant differences depending upon the characteristics of the data set. Consequently, researchers should consider the prediction context when evaluating competing prediction systems. We also observed that the more “messy” the data and the more complex the relationship with the dependent variable, the more variability in the results. This became apparent since we sampled two different training sets from each simulated population of data. In the more complex cases, we observed significantly different results depending upon the training set. This suggests that researchers will need to exercise caution when comparing different approaches and utilise procedures such as bootstrapping in order to generate multiple samples for training purposes.
  • Keywords
    computer aided software engineering; forecasting theory; inference mechanisms; software metrics; statistical analysis; virtual machines; accuracy; bootstrapping; case-based reasoning; data set characteristics; dependent variable relationship; large validation data sets; multiple sample generation; nearest neighbours; prediction context; regression; results variability; rule induction; simulation; software prediction techniques evaluation; training sets; Accuracy; Computational modeling; Control systems; Design engineering; Predictive models; Software engineering; Software systems; Statistical analysis; Systems engineering and theory; Uncertainty;
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Title
    Proceedings of the Seventh International Software Metrics Symposium (METRICS 2001)
  • Conference_Location
    London
  • ISSN
    1530-1435
  • Print_ISBN
    0-7695-1043-4
  • Type
    conf
  • DOI
    10.1109/METRIC.2001.915542
  • Filename
    915542