Title :
A non-random data sampling method for classification model assessment
Author :
Sprevak, Dan ; Azuaje, Francisco ; Wang, Haiying
Author_Institution :
Faculty of Engineering, University of Ulster, Jordanstown, UK
Abstract :
Data sampling is a critical factor in building classifiers, such as neural networks, and in evaluating their quality. Traditional techniques, such as k-fold cross-validation, exhibit limitations when dealing with small data sets. This paper introduces an alternative method that splits the data into training and testing partitions with similar statistical characteristics. The method is compared with a traditional technique using a relatively small data set and several neural network classifiers. The results suggest that the new technique can reduce the variability of predictive accuracy and provide consistent results across different classification models.
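Note: the abstract does not spell out the partitioning algorithm, so the following is only a minimal sketch of one possible non-random split in the same spirit: within each class, samples are ordered by their distance to the class centroid and every n-th sample is assigned to the test partition, which tends to yield training and testing sets with similar per-feature statistics. The function name systematic_split, the test_fraction parameter, and the centroid-distance ordering are illustrative assumptions, not the authors' method.

# Sketch only; not the algorithm proposed in the paper.
import numpy as np

def systematic_split(X, y, test_fraction=0.25):
    """Deterministically split (X, y) into train/test partitions."""
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    step = max(int(round(1.0 / test_fraction)), 2)
    test_idx = []
    for label in np.unique(y):
        idx = np.where(y == label)[0]
        # Order samples by distance to the class centroid so that the
        # systematic selection spans the whole class distribution.
        centroid = X[idx].mean(axis=0)
        order = idx[np.argsort(np.linalg.norm(X[idx] - centroid, axis=1))]
        test_idx.extend(order[::step])
    test_idx = np.array(sorted(test_idx))
    train_idx = np.setdiff1d(np.arange(len(y)), test_idx)
    return X[train_idx], y[train_idx], X[test_idx], y[test_idx]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 4))
    y = np.repeat([0, 1, 2], 20)
    X_tr, y_tr, X_te, y_te = systematic_split(X, y)
    # The per-feature means of the two partitions should be close.
    print("train means:", X_tr.mean(axis=0).round(2))
    print("test  means:", X_te.mean(axis=0).round(2))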
Keywords :
learning (artificial intelligence); neural nets; pattern classification; sampling methods; classification model assessment; classifier quality evaluation; neural network classifiers; neural network training; nonrandom data sampling method; statistical analysis; Accuracy; Artificial neural networks; Data engineering; Neural networks; Pattern recognition; Phase estimation; Predictive models; Sampling methods; Testing; Training data;
Conference_Title :
Proceedings of the 17th International Conference on Pattern Recognition (ICPR 2004)
Print_ISBN :
0-7695-2128-2
DOI :
10.1109/ICPR.2004.1334552