DocumentCode :
1842079
Title :
The little neuron that could
Author :
Andersen, Tim ; Martinez, Tony
Author_Institution :
Brigham Young Univ., Provo, UT, USA
Volume :
3
fYear :
1999
fDate :
1999
Firstpage :
1608
Abstract :
SLPs (single-layer perceptrons) often exhibit reasonable generalization performance on many problems of interest. However, due to the well-known limitations of SLPs, very little effort has been made to improve their performance. This paper proposes a method for improving the performance of SLPs called "wagging" (weight averaging). This method involves training several different SLPs on the same training data and then averaging their weights to obtain a single SLP. The performance of the wagged SLP is compared with that of other, more complex learning algorithms (BP, C4.5, IBL, MML, etc.) on 15 data sets from real-world problem domains. Surprisingly, the wagged SLP has better average generalization performance than any of the other learning algorithms on the problems tested. This result is explained and analyzed. The analysis includes looking at the performance characteristics of the standard delta rule training algorithm for SLPs and the correlation between training and test set scores as training progresses.
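A minimal sketch of the wagging idea as described in the abstract: train several single-layer perceptrons on the same data (here with a delta-rule update and different random presentation orders), then average their weight vectors into one SLP. The hyperparameters (n_models, epochs, lr) and the sigmoid output unit are illustrative assumptions, not details taken from the paper.

import numpy as np

def train_slp(X, y, epochs=50, lr=0.1, rng=None):
    # Train one single-layer perceptron with a delta-rule update.
    rng = rng or np.random.default_rng()
    w = rng.normal(scale=0.01, size=X.shape[1] + 1)    # weights plus bias
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])      # append constant bias input
    for _ in range(epochs):
        for i in rng.permutation(len(y)):              # random presentation order
            out = 1.0 / (1.0 + np.exp(-Xb[i] @ w))     # sigmoid output
            w += lr * (y[i] - out) * Xb[i]             # delta-rule weight update
    return w

def wag(X, y, n_models=10):
    # Train several SLPs on the same training data and average their weights.
    rng = np.random.default_rng(0)
    weights = [train_slp(X, y, rng=rng) for _ in range(n_models)]
    return np.mean(weights, axis=0)                    # single "wagged" SLP

def predict(w, X):
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    return (Xb @ w > 0).astype(int)                    # threshold the averaged SLP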
Keywords :
generalisation (artificial intelligence); learning (artificial intelligence); pattern classification; perceptrons; SLP; classification; delta rule training algorithm; generalization; single layer perceptrons; wagging; weight averaging; Algorithm design and analysis; Error analysis; Machine learning; Machine learning algorithms; Neural networks; Neurons; Performance analysis; Performance evaluation; Testing; Training data;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
Neural Networks, 1999. IJCNN '99. International Joint Conference on
Conference_Location :
Washington, DC
ISSN :
1098-7576
Print_ISBN :
0-7803-5529-6
Type :
conf
DOI :
10.1109/IJCNN.1999.832612
Filename :
832612