Author_Institution :
Dept. of Comput. Eng., Bogazici Univ., Istanbul, Turkey
Abstract :
In learning a mapping, it is proposed to build several possible models instead of one, train them all independently on the same task, and take a vote over their responses. These networks converge to different solutions because they use different models, different parameter set sizes, or other factors related to training. Two training methods are used: grow and learn (GAL), a memory-based method, and backpropagation. Several voting schemes are investigated, and their performances are compared on classification tasks: a real-world application (recognition of handwritten numerals) and a two-dimensional didactic case. The weights in voting may be interpreted in two ways: as the certainty of a network in its output, and, in a Bayesian setting, as the plausibility, i.e., the prior probability, of the model. In all cases tested, the result of voting is better than the results of all of the networks that participated in the voting process.
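The weighted-voting idea described in the abstract can be sketched as follows; this is a minimal illustration, not the paper's implementation, and the predictions and weights below are hypothetical placeholders.

```python
# Minimal sketch of weighted voting over independently trained
# classifiers. Each voter contributes its weight to the class it
# predicts; the class with the largest total weight wins.

def weighted_vote(predictions, weights):
    """Sum the weights of the voters that chose each class label
    and return the label with the largest total."""
    tally = {}
    for label, w in zip(predictions, weights):
        tally[label] = tally.get(label, 0.0) + w
    return max(tally, key=tally.get)

# Three hypothetical networks classify the same input. The weights
# may encode each network's certainty in its output or, in a
# Bayesian reading, the prior plausibility of the model.
preds = [1, 7, 7]        # predicted class labels (e.g., digits 0-9)
ws = [0.5, 0.3, 0.4]     # per-network weights
print(weighted_vote(preds, ws))  # prints 7, since 0.3 + 0.4 > 0.5
```

With equal weights this reduces to simple majority voting; unequal weights let a single confident (or a priori plausible) network outvote a larger group of less trusted ones.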
Keywords :
Bayes methods; learning (artificial intelligence); pattern recognition; probability; Bayesian setting; backpropagation; function learning; grow and learn; handwritten numerals; memory based method; parameter set sizes; prior probability; training methods; two-dimensional didactic case; voting schemes; Bayesian methods; Computer errors; Computer networks; Handwriting recognition; Learning systems; Multidimensional systems; Testing; Voting;