Title :
Randomness in generalization ability: a source to improve it
Author_Institution :
Dept. of Math. & Comput. Sci., Miami Univ., Coral Gables, FL, USA
Date :
5/1/1996 12:00:00 AM
Abstract :
Among the several models of neurons and their interconnections, feedforward artificial neural networks (FFANNs) are the most popular because of their simplicity and effectiveness. Difficulties such as long learning times and local minima may not affect FFANNs as much as the question of generalization ability, because a network needs to be trained only once and may then be used for a long time. This paper reports our observations about randomness in the generalization ability of FFANNs. A novel method for measuring generalization ability is defined; it can be used to identify the degree of randomness in the generalization ability of learning systems. If an FFANN architecture shows randomness in generalization ability for a given problem, multiple networks can be used to improve it. We have developed a model, called the voting model, for predicting the generalization ability of multiple networks. It has been shown that if the correct classification probability of a single network is greater than one half, then the generalization ability of a voting network increases as the number of networks is increased. Further analysis has shown that the VC-dimension of the voting network model may increase monotonically as the number of networks is increased.
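The claim that accuracy improves with more voters when each network is correct with probability greater than one half is the classical majority-vote (Condorcet-style) argument. A minimal sketch of the underlying binomial calculation, assuming the networks err independently with a common per-network accuracy `p` (an idealization of the paper's voting model, not its exact formulation):

```python
from math import comb

def majority_vote_accuracy(p: float, n: int) -> float:
    """Probability that a majority of n independent classifiers,
    each correct with probability p, outputs the correct label.
    n is assumed odd so that no ties can occur."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# With p > 1/2, voting accuracy grows as more networks vote;
# with p < 1/2 it shrinks instead.
for n in (1, 3, 5, 11, 51):
    print(n, round(majority_vote_accuracy(0.7, n), 4))
```

For example, with `p = 0.7` a 3-network vote is correct with probability 0.784, already above any single network, and the probability keeps climbing toward 1 as `n` grows.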
Keywords :
feedforward neural nets; generalisation (artificial intelligence); learning (artificial intelligence); pattern classification; probability; random processes; classification; feedforward neural networks; generalization ability; learning systems; learning time; probability; randomness; voting model; Artificial neural networks; Biological neural networks; Counting circuits; Information processing; Intelligent networks; Learning systems; Neurons; Predictive models; System testing; Voting;
Journal_Title :
IEEE Transactions on Neural Networks