Title :
Hidden neuron pruning for multilayer perceptrons using a sensitivity measure
Author :
Yeung, Daniel S. ; Zeng, Xiao-qin
Author_Institution :
Dept. of Comput., Hong Kong Polytech. Univ., Kowloon, China
Abstract :
In the design of neural networks, choosing the proper size of a network for a given task is an important and difficult problem that still deserves further exploration. One popular approach to this problem is to first train an oversized network and then prune it to a smaller size, so as to achieve lower computational complexity and better generalization performance. This paper presents a pruning technique, based on a quantified sensitivity measure, that removes as many of the least relevant neurons as possible from the hidden layers of a multilayer perceptron (MLP). The sensitivity of an individual neuron is defined as the expectation of its output deviation due to expected input deviation, taken over all inputs from a continuous interval. The relevance of a neuron is defined as the product of its sensitivity value and the sum of its outgoing weights. The basic idea is to iteratively train the network to a certain performance criterion and then remove the neurons with the lowest relevance values. The novelty of the pruning technique lies in its quantified sensitivity measure. Computer simulations demonstrate that it works well.
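To make the pruning loop concrete, below is a minimal Python/NumPy sketch for a single-hidden-layer MLP with sigmoid activations. The paper derives the sensitivity expectation analytically over a continuous input interval; this sketch instead approximates that expectation by sampling random input perturbations, and all function names (hidden_sensitivity, neuron_relevance, prune_k_least_relevant) are illustrative, not taken from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def hidden_sensitivity(W1, b1, X, delta=0.05, rng=None):
    """Estimate each hidden neuron's sensitivity: the expected magnitude
    of its output deviation when the inputs are perturbed slightly.
    (The paper computes this expectation analytically; here it is
    approximated by Monte Carlo sampling for illustration.)"""
    rng = np.random.default_rng() if rng is None else rng
    H = sigmoid(X @ W1.T + b1)                        # (n_samples, n_hidden)
    X_pert = X + rng.uniform(-delta, delta, X.shape)  # perturbed inputs
    H_pert = sigmoid(X_pert @ W1.T + b1)
    return np.abs(H_pert - H).mean(axis=0)            # per-neuron sensitivity

def neuron_relevance(W1, b1, W2, X):
    """Relevance = sensitivity times the sum of outgoing weight
    magnitudes (column j of W2 carries neuron j's output onward)."""
    s = hidden_sensitivity(W1, b1, X)
    outgoing = np.abs(W2).sum(axis=0)
    return s * outgoing

def prune_k_least_relevant(W1, b1, W2, X, k=1):
    """Remove the k hidden neurons with the lowest relevance and
    return the shrunken weight matrices."""
    r = neuron_relevance(W1, b1, W2, X)
    keep = np.argsort(r)[k:]                          # drop k lowest
    return W1[keep], b1[keep], W2[:, keep]

# Example usage on randomly initialized weights:
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, (200, 4))                   # inputs from a continuous interval
W1, b1 = rng.normal(size=(10, 4)), rng.normal(size=10)
W2 = rng.normal(size=(3, 10))
W1, b1, W2 = prune_k_least_relevant(W1, b1, W2, X, k=2)
```

In the full procedure described in the abstract, such pruning alternates with retraining: train to the performance criterion, remove the lowest-relevance hidden neurons, and repeat until further removal degrades generalization.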
Keywords :
generalisation (artificial intelligence); iterative methods; learning (artificial intelligence); multilayer perceptrons; sensitivity analysis; generalization; hidden neuron pruning; input deviation; iterative learning; multilayer perceptron; sensitivity measure; Computational complexity; Computer networks; Computer simulation; Electronic mail; Multi-layer neural network; Multilayer perceptrons; Neural networks; Neurons; Size measurement; Training data
Conference_Title :
Proceedings of the 2002 International Conference on Machine Learning and Cybernetics
Print_ISBN :
0-7803-7508-4
DOI :
10.1109/ICMLC.2002.1175337