Title of article :
Approximation by neural networks with weights varying on a finite set of directions
Author/Authors :
Vugar E. Ismailov
Issue Information :
Biweekly, consecutively numbered issues, 2012
Abstract :
Approximation properties of the MLP (multilayer feedforward perceptron) model of neural networks have been investigated in a great many works over the last 30 years. It has been shown that, for a large class of activation functions, a neural network can approximate any given continuous function arbitrarily well. The most significant result on this problem belongs to Leshno, Lin, Pinkus and Schocken, who proved that a single hidden layer network has the u.a.p. (universal approximation property) if and only if its activation function is not a polynomial. Some authors (White, Stinchcombe, Ito, and others) showed that a single hidden layer perceptron with suitably bounded weights can also have the u.a.p. Thus the weights required for the u.a.p. need not be of arbitrarily large magnitude. But what if the weights are restricted too severely? How can one analyze the approximation properties of networks with an arbitrarily restricted set of weights? The present paper takes a first step toward solving this general problem. We consider neural networks whose weights vary over a finite set of directions. Our purpose is to characterize compact sets X in d-dimensional space over which such a network can approximate any continuous function. In the special case when the weights vary over only two directions, we give a lower bound for the approximation error and find a sufficient condition for a network to be a best approximation.
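For illustration only: a single hidden layer network of the kind described above, with hidden weights confined to a finite set of directions a_1, ..., a_m, can be written as f(x) = sum_i c_i * sigma(lambda_i * (a_{k(i)} . x) - theta_i). The following minimal Python sketch is not from the paper; the function names, the logistic activation, and the two chosen directions are illustrative assumptions.

    import numpy as np

    def sigma(t):
        # Logistic activation; any non-polynomial activation suffices for the
        # Leshno-Lin-Pinkus-Schocken density theorem mentioned in the abstract.
        return 1.0 / (1.0 + np.exp(-t))

    def restricted_mlp(x, directions, units):
        """Evaluate f(x) = sum_i c_i * sigma(lam_i * (a_{k_i} . x) - theta_i).

        directions : list of fixed vectors a_1, ..., a_m (the finite set of directions)
        units      : list of tuples (k_i, lam_i, theta_i, c_i), where k_i selects a
                     direction and lam_i, theta_i, c_i are the free parameters
        """
        x = np.asarray(x, dtype=float)
        return sum(c * sigma(lam * directions[k].dot(x) - theta)
                   for (k, lam, theta, c) in units)

    # Example: weights varying over two directions in the plane, the special
    # case analyzed in the paper (the particular directions here are arbitrary).
    a = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
    units = [(0, 2.0, 0.5, 1.0), (1, -1.5, 0.0, 0.7)]
    print(restricted_mlp([0.3, 0.8], a, units))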
Keywords :
Neural network, MLP model, Weight, Orbit, Path, Activation function, Density, Approximation
Journal title :
Journal of Mathematical Analysis and Applications