Title :
On the geometric convergence of neural approximations
Author :
Lavretsky, Eugene
Author_Institution :
Boeing Company-Phantom Works, Huntington Beach, CA, USA
Date :
March 2002
Abstract :
We give upper bounds on the rates of approximation of a set of functions from a real Hilbert space, using convex greedy iterations. The approximation method was originally proposed and analyzed by Jones (1992). Barron (1993) applied the method to the set of functions computable by single-hidden-layer feedforward neural networks and showed that such networks achieve an integrated squared error of order O(1/n), where n is the number of iterations or, equivalently, the number of nodes in the network. Assuming that the functions to be approximated satisfy the so-called δ-angular condition, we show that a rate of approximation of order O(q^n) is achievable, where 0 ⩽ q < 1. For the set of functions considered, this geometric rate of approximation therefore improves on the Maurey-Jones-Barron upper bound. In the case of orthonormal convex greedy approximations, the δ-angular condition is shown to be equivalent to geometrically decaying expansion coefficients. In finite dimensions, the δ-angular condition is proven to hold for a wide class of functions.
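To illustrate the convex greedy iteration discussed in the abstract, the following is a minimal sketch in the spirit of the Jones/Barron relaxed greedy scheme: at each step a dictionary element most aligned with the current residual is selected and mixed convexly with the running approximant. The target function, the dictionary of sigmoidal ridge nodes, the discretized L2 inner product, and the line-searched mixing weight are illustrative assumptions, not the paper's exact construction.

```python
# Sketch of a convex (relaxed) greedy approximation, f_n = (1-a) f_{n-1} + a g.
# Assumed setup: a sampled interval as a stand-in for the Hilbert space, and a
# finite dictionary of tanh ridge functions as the candidate network nodes.
import numpy as np

x = np.linspace(-1.0, 1.0, 400)           # sample grid for a discretized L2 norm
dx = x[1] - x[0]
target = np.sin(np.pi * x) + 0.5 * x**2   # example target function f (assumption)

def inner(u, v):
    """Discretized L2 inner product on [-1, 1]."""
    return np.sum(u * v) * dx

# Dictionary: scaled sigmoidal nodes g(x) = s * tanh(a*x + b).
params = [(a, b, s) for a in np.linspace(0.5, 8.0, 16)
                    for b in np.linspace(-4.0, 4.0, 17)
                    for s in (-3.0, 3.0)]
dictionary = [s * np.tanh(a * x + b) for a, b, s in params]

f_n = np.zeros_like(x)                    # f_0 = 0
errors = []
for n in range(1, 31):
    residual = target - f_n
    # Greedy step: pick the dictionary element most aligned with the residual.
    g = max(dictionary, key=lambda d: inner(residual, d) / np.sqrt(inner(d, d)))
    # Convex mixing weight alpha, chosen by minimizing
    # ||target - (1-alpha) f_{n-1} - alpha g||^2 over alpha in [0, 1].
    diff = g - f_n
    alpha = np.clip(inner(residual, diff) / inner(diff, diff), 0.0, 1.0)
    f_n = (1.0 - alpha) * f_n + alpha * g
    errors.append(np.sqrt(inner(target - f_n, target - f_n)))

print("L2 error after each greedy node:")
print(np.round(errors, 4))
```

Under the Maurey-Jones-Barron analysis the squared error of such an iteration decays like O(1/n); the paper's point is that, when the δ-angular condition holds, the decay can instead be geometric in n.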
Keywords :
Hilbert spaces; convergence of numerical methods; feedforward neural nets; function approximation; iterative methods; Hilbert space; angular condition; approximation rate; convex hull; feedforward neural networks; function approximation; geometric convergence; greedy approximation; iterative method; universal approximation; upper bound; Approximation methods; Computer networks; Convergence; Feedforward neural networks; Hilbert space; Linear approximation; Neural networks; Neurons; Upper bound; Vectors;
Journal_Title :
IEEE Transactions on Neural Networks