DocumentCode :
2694194
Title :
Backpropagation representation theorem using power series
Author :
Chen, Mu-Song ; Manry, Michael T.
Year :
1990
Date :
17-21 June 1990
Firstpage :
643
Abstract :
A representation theorem is developed for backpropagation neural networks. First, it is assumed that the function to be approximated, F(x) for the input vector x, is continuous and has finite support, so that it can be approximated arbitrarily well by a multidimensional power series. The activation function, sigmoid or otherwise, is then approximated by a power-series function of the net input. Basic building-block subnetworks, each realizing a monomial (a product of the inputs), can be implemented to any desired degree of accuracy. Each term in the power series for F(x) is realizable with such a building block, and each building block has one hidden layer. Hence, the overall network has one hidden layer.
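As a concrete illustration of the building-block idea, the following minimal Python sketch (not the paper's exact construction; the function names and the parameters b and eps are illustrative assumptions) realizes the monomial x1*x2 with a single hidden layer of sigmoid units. A symmetric pair of hidden units isolates the quadratic term of the sigmoid's power series about a bias point b, and the polarization identity turns squares into the product.

import math

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

def product_block(x1, x2, b=1.0, eps=1e-2):
    """One-hidden-layer sigmoid network approximating the monomial x1*x2.

    Illustrative sketch: hidden units compute sigmoid(b +/- eps*u) for
    affine combinations u of the inputs; the output is a fixed linear
    combination of hidden-unit activations. Accuracy improves as eps -> 0.
    """
    # sigma''(b) = sigma(b)*(1 - sigma(b))*(1 - 2*sigma(b)); nonzero for b != 0
    s = sigmoid(b)
    d2 = s * (1.0 - s) * (1.0 - 2.0 * s)

    def square(u):
        # A symmetric second difference of the activation isolates the
        # quadratic term of its power series:
        # sigmoid(b + eps*u) + sigmoid(b - eps*u) - 2*sigmoid(b)
        #   ~= sigma''(b) * eps^2 * u^2
        return (sigmoid(b + eps * u) + sigmoid(b - eps * u) - 2.0 * s) / (d2 * eps ** 2)

    # Polarization identity: x1*x2 = ((x1 + x2)^2 - (x1 - x2)^2) / 4
    return (square(x1 + x2) - square(x1 - x2)) / 4.0

print(product_block(0.7, -0.3))  # ~ -0.21 (exact product is -0.21)

Higher-order monomials would follow the same pattern, with higher-order differences extracting the corresponding power-series terms, so each term of the series for F(x) is realizable with one hidden layer.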
Keywords :
learning systems; neural nets; series (mathematics); activation function; backpropagation neural networks; building-block subnetworks; monomial; multidimensional power series; one hidden layer; power-series function; supervised learning;
Language :
English
Publisher :
IEEE
Conference_Titel :
1990 IJCNN International Joint Conference on Neural Networks
Conference_Location :
San Diego, CA, USA
Type :
conf
DOI :
10.1109/IJCNN.1990.137643
Filename :
5726603