Title :
A logarithmic neural network architecture for unbounded non-linear function approximation
Author_Institution :
Dept. of Nuclear Engineering, Univ. of Tennessee, Knoxville, TN
Abstract :
Multilayer feedforward neural networks with sigmoidal activation functions have been termed “universal function approximators”. Although these networks can approximate any continuous function to a desired degree of accuracy, the approximation may require an inordinate number of hidden nodes and is accurate only over a finite interval. These shortcomings arise because the standard multilayer perceptron (MLP) architecture is not well suited to unbounded non-linear function approximation. A new architecture incorporating a logarithmic hidden layer proves superior to the standard MLP for unbounded non-linear function approximation. This architecture uses a percentage error objective function and a gradient descent training algorithm. Non-linear function approximation examples demonstrate the increased accuracy of the new architecture over both the standard MLP and the logarithmically transformed MLP.
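The abstract does not give the layer equations, so the following is only a minimal sketch of the general idea, assuming a single hidden layer with an unbounded log-like activation log(1 + |z|) and a squared-percentage-error objective minimized by plain gradient descent; the names and hyperparameters (log_act, train, hidden, lr) are illustrative and not taken from the paper.

```python
import numpy as np

# Sketch (not the paper's exact formulation): one hidden layer with an
# unbounded logarithmic activation, trained by gradient descent on a
# squared-percentage-error objective  E = mean(((y_hat - y) / y)^2).

rng = np.random.default_rng(0)

def log_act(z):
    return np.log1p(np.abs(z))            # unbounded, log-like activation

def log_act_grad(z):
    return np.sign(z) / (1.0 + np.abs(z))  # derivative of log(1 + |z|)

def train(x, y, hidden=8, lr=1e-3, epochs=20000):
    n_in = x.shape[1]
    W1 = rng.normal(scale=0.5, size=(n_in, hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(scale=0.5, size=(hidden, 1))
    b2 = np.zeros(1)
    for _ in range(epochs):
        z1 = x @ W1 + b1                  # forward pass
        h = log_act(z1)
        y_hat = h @ W2 + b2
        rel = (y_hat - y) / y             # percentage (relative) error
        d_out = 2.0 * rel / (y * len(y))  # dE/dy_hat for mean(rel^2)
        dW2 = h.T @ d_out                 # backpropagate to both layers
        db2 = d_out.sum(axis=0)
        d_z1 = (d_out @ W2.T) * log_act_grad(z1)
        dW1 = x.T @ d_z1
        db1 = d_z1.sum(axis=0)
        W1 -= lr * dW1; b1 -= lr * db1    # plain gradient descent step
        W2 -= lr * dW2; b2 -= lr * db2
    return W1, b1, W2, b2

# Toy usage: approximate an unbounded target y = x^2 on a strictly
# positive range (the percentage-error loss assumes y != 0).
x = np.linspace(0.5, 5.0, 50).reshape(-1, 1)
y = x ** 2
params = train(x, y)
```

The relative-error loss weights small and large targets equally, which is one plausible reason for pairing it with an unbounded (logarithmic) hidden layer; the paper's actual objective and layer form may differ.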
Keywords :
feedforward neural nets; function approximation; multilayer perceptrons; gradient descent training algorithm; logarithmic hidden layer; logarithmic neural network architecture; multilayer feedforward neural networks; percentage error objective function; sigmoidal activation functions; unbounded nonlinear function approximation; universal function approximators; Equations; Feedforward neural networks; Function approximation; Multi-layer neural network; Multilayer perceptrons; Neural networks; Neurons; Vectors;
Conference_Title :
IEEE International Conference on Neural Networks, 1996
Conference_Location :
Washington, DC
Print_ISBN :
0-7803-3210-5
DOI :
10.1109/ICNN.1996.549076