Title :
Feed-forward neural networks
Author :
Bebis, George; Georgiopoulos, Michael
Author_Institution :
Dept. of Electr. & Comput. Eng., Central Florida Univ., Orlando, FL, USA
Abstract :
One critical aspect neural network designers face today is choosing an appropriate network size for a given application. In the case of layered neural network architectures, network size involves the number of layers in the network, the number of nodes per layer, and the number of connections. Roughly speaking, a neural network implements a nonlinear mapping u = G(x). The mapping function G is established during a training phase in which the network learns to correctly associate input patterns x with output patterns u. Given a set of training examples (x, u), there are, in all likelihood, infinitely many networks of different sizes that can learn to map input patterns x into output patterns u. The question is, which network size is most appropriate for a given problem? Unfortunately, the answer to this question is not always obvious. Many researchers agree that the quality of a solution found by a neural network depends strongly on the network size used. In general, network size affects network complexity and learning time. It also affects the generalization capabilities of the network; that is, its ability to produce accurate results on patterns outside its training set.
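As a concrete illustration of the size quantities discussed above (not code from the article), the following minimal Python sketch shows how a single list of layer widths fixes all three: the number of layers, the number of nodes per layer, and the total number of connections. The function names (init_network, forward), the tanh hidden activation, and the initialization scheme are illustrative assumptions, not the authors' method.

    import numpy as np

    def init_network(layer_sizes, rng):
        # layer_sizes, e.g. [2, 8, 1], fixes the network size:
        # len(layer_sizes) - 1 weight layers, layer_sizes[i] nodes in
        # layer i, and sum(n_in * n_out) connections overall.
        params = []
        for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
            W = rng.normal(0.0, 1.0 / np.sqrt(n_in), size=(n_in, n_out))
            b = np.zeros(n_out)
            params.append((W, b))
        return params

    def forward(params, x):
        # Compute u = G(x): tanh hidden layers, linear output layer.
        a = x
        for W, b in params[:-1]:
            a = np.tanh(a @ W + b)
        W, b = params[-1]
        return a @ W + b

    rng = np.random.default_rng(0)
    net = init_network([2, 8, 1], rng)          # one hidden layer of 8 nodes
    u = forward(net, np.array([0.5, -0.3]))     # u = G(x) for one input pattern

Changing the width list (say, [2, 8, 1] versus [2, 32, 32, 1]) changes the family of mappings G the network can realize, which is precisely the size-versus-generalization trade-off the abstract raises.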
Keywords :
feedforward neural nets; learning (artificial intelligence); multilayer perceptrons; connections; feed-forward neural networks; input patterns; layered neural network architectures; learning time; mapping function; network complexity; network size; nodes; nonlinear mapping; output patterns; training examples; training set; Boolean functions; Curve fitting; Feedforward neural networks; Feedforward systems; Neural networks; Polynomials; Training data
Journal_Title :
Potentials, IEEE