Title :
A connectionist learning with high-order functional networks and its internal representation
Author_Institution :
Dept. of Comput. Sci., Nat. Defense Acad., Yokosuka, Japan
Abstract :
A novel architecture for supervised neural-network learning is proposed, and the necessary conditions that a network architecture must satisfy to learn the structures of continuous mappings are obtained. The proposed architecture comprises high-order functional networks in which some of the input units are high-order functional units. When trained with back-propagation, these networks can generalize and infer the highly nonlinear structures of continuous mappings. The internal representation capability of high-order functional networks is analyzed: nonlinear mappings can be characterized by the features of their extrema and curvatures, and the combination of the high-order functional input units and the hidden units makes it possible to realize and learn an internal representation that extracts these features of continuous mappings. On the basis of these internal representation capabilities, a methodology for determining the network architecture and its parameters is proposed.
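The abstract's idea can be illustrated with a minimal sketch (not the authors' code): the input layer is augmented with high-order functional units (here, simple powers of the input, an illustrative assumption), and the network is trained with plain back-propagation to fit a nonlinear continuous mapping. The target function, layer sizes, and all hyperparameters are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def high_order_units(x, order=3):
    """Map each scalar input to [x, x^2, ..., x^order] (the high-order input units)."""
    return np.stack([x ** k for k in range(1, order + 1)], axis=1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Target: a nonlinear continuous mapping with extrema and curvature.
x = np.linspace(-1.0, 1.0, 64)
y = np.sin(np.pi * x)

X = high_order_units(x)                # shape (64, 3): high-order input units
W1 = rng.normal(0.0, 0.5, (3, 8))      # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1))      # hidden -> output weights
b2 = np.zeros(1)

lr = 0.2
for _ in range(20000):
    # forward pass
    h = sigmoid(X @ W1 + b1)           # hidden-unit activations
    out = (h @ W2 + b2).ravel()        # linear output unit
    err = out - y
    # backward pass (gradients of mean squared error)
    g_out = 2.0 * err[:, None] / len(x)
    gW2 = h.T @ g_out
    gb2 = g_out.sum(axis=0)
    g_h = (g_out @ W2.T) * h * (1.0 - h)
    gW1 = X.T @ g_h
    gb1 = g_h.sum(axis=0)
    # gradient-descent updates
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean(err ** 2))
```

Feeding powers of the input, rather than the raw input alone, lets a small hidden layer capture the extrema and curvature of the mapping; the paper's contribution is an analysis of which such architectures suffice.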
Keywords :
knowledge representation; learning systems; neural nets; back-propagation; connectionist learning; continuous mappings; curvatures; extrema; hidden units; high-order functional networks; input units; internal representation; network architecture; nonlinear structures; supervised neural network learning; training; Artificial intelligence; Artificial neural networks; Backpropagation; Computer architecture; Computer science; Data mining; Distributed processing; Feature extraction; History; Neural networks;
Conference_Titel :
IEEE International Workshop on Tools for Artificial Intelligence: Architectures, Languages and Algorithms, 1989
Conference_Location :
Fairfax, VA
Print_ISBN :
0-8186-1984-8
DOI :
10.1109/TAI.1989.65365