DocumentCode :
1190534
Title :
Connectivity and performance tradeoffs in the cascade correlation learning architecture
Author :
Phatak, D.S.; Koren, I.
Author_Institution :
Dept. of Electr. Eng., State Univ. of New York, Binghamton, NY, USA
Volume :
5
Issue :
6
fYear :
1994
fDate :
11/1/1994
Firstpage :
930
Lastpage :
935
Abstract :
Cascade correlation is a flexible, efficient, and fast algorithm for supervised learning. It builds the network incrementally, adding hidden units one at a time until the desired input/output mapping is achieved, and connects all previously installed units to each new unit being added. Consequently, each new unit in effect adds a new layer, and the fan-in of the hidden and output units keeps increasing as more units are added. The resulting structure can be hard to implement in VLSI, because the connections are irregular and the fan-in is unbounded. Moreover, the depth, i.e., the propagation delay through the resulting network, is directly proportional to the number of units and can be excessive. We have modified the algorithm to generate networks with restricted fan-in and small depth (propagation delay) by controlling the connectivity. Our results reveal a tradeoff between connectivity and other performance attributes such as depth, total number of independent parameters, and learning time.
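The connectivity growth described above can be made concrete with a minimal Python sketch. This is not the authors' algorithm: cascade_sources mirrors standard cascade correlation's full connectivity, while layered_sources is an illustrative assumption (the abstract does not specify the paper's connectivity-control rule), grouping units `width` at a time to bound both fan-in and depth.

# Minimal sketch (not the paper's code) of how connectivity grows in
# standard cascade correlation, versus one assumed way of bounding it.

def cascade_sources(n_inputs, n_hidden):
    # Standard cascade correlation: hidden unit k receives every network
    # input and every previously installed hidden unit, so fan-in grows
    # with k and each new unit effectively adds a new layer.
    return [list(range(n_inputs + k)) for k in range(n_hidden)]

def layered_sources(n_inputs, n_hidden, width):
    # Illustrative restricted variant (an assumption, not the authors'
    # rule): units are grouped `width` at a time; each unit sees the
    # network inputs plus only the previous group, bounding fan-in by
    # n_inputs + width and depth by roughly n_hidden / width.
    sources = []
    for k in range(n_hidden):
        group = k // width
        prev_start = n_inputs + (group - 1) * width
        prev = list(range(prev_start, prev_start + width)) if group > 0 else []
        sources.append(list(range(n_inputs)) + prev)
    return sources

def network_depth(sources, n_inputs):
    # Depth of each hidden unit = 1 + deepest hidden unit feeding it.
    depths = []
    for src in sources:
        feeding = [depths[s - n_inputs] for s in src if s >= n_inputs]
        depths.append(1 + max(feeding, default=0))
    return max(depths, default=0)

if __name__ == "__main__":
    full = cascade_sources(n_inputs=4, n_hidden=6)
    capped = layered_sources(n_inputs=4, n_hidden=6, width=2)
    print("standard   fan-ins:", [len(s) for s in full],
          " depth:", network_depth(full, 4))    # fan-in grows; depth 6
    print("restricted fan-ins:", [len(s) for s in capped],
          " depth:", network_depth(capped, 4))  # bounded fan-in; depth 3

Running the sketch shows the tradeoff the abstract names: the standard construction's fan-ins grow as 4, 5, ..., 9 with depth 6, while the restricted variant caps fan-in at 6 and depth at 3, at the cost of fewer independent parameters per unit.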
Keywords :
feedforward neural nets; learning (artificial intelligence); network topology; parallel architectures; VLSI implementation; cascade correlation; connectivity; hidden units; input/output mapping; learning time; performance tradeoffs; propagation delay; supervised learning; topology; intelligent networks; machine learning; machine learning algorithms; neural networks; very large scale integration
fLanguage :
English
Journal_Title :
IEEE Transactions on Neural Networks
Publisher :
IEEE
ISSN :
1045-9227
Type :
jour
DOI :
10.1109/72.329690
Filename :
329690