• DocumentCode
    3565714
  • Title
    Dynamic neural networks with the use of divide and conquer
  • Author
    Romaniuk, Steve G.; Hall, Lawrence O.
  • Author_Institution
    Dept. of Comput. Sci. & Eng., Univ. of South Florida, Tampa, FL, USA
  • Volume
    1
  • fYear
    1992
  • Firstpage
    658
  • Abstract
    An algorithm called divide and conquer neural networks, which creates a feedforward neural network during training based on the training examples, is described. In addition to learning the weights for connections, it learns an architecture that enables it to learn the examples. Training is done on the inputs to one cell at a time, with previously learned weights frozen; error is never propagated backwards through a hidden cell. Examples of the algorithm's performance on the exclusive-OR problem and the Iris plant data, which contain two nonlinearly separable classes, are given. The results show that this algorithm can effectively learn a viable architecture in which training examples may be encoded and generalized for later use in classification.
  • Keywords
    feedforward neural nets; learning (artificial intelligence); Iris plant data; divide and conquer; dynamic neural nets; encoding; exclusive-OR; feedforward neural network; nonlinearly separable classes; pattern classification; training; Backpropagation algorithms; Computer architecture; Computer science; Computer vision; Convergence; Detectors; Electronic mail; Feedforward neural networks; Iris; Neural networks
  • fLanguage
    English
  • Publisher
    ieee
  • Conference_Titel
    International Joint Conference on Neural Networks (IJCNN), 1992
  • Print_ISBN
    0-7803-0559-0
  • Type
    conf
  • DOI
    10.1109/IJCNN.1992.287112
  • Filename
    287112
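
The abstract above outlines a constructive scheme: cells are added one at a time, each cell's incoming weights are trained in isolation and then frozen, and error is never propagated back through a hidden cell. The following is a minimal, hypothetical Python sketch of that general idea only; the pocket-style perceptron rule, the tower-like growth loop, and all function names are assumptions made for illustration, not the published divide and conquer neural networks procedure.

import numpy as np

def train_cell(features, targets, epochs=100, lr=0.1):
    """Train one threshold cell on its own inputs (pocket-style perceptron)."""
    X = np.hstack([features, np.ones((len(features), 1))])  # append a bias input
    w = np.zeros(X.shape[1])
    best_w, best_acc = w.copy(), 0.0
    for _ in range(epochs):
        for xi, ti in zip(X, targets):
            out = float(xi @ w > 0)
            w += lr * (ti - out) * xi                        # local error only, never back-propagated
        acc = float(np.mean((X @ w > 0).astype(float) == targets))
        if acc > best_acc:                                   # keep the best end-of-epoch weights seen so far
            best_w, best_acc = w.copy(), acc
    return best_w

def cell_output(w, features):
    X = np.hstack([features, np.ones((len(features), 1))])
    return (X @ w > 0).astype(float)

def grow_network(X, y, max_cells=10):
    """Add frozen cells one at a time until an output cell fits y or the budget runs out."""
    features, frozen = X.copy(), []
    for _ in range(max_cells):
        w = train_cell(features, y)
        if np.array_equal(cell_output(w, features), y):
            return frozen, w                                 # frozen hidden cells plus the output cell
        frozen.append(w)                                     # freeze this cell; it is never retrained
        features = np.hstack([features, cell_output(w, features)[:, None]])
    return frozen, train_cell(features, y)

# Usage on the exclusive-OR data mentioned in the abstract.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)
hidden, w_out = grow_network(X, y)
feats = X.copy()
for w in hidden:
    feats = np.hstack([feats, cell_output(w, feats)[:, None]])
print("hidden cells added:", len(hidden))
print("training accuracy:", float(np.mean(cell_output(w_out, feats) == y)))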