• DocumentCode
    286755
  • Title
    On recurrent neural networks and representing finite-state recognizers
  • Author
    Goudreau, M.W.; Giles, C.L.
  • Author_Institution
    Princeton Univ., NJ, USA
  • fYear
    1993
  • fDate
    25-27 May 1993
  • Firstpage
    51
  • Lastpage
    55
  • Abstract
    A discussion of the representational abilities of single-layer recurrent neural networks (SLRNNs) is presented. The fact that SLRNNs cannot implement all finite-state recognizers is addressed. However, there are methods that can be used to expand the representational abilities of SLRNNs, and some of these are explained. The authors call such systems augmented SLRNNs. Some possibilities for augmented SLRNNs are: adding a layer of feedforward neurons to the SLRNN, allowing the SLRNN an extra time step to calculate the solution, and increasing the order of the SLRNN. Significantly, for some problems, an augmented SLRNN must actually implement a non-minimal finite-state recognizer that is equivalent to the desired finite-state recognizer. Simulations demonstrate the use of both an SLRNN and an augmented SLRNN for the problem of learning an odd-parity finite-state recognizer with a gradient-descent method.
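    As a concrete, hedged illustration of the first augmentation listed in the abstract (a feedforward output layer added on top of the recurrent state layer), the following minimal NumPy sketch trains an Elman-style network with full backpropagation through time on the odd-parity task. It is not the authors' implementation: the hidden size, learning rate, sequence lengths, and training schedule are all illustrative assumptions, and gradient-descent learning of parity recognizers is known to be sensitive to such choices.

        import numpy as np

        rng = np.random.default_rng(0)
        H = 4            # number of recurrent state units (assumed)
        lr = 0.3         # learning rate (assumed)

        # Recurrent (state) layer parameters.
        W = rng.normal(0.0, 0.5, (H, H))   # state-to-state weights
        U = rng.normal(0.0, 0.5, H)        # input-to-state weights
        b = np.zeros(H)
        # Feedforward output layer -- the "augmentation" on top of the SLRNN.
        V = rng.normal(0.0, 0.5, H)
        c = 0.0

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        for step in range(5000):
            T = int(rng.integers(2, 12))
            x = rng.integers(0, 2, T).astype(float)
            target = np.cumsum(x) % 2          # odd parity of each prefix

            # Forward pass, keeping states for backpropagation through time.
            hs = [np.zeros(H)]
            ps = []
            for t in range(T):
                hs.append(np.tanh(W @ hs[-1] + U * x[t] + b))
                ps.append(sigmoid(V @ hs[-1] + c))

            # Full BPTT with a per-step cross-entropy loss.
            dW = np.zeros_like(W); dU = np.zeros_like(U); db = np.zeros_like(b)
            dV = np.zeros_like(V); dc = 0.0
            dh_next = np.zeros(H)
            for t in reversed(range(T)):
                dy = ps[t] - target[t]             # d loss / d output pre-activation
                dV += dy * hs[t + 1]
                dc += dy
                dz = (dy * V + dh_next) * (1.0 - hs[t + 1] ** 2)  # through tanh
                dW += np.outer(dz, hs[t])
                dU += dz * x[t]
                db += dz
                dh_next = W.T @ dz

            # Plain gradient-descent update, averaged over the sequence length.
            W -= lr * dW / T; U -= lr * dU / T; b -= lr * db / T
            V -= lr * dV / T; c -= lr * dc / T

        # Check the trained recognizer on a fresh string.
        x = rng.integers(0, 2, 20).astype(float)
        h = np.zeros(H)
        preds = []
        for xt in x:
            h = np.tanh(W @ h + U * xt + b)
            preds.append(int(sigmoid(V @ h + c) > 0.5))
        print("accuracy:", np.mean(np.array(preds) == (np.cumsum(x) % 2)))

    Reading the output directly from a state unit (dropping V and c) would recover a bare SLRNN; loosely, the paper's argument is that such an unaugmented network cannot realize recognizers like odd parity exactly, which is what motivates the extra feedforward layer in this sketch.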
  • Keywords
    learning (artificial intelligence); pattern recognition; recurrent neural nets; feedforward neurons; finite-state recognizers; gradient descent method; learning; single layer recurrent neural networks
  • fLanguage
    English
  • Publisher
    IET
  • Conference_Title
    Third International Conference on Artificial Neural Networks, 1993
  • Conference_Location
    Brighton
  • Print_ISBN
    0-85296-573-7
  • Type
    conf
  • Filename
    263258