• DocumentCode
    288537
  • Title
    Dynamic concept model learns optimal policies
  • Author
    Szepesvári, Cs
  • Author_Institution
    Dept. of Math., Jozsef Attila Univ., Szeged, Hungary
  • Volume
    3
  • fYear
    1994
  • fDate
    27 Jun-2 Jul 1994
  • Firstpage
    1738
  • Abstract
    The dynamic concept model (DCM) is a goal-oriented neural controller that builds an internal representation of events and chains of events in the form of a directed graph and uses spreading activation for decision making. It is shown that a special case of DCM is equivalent to reinforcement learning (RL) and is capable of learning the optimal policy in a probabilistic world. The memory and computational requirements of both DCM and RL are analyzed, and a special algorithm is introduced that ensures intentional behavior.
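    A minimal sketch (Python) of the mechanism the abstract describes, not the paper's implementation: activation spreads backward from a goal node through a directed event graph, and the controller decides by stepping toward the most activated successor. The toy graph, decay factor, and function names below are illustrative assumptions; the decayed backup mirrors a Bellman/value-iteration update, which is the sense in which a special case of DCM matches RL.

      # Illustrative sketch of spreading activation over a directed event
      # graph, in the spirit of DCM's decision making (an assumption, not
      # the paper's algorithm). Activation spreads backward from the goal;
      # the agent then moves greedily toward the most activated successor.

      def spread_activation(graph, goal, decay=0.9, iters=50):
          """Propagate activation from `goal` backward through a directed
          graph given as {node: [successor, ...]}."""
          act = {node: 0.0 for node in graph}
          act[goal] = 1.0
          for _ in range(iters):
              for node, succs in graph.items():
                  if node == goal:
                      continue
                  # Decayed copy of the best successor's activation; this is
                  # analogous to a Bellman backup with reward only at the goal.
                  act[node] = decay * max((act[s] for s in succs), default=0.0)
          return act

      def choose_next(graph, state, act):
          """Greedy decision making: move to the most activated successor."""
          return max(graph[state], key=lambda s: act[s])

      # Toy event graph (an assumption for illustration).
      graph = {"start": ["a", "b"], "a": ["goal"], "b": ["a"], "goal": []}
      act = spread_activation(graph, "goal")
      print(choose_next(graph, "start", act))  # -> 'a', the shorter route to the goal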
  • Keywords
    Markov processes; decision theory; directed graphs; learning (artificial intelligence); neurocontrollers; computational requirements; decision making; directed graph; dynamic concept model; goal-oriented neural controller; intentional behavior; internal representation; optimal policies; probabilistic world; spreading activation; Algorithm design and analysis; Artificial neural networks; Brain modeling; Control systems; Cost function; Decision making; Explosions; Learning; Mathematics; Problem-solving
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Titel
    1994 IEEE International Conference on Neural Networks (IEEE World Congress on Computational Intelligence)
  • Conference_Location
    Orlando, FL
  • Print_ISBN
    0-7803-1901-X
  • Type
    conf
  • DOI
    10.1109/ICNN.1994.374418
  • Filename
    374418