Title :
Exception-based reinforcement learning
Author_Institution :
Department of Computer Science (Département Informatique), IRISA/INSA, Rennes, France
Abstract :
In this paper we develop a method that uses temporally abstract actions to solve Markov decision processes. The basic idea is to define procedures that control the agent's behavior. Each procedure contains a rule constraining the actions the agent may choose; the rule is applied unless certain conditions, which we call exceptions, are fulfilled, in which case the constraints on actions are relaxed. We also develop a way to propagate states that have triggered an exception to a rule, helping the agent escape from blocked situations or locally optimal solutions. We illustrate the method on the "Sokoban" game and compare it empirically with flat Q-learning. On the proposed tests, learning time is drastically reduced, as is the memory required to store the Q-values.
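Illustrative_Sketch :
The abstract describes rule-constrained action selection with exception-triggered relaxation layered on tabular Q-learning. The following is a minimal Python sketch of that idea only; CorridorEnv, allowed_actions, rule, and exceptions are illustrative names assumed here, not the authors' implementation, and the paper's Sokoban domain and exception-state propagation are not reproduced.

import random
from collections import defaultdict

ACTIONS = ["left", "right"]

class CorridorEnv:
    """Toy 1-D corridor (an assumption for this sketch): start at 0, goal at 5."""
    def reset(self):
        return 0
    def step(self, state, action):
        nxt = max(0, state + (1 if action == "right" else -1))
        done = (nxt == 5)
        return nxt, (1.0 if done else -0.01), done

def allowed_actions(state, rule, exceptions):
    # The rule constrains the action set; if any exception condition
    # holds in this state, the constraint is relaxed to all actions.
    if any(cond(state) for cond in exceptions):
        return list(ACTIONS)
    return rule(state)

def train(episodes=200, alpha=0.1, gamma=0.95, epsilon=0.1):
    env = CorridorEnv()
    Q = defaultdict(float)
    rule = lambda s: ["right"]        # rule: always move right...
    exceptions = [lambda s: s == 3]   # ...except at state 3 (illustrative)
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            acts = allowed_actions(s, rule, exceptions)
            # Epsilon-greedy choice restricted to the allowed actions
            if random.random() < epsilon:
                a = random.choice(acts)
            else:
                a = max(acts, key=lambda act: Q[(s, act)])
            s2, r, done = env.step(s, a)
            # Standard Q-learning backup over the full action set
            Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS)
                                  - Q[(s, a)])
            s = s2
    return Q

Calling train() returns a Q-table learned under the constrained policy; because the rule prunes the action set in most states, far fewer state-action pairs are explored than in flat Q-learning, which is consistent with the reductions in learning time and memory reported in the abstract.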
Keywords :
Markov processes; exception handling; game theory; learning (artificial intelligence); Markov decision processes; Q-values; Sokoban game; agent escape; agent's behavior control procedure; blocked situations; exception-based reinforcement learning; flat Q-learning; learning time reduction; locally optimal solutions; rule; temporally abstract actions; Learning; Programming profession; Testing
Conference_Title :
IECON '01: The 27th Annual Conference of the IEEE Industrial Electronics Society, 2001
Conference_Location :
Denver, CO, USA
Print_ISBN :
0-7803-7108-9
DOI :
10.1109/IECON.2001.975612