DocumentCode :
393478
Title :
Reinforcement learning with expectation and action augmented states in partially observable environment
Author :
Guirnaldo, S.A. ; Watanabe, K. ; Izumi, K. ; Kiguchi, K.
Author_Institution :
Graduate Sch. of Sci. & Eng., Saga Univ., Japan
Volume :
2
fYear :
2002
fDate :
5-7 Aug. 2002
Firstpage :
823
Abstract :
The problem of developing good or optimal policies for partially observable Markov decision processes (POMDP) remains one of the most alluring areas of research in artificial intelligence. Encouraged by the way we (humans) form expectations from past experiences and by how our decisions and behaviour are affected by those expectations, this paper proposes a method called expectation and action augmented states (EAAS) in reinforcement learning, aimed at discovering good or near-optimal policies in partially observable environments. The method uses the concept of expectation to distinguish between aliased states. It works by augmenting the agent's observation with its expectation of that observation. Two problems from the literature were used to test the proposed method. The results show promising characteristics of the method compared to some methods currently used in this domain.
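The core idea in the abstract — augmenting a raw observation with an expectation of it so that aliased states become distinguishable — can be illustrated with a minimal tabular Q-learning sketch. This is not the authors' EAAS algorithm: the environment (a 5-cell corridor with two aliased cells), the hyperparameters, and the "expectation" (here crudely stood in for by the previous observation) are all illustrative assumptions.

```python
import random

# Illustrative 5-cell corridor: the goal is the middle cell, and the two
# cells flanking it both emit the same observation 'B' (aliased states).
# An agent conditioning only on the raw observation cannot tell them apart;
# augmenting the observation with context (here, the previous observation,
# a crude stand-in for a learned expectation) disambiguates them.
OBS = ['A', 'B', 'G', 'B', 'C']   # cells 1 and 3 are aliased as 'B'
GOAL = 2
ACTIONS = (-1, +1)                # move left / move right

def step(pos, action):
    new_pos = min(max(pos + action, 0), 4)
    reward = 1.0 if new_pos == GOAL else 0.0
    return new_pos, reward, new_pos == GOAL

def train(episodes=3000, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = {}                                  # keys: (augmented_state, action)
    for ep in range(episodes):
        pos = 0 if ep % 2 == 0 else 4       # start from either end
        prev_obs = None                     # no context formed yet
        for _ in range(20):
            s = (OBS[pos], prev_obs)        # augmented state
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q.get((s, x), 0.0))
            new_pos, r, done = step(pos, a)
            s2 = (OBS[new_pos], OBS[pos])
            best_next = 0.0 if done else max(q.get((s2, b), 0.0) for b in ACTIONS)
            old = q.get((s, a), 0.0)
            q[(s, a)] = old + alpha * (r + gamma * best_next - old)
            prev_obs, pos = OBS[pos], new_pos
            if done:
                break
    return q

q = train()
greedy = lambda s: max(ACTIONS, key=lambda a: q.get((s, a), 0.0))
print(greedy(('B', 'A')), greedy(('B', 'C')))
```

The two aliased cells end up with different greedy actions: augmented state ('B', 'A') learns to move right toward the goal, while ('B', 'C') learns to move left — exactly the distinction a policy over raw observations alone cannot express.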
Keywords :
Markov processes; decision theory; learning (artificial intelligence); artificial intelligence; expectation and action augmented states; partially observable Markov decision processes; reinforcement learning; Artificial intelligence; Control engineering; Humans; Learning systems; Observability; Robot sensing systems; State estimation; Stochastic processes; Systems engineering and theory; Testing;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
SICE 2002. Proceedings of the 41st SICE Annual Conference
Print_ISBN :
0-7803-7631-5
Type :
conf
DOI :
10.1109/SICE.2002.1195264
Filename :
1195264