Title :
On learning with imperfect representations
Author :
Kalyanakrishnan, Shivaram; Stone, Peter
Author_Institution :
Dept. of Comput. Sci., Univ. of Texas at Austin, Austin, TX, USA
Abstract :
In this paper, we present a perspective on the relationship between learning and representation in sequential decision-making tasks. We undertake a brief survey of existing real-world applications, which demonstrates that the classical “tabular” representation seldom applies in practice. Specifically, several practical tasks suffer from state aliasing, and most demand some form of generalization and function approximation. Coping with these representational aspects thus becomes an important direction for advancing the adoption of reinforcement learning in practice. The central thesis of this position paper is that, in practice, learning methods developed specifically to work with imperfect representations are likely to perform better than those developed for perfect representations and then applied in imperfect-representation settings. We specify an evaluation criterion for learning methods in practice and propose a framework for their synthesis. In particular, we highlight the degrees of “representational bias” prevalent in different learning methods. We reference a variety of relevant literature as background for this introspective essay.
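Illustrative sketch (not from the paper): the abstract contrasts the classical tabular representation with function approximation and names state aliasing as a practical concern. The minimal Python sketch below uses a hypothetical toy problem (all states, features, and constants are assumed for illustration only) to show how a tabular Q-table keeps one value per state-action pair, while a coarse linear feature map both generalizes across states and aliases them.

import numpy as np

n_states, n_actions = 6, 2

# Tabular representation: one independent value per (state, action) pair.
q_table = np.zeros((n_states, n_actions))

# Function approximation: values are a linear function of a feature vector.
# This coarse feature map groups states in pairs, so e.g. states 0 and 1 are
# aliased -- the learner cannot assign them different values.
def features(state, action):
    phi = np.zeros(3 * n_actions)
    phi[(state // 2) * n_actions + action] = 1.0
    return phi

weights = np.zeros(3 * n_actions)

def q_approx(state, action):
    return float(weights @ features(state, action))

# One Q-learning-style update under each representation for the same
# hypothetical transition (s, a, r, s_next).
alpha, gamma = 0.1, 0.95
s, a, r, s_next = 0, 1, 1.0, 3

# Tabular update: touches only the visited entry.
td_target = r + gamma * q_table[s_next].max()
q_table[s, a] += alpha * (td_target - q_table[s, a])

# Approximate update: shifts shared weights, so the aliased state 1 changes
# value as well -- generalization and aliasing are two sides of the same
# representational choice.
td_target = r + gamma * max(q_approx(s_next, b) for b in range(n_actions))
weights += alpha * (td_target - q_approx(s, a)) * features(s, a)

print("tabular Q(0,1):", q_table[0, 1], " Q(1,1):", q_table[1, 1])
print("approx  Q(0,1):", q_approx(0, 1), " Q(1,1):", q_approx(1, 1))

Running the sketch, the tabular update changes only the visited entry, whereas the approximate update also moves the value of the aliased state: the generalization the abstract says most practical tasks demand, and the aliasing several of them suffer from.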
Keywords :
decision making; function approximation; learning (artificial intelligence); imperfect representation setting; reinforcement learning; representational bias degree; sequential decision making tasks; state aliasing; Approximation algorithms; Computational modeling; Decision making; Function approximation; Learning systems; Least squares approximation
Conference_Title :
2011 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning (ADPRL)
Conference_Location :
Paris, France
Print_ISBN :
978-1-4244-9887-1
DOI :
10.1109/ADPRL.2011.5967379