DocumentCode :
250729
Title :
Reinforcement learning with multi-fidelity simulators
Author :
Cutler, Mark ; Walsh, Thomas J. ; How, Jonathan P.
Author_Institution :
Lab. of Inf. & Decision Syst., Massachusetts Inst. of Technol., Cambridge, MA, USA
fYear :
2014
fDate :
May 31 - June 7, 2014
Firstpage :
3888
Lastpage :
3895
Abstract :
We present a framework for reinforcement learning (RL) in a scenario where multiple simulators are available with decreasing fidelity to the real-world learning scenario. The framework limits the number of samples used in each successively higher-fidelity (and higher-cost) simulator by allowing the agent to run trajectories at the lowest level that still provides it with information. The approach transfers state-action Q-values from lower-fidelity models as heuristics for the “Knows What It Knows” family of RL algorithms, which are applicable over a wide range of possible dynamics and reward representations. Theoretical proofs of the framework's sample complexity are given, and empirical results are demonstrated on a remote-controlled car with multiple simulators. The approach allows RL algorithms to find near-optimal policies for the real world with fewer expensive real-world samples than previous transfer approaches or learning without simulators.
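The core idea in the abstract, carrying Q-values learned in a cheap low-fidelity simulator forward as an optimistic heuristic initialization for learning at a more expensive fidelity level, can be sketched as follows. This is an illustrative toy (a 1-D track where the high-fidelity model adds wheel slip), not the authors' implementation; the functions, dynamics, and parameters are all assumptions for demonstration.

```python
import random

def q_learning(step, states, actions, q_init, episodes=200, alpha=0.5, gamma=0.9):
    """Tabular Q-learning; q_init supplies the (optimistic) heuristic initialization."""
    q = {(s, a): q_init(s, a) for s in states for a in actions}
    for _ in range(episodes):
        s = states[0]
        for _ in range(30):
            a = max(actions, key=lambda a: q[(s, a)])  # greedy; optimism drives exploration
            s2, r = step(s, a)
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in actions) - q[(s, a)])
            s = s2
    return q

# Toy 1-D "track" with the goal at state 4.
states = list(range(5))
actions = [-1, +1]

def lo_fi_step(s, a):
    # Low-fidelity model: deterministic, ignores slip. Cheap to sample.
    s2 = min(max(s + a, 0), 4)
    return s2, (1.0 if s2 == 4 else -0.1)

def hi_fi_step(s, a):
    # Higher-fidelity model: 20% chance the wheels slip and the action has no effect.
    if random.random() < 0.2:
        a = 0
    s2 = min(max(s + a, 0), 4)
    return s2, (1.0 if s2 == 4 else -0.1)

random.seed(0)
# Learn at the low-fidelity level with a generic optimistic prior.
q_lo = q_learning(lo_fi_step, states, actions, q_init=lambda s, a: 1.0)
# Transfer: low-fidelity values become the heuristic at the next level,
# so fewer expensive high-fidelity samples are needed.
q_hi = q_learning(hi_fi_step, states, actions, q_init=lambda s, a: q_lo[(s, a)])
```

The paper's framework does this with KWIK learners and formal sample-complexity guarantees, and also decides *when* to move between fidelity levels; the sketch above only shows the value-transfer step.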
Keywords :
automobiles; control engineering computing; learning (artificial intelligence); mobile robots; telerobotics; trajectory control; RL algorithms; multifidelity simulators; reinforcement learning; remote controlled car; robotic control algorithm; state-action Q-values; trajectory level; Complexity theory; Data models; Heuristic algorithms; Learning (artificial intelligence); Optimization; Polynomials
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
2014 IEEE International Conference on Robotics and Automation (ICRA)
Conference_Location :
Hong Kong
Type :
conf
DOI :
10.1109/ICRA.2014.6907423
Filename :
6907423