DocumentCode :
1549666
Title :
Model-Free Reinforcement Learning of Impedance Control in Stochastic Environments
Author :
Stulp, Freek ; Buchli, Jonas ; Ellmer, Alice ; Mistry, Michael ; Theodorou, Evangelos A. ; Schaal, Stefan
Author_Institution :
Comput. Learning & Motor Control Lab., Univ. of Southern California, Los Angeles, CA, USA
Volume :
4
Issue :
4
fYear :
2012
Firstpage :
330
Lastpage :
341
Abstract :
For humans and robots, variable impedance control is an essential component for ensuring robust and safe physical interaction with the environment. Humans learn to adapt their impedance to specific tasks and environments, a capability we continue to develop and refine until well into our twenties. In this article, we reproduce functionally interesting aspects of learning impedance control in humans on a simulated robot platform. As demonstrated in numerous force field tasks, humans combine two strategies to adapt their impedance to perturbations, thereby minimizing position error and energy consumption: 1) if perturbations are unpredictable, subjects increase their impedance through cocontraction; and 2) if perturbations are predictable, subjects learn a feed-forward command to offset the perturbation. We show how a 7-DOF simulated robot demonstrates similar behavior with our model-free reinforcement learning algorithm PI2, by applying deterministic and stochastic force fields to the robot's end-effector. We show the qualitative similarity between the robot and human movements. Our results provide a biologically plausible approach to learning appropriate impedances purely from experience, without requiring a model of either body or environment dynamics. Not requiring models also facilitates autonomous development for robots, as prespecified models cannot be provided for each environment a robot might encounter.
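The abstract's core idea, learning control parameters purely from rollout costs with no dynamics model, can be illustrated with a minimal PI2-style update. This is a hypothetical sketch, not the authors' implementation: `pi2_update`, the toy cost, and all parameter values are assumptions chosen for illustration. Each iteration samples noisy parameter vectors, evaluates their rollout costs, and averages the exploration noise with softmax (path-integral) weights that favor low-cost rollouts.

```python
import numpy as np

def pi2_update(theta, cost_fn, n_rollouts=10, noise_std=0.1, h=10.0, rng=None):
    """One PI2-style update: explore, evaluate costs, reweight noise.

    Hypothetical minimal sketch; parameter names and values are illustrative.
    """
    rng = np.random.default_rng(rng)
    # Sample exploration noise for each rollout.
    eps = rng.normal(0.0, noise_std, size=(n_rollouts, theta.size))
    costs = np.array([cost_fn(theta + e) for e in eps])
    # Normalize costs to [0, 1], then exponentiate negatively:
    # low-cost rollouts receive high weight.
    s = (costs - costs.min()) / (costs.max() - costs.min() + 1e-10)
    w = np.exp(-h * s)
    w /= w.sum()
    # Parameter update is the weighted average of the exploration noise.
    return theta + w @ eps

# Toy cost, loosely analogous to the paper's objective: penalize
# deviation from a target parameter vector (position error) plus a
# small quadratic term (energy consumption). Both are assumptions.
target = np.array([2.0, -1.0])
cost = lambda th: np.sum((th - target) ** 2) + 0.01 * np.sum(th ** 2)

rng = np.random.default_rng(0)
theta = np.zeros(2)
for _ in range(200):
    theta = pi2_update(theta, cost, rng=rng)
```

Note that the update uses only sampled costs, never a model of the dynamics, which is what makes the approach model-free in the sense the abstract describes.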
Keywords :
end effectors; energy consumption; human-robot interaction; learning (artificial intelligence); stochastic systems; 7DOF simulated robot platform; biologically plausible approach; cocontraction; deterministic force fields; energy consumption minimization; force field tasks; learning impedance control; model-free reinforcement learning algorithm; physical interaction; position error minimization; robot autonomous development; robot end-effector; stochastic environments; stochastic force fields; variable impedance control; Biological system modeling; Impedance; Learning; Robots; Robustness; Stochastic processes; Force field experiments; motion primitives; motor system and development; reinforcement learning; robots with development and learning skills; using robots to study development and learning; variable impedance control;
fLanguage :
English
Journal_Title :
IEEE Transactions on Autonomous Mental Development
Publisher :
IEEE
ISSN :
1943-0604
Type :
jour
DOI :
10.1109/TAMD.2012.2205924
Filename :
6227337