DocumentCode :
3088842
Title :
Human-like action segmentation for option learning
Author :
Shim, Jaeeun ; Thomaz, Andrea L.
Author_Institution :
Dept. of Electr. & Comput. Engineering, Georgia Inst. of Technol., Atlanta, GA, USA
fYear :
2011
fDate :
July 31 2011-Aug. 3 2011
Firstpage :
455
Lastpage :
460
Abstract :
Robots that learn interactively with a human partner face several open questions, one of which is how to increase the efficiency of learning. One approach to this problem in the Reinforcement Learning domain is to use options, i.e., temporally extended actions, instead of primitive actions. In this paper, we aim to develop a robot system that can discover meaningful options from observations of human use of low-level primitive actions. Our approach is inspired by psychological findings about human action parsing, which posit that people attend to low-level statistical regularities to determine action boundaries. We implement a human-like action segmentation system for automatic option discovery, evaluate our approach, and show that option-based learning converges to the optimal solution faster than primitive-action-based learning.
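To illustrate the idea of segmenting by low-level statistical regularities, the following is a minimal sketch (not the authors' implementation; all names, the toy action labels, and the threshold are illustrative assumptions): transition probabilities between primitive actions are estimated from demonstrations, and a boundary is placed wherever a transition is statistically rare, so that each resulting segment is a candidate option.

```python
from collections import defaultdict

def transition_probabilities(sequences):
    """Estimate P(next action | current action) from demonstration sequences."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    probs = {}
    for a, successors in counts.items():
        total = sum(successors.values())
        probs[a] = {b: c / total for b, c in successors.items()}
    return probs

def segment(seq, probs, threshold=0.6):
    """Split a primitive-action sequence into candidate options.

    A boundary is placed wherever the observed transition is rarer than
    `threshold`; each resulting segment is a candidate temporally
    extended action (option)."""
    segments, current = [], [seq[0]]
    for a, b in zip(seq, seq[1:]):
        if probs.get(a, {}).get(b, 0.0) < threshold:
            segments.append(current)   # low-probability transition: close the segment
            current = []
        current.append(b)
    segments.append(current)
    return segments

# Toy usage: within-subtask transitions (reach -> grasp -> lift) are
# deterministic, while what follows "lift" or "place" varies, so those
# transitions fall below the threshold and mark option boundaries.
demos = [
    ["reach", "grasp", "lift", "place", "reach", "grasp", "lift", "stack"],
    ["reach", "grasp", "lift", "place", "wave"],
    ["reach", "grasp", "lift", "stack"],
]
probs = transition_probabilities(demos)
print(segment(demos[0], probs))
# [['reach', 'grasp', 'lift'], ['place'], ['reach', 'grasp', 'lift'], ['stack']]
```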
Keywords :
Markov processes; learning (artificial intelligence); robots; automatic option discovery; human action parsing; human-like action segmentation; low-level statistical regularity; option-based learning; reinforcement learning; robot system; Aggregates; Convergence; Data models; Hidden Markov models; Humans; Probability; Robots;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
RO-MAN, 2011 IEEE
Conference_Location :
Atlanta, GA
Print_ISBN :
978-1-4577-1571-6
Electronic_ISBN :
978-1-4577-1572-3
Type :
conf
DOI :
10.1109/ROMAN.2011.6005277
Filename :
6005277